2015-07-16 17:23:02

by Tom Zanussi

Subject: [PATCH v9 00/22] tracing: 'hist' triggers

This is v9 of the 'hist triggers' patchset.

Changes from v8:

Same as v8, but with the RFC patch [ftrace: Add function_hist tracer]
removed, and rebased to latest trace/for-next.

Changes from v7:

This version refactors the commits as suggested by Masami. There are
now more commits, but the result should be much more reviewable. The
resulting code is the same as before, modulo a couple of minor bug
fixes I discovered while refactoring and testing.

I've also reviewed and fixed a number of shortcomings and errors in
the comments, and have added a new discussion of the tracing_map data
structures after Steve mentioned he found them confusing and/or
insufficiently documented.

Also, I kept Namhyung's string patch [tracing: Support string type key
properly] as submitted, but added a follow-on patch that refactors it
and fixes a problem I found that allowed static string keys to contain
random characters and therefore cause incorrect map insertions.

Changes from v6:

This version adds a new 'sym-offset' modifier as requested by Masami.
I implemented it as a modifier rather than using the trace option as
suggested, in part because I wanted to keep it all self-contained and
it seemed more consistent to just add it alongside the 'sym' modifier.
Also, hist triggers aren't really a tracer and therefore don't
directly tie into the option update/callback mechanism, so making use
of it isn't as simple as it would be for a normal tracer.
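
For illustration, using the new modifier might look something like
this (the kmem/kmalloc event and its call_site field serve only as a
hypothetical example here):

# echo 'hist:keys=call_site.sym-offset:vals=bytes_req' > \
/sys/kernel/debug/tracing/events/kmem/kmalloc/trigger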

I also changed the sort key specification to be stricter and to
signal an error if the specified sort key isn't found (rather than
defaulting to hitcount in those cases - see the example below), as
also suggested by Masami. Thanks, Masami, for your input!
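
For illustration, specifying a nonexistent sort key should now fail
when the trigger is written, along these lines (the exact error
reported is an assumption here):

# echo 'hist:keys=id:sort=bogus_field' > \
/sys/kernel/debug/tracing/events/raw_syscalls/sys_enter/trigger
-su: echo: write error: Invalid argument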

Also updated the Documentation and tracing/README to reflect the
changes.

Changes from v5:

This version adds support for compound keys, along with the related
ability to sort using primary and secondary keys. This was mentioned
in previous versions as the last important piece that remained
unimplemented, and is now implemented. (I didn't have time to get to
the couple of enhancements suggested by Masami, but I expect to be
able to add those later on top of these.)

Because we now support compound keys, and it's not immediately clear
in the output exactly which fields correspond to keys, the key(s),
compound or not, are now enclosed in curly braces, as in the sample
line below.
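
For illustration, an output line with the bracketed key might look
something like this (the values here are hypothetical):

{ common_pid:bash[3112], id:sys_write } vals: count:69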

The Documentation and README have been updated to reflect the changes,
and several new examples have been added to illustrate how to use
compound keys.

Also, the code was updated to work with the ftrace_event_file etc.
renaming in tracing/for-next.

Changes from v4:

This version addresses some problems and suggestions from Daniel
Wagner - a lot of the code was reworked to get rid of the distinction
between keys and values, and as a result, both keys and values can be
used as sort keys. As suggested, it also allows 'val=' to be absent
from a trigger command - if no 'val' is specified, hitcount is assumed
and automatically used as the only val, as in the example below.
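
For example, with something like the following (again a hypothetical
event), hitcount would be assumed and used as the only val:

# echo 'hist:keys=call_site' > \
/sys/kernel/debug/tracing/events/kmem/kmalloc/trigger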

The map code was also split out into a separate file, tracing_map.c,
allowing it to be reused. This version also adds a second tracer,
function_hist, that actually does reuse the code, as an RFC patch.

Patch 01/10 [tracing: Update cond flag when enabling or disabling..]
fixes a problem in the existing trigger code noticed by Daniel and
should be applied regardless of whether the rest of the patchset is
merged.

As mentioned, patch 10/10 is an RFC patch implementing a new tracer
based on the function tracer code. It's a fun little tool, useful for
a specific problem I'm working on (and also a nice test of the
tracing_map code), but it's an RFC for two reasons: first, I'm not
sure it would really be of general interest, and second, it's
POC-level quality - I'd need to spend more time fixing it up to make
it upstreamable, and I don't want to waste that time if there's no
interest.

There are a couple of important bits of functionality that were
present in v1 but not yet reimplemented in v5.

The first is support for compound keys. Currently, maps can only be
keyed on a single event field, whereas in v1 they could be keyed on
multiple fields. With support for compound keys, you can create much
more interesting output, such as the per-pid syscall counts below:

# echo 'hist:keys=common_pid.execname,id.syscall:vals=hitcount' > \
/sys/kernel/debug/tracing/events/raw_syscalls/sys_enter/trigger

# cat /sys/kernel/debug/tracing/events/raw_syscalls/sys_enter/hist

key: common_pid:bash[3112], id:sys_write vals: count:69
key: common_pid:bash[3112], id:sys_rt_sigprocmask vals: count:218

key: common_pid:update-notifier[3164], id:sys_poll vals: count:37
key: common_pid:update-notifier[3164], id:sys_recvfrom vals: count:118

key: common_pid:deja-dup-monito[3194], id:sys_sendto vals: count:1
key: common_pid:deja-dup-monito[3194], id:sys_read vals: count:4
key: common_pid:deja-dup-monito[3194], id:sys_poll vals: count:8
key: common_pid:deja-dup-monito[3194], id:sys_recvmsg vals: count:8
key: common_pid:deja-dup-monito[3194], id:sys_getegid vals: count:8

key: common_pid:emacs[3275], id:sys_fsync vals: count:1
key: common_pid:emacs[3275], id:sys_open vals: count:1
key: common_pid:emacs[3275], id:sys_symlink vals: count:2
key: common_pid:emacs[3275], id:sys_poll vals: count:23
key: common_pid:emacs[3275], id:sys_select vals: count:23
key: common_pid:emacs[3275], id:unknown_syscall vals: count:34
key: common_pid:emacs[3275], id:sys_ioctl vals: count:60
key: common_pid:emacs[3275], id:sys_rt_sigprocmask vals: count:116

key: common_pid:cat[3323], id:sys_munmap vals: count:1
key: common_pid:cat[3323], id:sys_fadvise64 vals: count:1

Related to that is support for sorting on multiple fields. Currently,
you can sort using only a primary key. Being able to sort on multiple
keys, or at least a secondary key, is indispensable for seeing trends
when displaying multiple values.
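
For reference, once the primary/secondary sort support described under
'Changes from v5' above was added, a multi-key sort might look
something like this, reusing the compound-key example above:

# echo 'hist:keys=common_pid.execname,id.syscall:vals=hitcount:sort=common_pid,hitcount' > \
/sys/kernel/debug/tracing/events/raw_syscalls/sys_enter/trigger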

Changes from v3:

v4 fixes the race in tracing_map_insert() noted in v3, where
map.val.key could be checked even if map.val wasn't yet set. The
simple fix for that in tracing_map_insert() introduces the possibility
of duplicates in the map, which though rare, need to be accounted for
in the output. To address that, duplicate-merging code was added to
the map-printing code.

It was also pointed out that it didn't seem correct to include
module.h, but the fix for that has deeper roots and is being addressed
by a separate patchset; for now we need to continue including
module.h, though prompted by that I did some other header include
cleanup.

The functionality remains the same as v2, but this version no longer
tries to export and use bpf_maps, and more importantly removes the
associated GFP_NOTRACE/trace event hacks and kmem macros required to
work around the bpf_map implementation.

The tracing_map functionality is instead built on top of a simple
lock-free map algorithm originated by Dr. Cliff Click (see references
in the code for more details) which, though too restrictive to be
general-purpose in its current form, functions nicely as a
special-purpose tracing map.

v3 also moves the hist triggers code into a separate file and puts it
all behind a new config option, CONFIG_HIST_TRIGGERS. It also merges
in the sorting code rather than keeping it as a separate patch.

This patchset also includes a couple of other new and related
triggers, enable_hist and disable_hist, very similar to the existing
enable_event/disable_event triggers used to automatically enable and
disable events based on a triggering condition, but in this case
allowing hist triggers to be enabled and disabled in the same way (see
the example below).
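
For illustration, enabling one event's hist trigger from another
event might look something like this (the particular events here are
hypothetical, mirroring the enable_event syntax):

# echo 'enable_hist:sched:sched_switch' > \
/sys/kernel/debug/tracing/events/sched/sched_wakeup/trigger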

- Added an insert check for val before checking the key associated with val
- Added code to merge possible duplicates in the map

Changes from v2:
- reimplemented tracing_map, replacing bpf_map with nmi-safe/lock-free map
- removed GFP_NOTRACE, kmalloc/free macros and event hacks needed by bpf_maps
- moved hist triggers from trace_events_trigger.c to trace_events_hist.c
- added CONFIG_HIST_TRIGGERS config option
- consolidated sorting code with main patch

Changes from v1:
- completely rewritten on top of tracing_map (renamed and exported bpf_map)
- added map clearing and client ops to tracing_map
- changed the name from 'hash' triggers to 'hist' triggers
- added new trigger 'pause' feature
- added new enable_hist and disable_hist triggers
- added usage for hist/enable_hist/disable_hist to tracing/README
- moved examples into Documentation/trace/event.txt
- added ___GFP_NOTRACE, kmalloc/kfree macros, and conditional kmem tracepoints

The following changes since commit b44754d8262d3aab842998cf747f44fe6090be9f:

ring_buffer: Allow to exit the ring buffer benchmark immediately (2015-06-15 12:03:12 -0400)

are available in the git repository at:

git://git.yoctoproject.org/linux-yocto-contrib.git tzanussi/hist-triggers-v9
http://git.yoctoproject.org/cgit/cgit.cgi/linux-yocto-contrib/log/?h=tzanussi/hist-triggers-v9

Namhyung Kim (1):
tracing: Support string type key properly

Tom Zanussi (21):
tracing: Update cond flag when enabling or disabling a trigger
tracing: Make ftrace_event_field checking functions available
tracing: Make event trigger functions available
tracing: Add event record param to trigger_ops.func()
tracing: Add get_syscall_name()
tracing: Add a per-event-trigger 'paused' field
tracing: Add lock-free tracing_map
tracing: Add 'hist' event trigger command
tracing: Add hist trigger support for multiple values ('vals=' param)
tracing: Add hist trigger support for compound keys
tracing: Add hist trigger support for user-defined sorting ('sort='
param)
tracing: Add hist trigger support for pausing and continuing a trace
tracing: Add hist trigger support for clearing a trace
tracing: Add hist trigger 'hex' modifier for displaying numeric fields
tracing: Add hist trigger 'sym' and 'sym-offset' modifiers
tracing: Add hist trigger 'execname' modifier
tracing: Add hist trigger 'syscall' modifier
tracing: Add hist trigger support for stacktraces as keys
tracing: Remove restriction on string position in hist trigger keys
tracing: Add enable_hist/disable_hist triggers
tracing: Add 'hist' trigger Documentation

Documentation/trace/events.txt | 1131 +++++++++++++++++++++++++++
include/linux/trace_events.h | 9 +-
kernel/trace/Kconfig | 14 +
kernel/trace/Makefile | 2 +
kernel/trace/trace.c | 66 ++
kernel/trace/trace.h | 77 +-
kernel/trace/trace_events.c | 4 +
kernel/trace/trace_events_filter.c | 12 -
kernel/trace/trace_events_hist.c | 1462 +++++++++++++++++++++++++++++++++++
kernel/trace/trace_events_trigger.c | 149 ++--
kernel/trace/trace_syscalls.c | 11 +
kernel/trace/tracing_map.c | 935 ++++++++++++++++++++++
kernel/trace/tracing_map.h | 258 +++++++
13 files changed, 4046 insertions(+), 84 deletions(-)
create mode 100644 kernel/trace/trace_events_hist.c
create mode 100644 kernel/trace/tracing_map.c
create mode 100644 kernel/trace/tracing_map.h

--
1.9.3


2015-07-16 17:23:08

by Tom Zanussi

Subject: [PATCH v9 01/22] tracing: Update cond flag when enabling or disabling a trigger

When a trigger is enabled, the cond flag should be set beforehand,
otherwise a trigger that's expecting to process a trace record
(e.g. one with post_trigger set) could be invoked without one.

Likewise a trigger's cond flag should be reset after it's disabled,
not before.

Signed-off-by: Tom Zanussi <[email protected]>
Signed-off-by: Daniel Wagner <[email protected]>
---
kernel/trace/trace_events_trigger.c | 10 ++++++----
1 file changed, 6 insertions(+), 4 deletions(-)

diff --git a/kernel/trace/trace_events_trigger.c b/kernel/trace/trace_events_trigger.c
index 42a4009..4d2f3cc 100644
--- a/kernel/trace/trace_events_trigger.c
+++ b/kernel/trace/trace_events_trigger.c
@@ -543,11 +543,12 @@ static int register_trigger(char *glob, struct event_trigger_ops *ops,
list_add_rcu(&data->list, &file->triggers);
ret++;

+ update_cond_flag(file);
if (trace_event_trigger_enable_disable(file, 1) < 0) {
list_del_rcu(&data->list);
+ update_cond_flag(file);
ret--;
}
- update_cond_flag(file);
out:
return ret;
}
@@ -575,8 +576,8 @@ static void unregister_trigger(char *glob, struct event_trigger_ops *ops,
if (data->cmd_ops->trigger_type == test->cmd_ops->trigger_type) {
unregistered = true;
list_del_rcu(&data->list);
- update_cond_flag(file);
trace_event_trigger_enable_disable(file, 0);
+ update_cond_flag(file);
break;
}
}
@@ -1319,11 +1320,12 @@ static int event_enable_register_trigger(char *glob,
list_add_rcu(&data->list, &file->triggers);
ret++;

+ update_cond_flag(file);
if (trace_event_trigger_enable_disable(file, 1) < 0) {
list_del_rcu(&data->list);
+ update_cond_flag(file);
ret--;
}
- update_cond_flag(file);
out:
return ret;
}
@@ -1344,8 +1346,8 @@ static void event_enable_unregister_trigger(char *glob,
(enable_data->file == test_enable_data->file)) {
unregistered = true;
list_del_rcu(&data->list);
- update_cond_flag(file);
trace_event_trigger_enable_disable(file, 0);
+ update_cond_flag(file);
break;
}
}
--
1.9.3

2015-07-16 17:28:55

by Tom Zanussi

Subject: [PATCH v9 02/22] tracing: Make ftrace_event_field checking functions available

Make is_string_field() and is_function_field() accessible outside of
trace_event_filters.c for other users of ftrace_event_fields.

Signed-off-by: Tom Zanussi <[email protected]>
---
kernel/trace/trace.h | 12 ++++++++++++
kernel/trace/trace_events_filter.c | 12 ------------
2 files changed, 12 insertions(+), 12 deletions(-)

diff --git a/kernel/trace/trace.h b/kernel/trace/trace.h
index 4c41fcd..891c5b0 100644
--- a/kernel/trace/trace.h
+++ b/kernel/trace/trace.h
@@ -1050,6 +1050,18 @@ struct filter_pred {
unsigned short right;
};

+static inline bool is_string_field(struct ftrace_event_field *field)
+{
+ return field->filter_type == FILTER_DYN_STRING ||
+ field->filter_type == FILTER_STATIC_STRING ||
+ field->filter_type == FILTER_PTR_STRING;
+}
+
+static inline bool is_function_field(struct ftrace_event_field *field)
+{
+ return field->filter_type == FILTER_TRACE_FN;
+}
+
extern enum regex_type
filter_parse_regex(char *buff, int len, char **search, int *not);
extern void print_event_filter(struct trace_event_file *file,
diff --git a/kernel/trace/trace_events_filter.c b/kernel/trace/trace_events_filter.c
index 71511eb..245ee5d 100644
--- a/kernel/trace/trace_events_filter.c
+++ b/kernel/trace/trace_events_filter.c
@@ -917,18 +917,6 @@ int filter_assign_type(const char *type)
return FILTER_OTHER;
}

-static bool is_function_field(struct ftrace_event_field *field)
-{
- return field->filter_type == FILTER_TRACE_FN;
-}
-
-static bool is_string_field(struct ftrace_event_field *field)
-{
- return field->filter_type == FILTER_DYN_STRING ||
- field->filter_type == FILTER_STATIC_STRING ||
- field->filter_type == FILTER_PTR_STRING;
-}
-
static int is_legal_op(struct ftrace_event_field *field, int op)
{
if (is_string_field(field) &&
--
1.9.3

2015-07-16 17:23:15

by Tom Zanussi

Subject: [PATCH v9 03/22] tracing: Make event trigger functions available

Make various event trigger utility functions available outside of
trace_events_trigger.c so that new triggers can be defined outside of
that file.

Signed-off-by: Tom Zanussi <[email protected]>
---
kernel/trace/trace.h | 14 ++++++++++++++
kernel/trace/trace_events_trigger.c | 28 +++++++++++++---------------
2 files changed, 27 insertions(+), 15 deletions(-)

diff --git a/kernel/trace/trace.h b/kernel/trace/trace.h
index 891c5b0..4ff33b7 100644
--- a/kernel/trace/trace.h
+++ b/kernel/trace/trace.h
@@ -1113,6 +1113,20 @@ struct event_trigger_data {
struct list_head list;
};

+extern void trigger_data_free(struct event_trigger_data *data);
+extern int event_trigger_init(struct event_trigger_ops *ops,
+ struct event_trigger_data *data);
+extern int trace_event_trigger_enable_disable(struct trace_event_file *file,
+ int trigger_enable);
+extern void update_cond_flag(struct trace_event_file *file);
+extern void unregister_trigger(char *glob, struct event_trigger_ops *ops,
+ struct event_trigger_data *test,
+ struct trace_event_file *file);
+extern int set_trigger_filter(char *filter_str,
+ struct event_trigger_data *trigger_data,
+ struct trace_event_file *file);
+extern int register_event_command(struct event_command *cmd);
+
/**
* struct event_trigger_ops - callbacks for trace event triggers
*
diff --git a/kernel/trace/trace_events_trigger.c b/kernel/trace/trace_events_trigger.c
index 4d2f3cc..6087052 100644
--- a/kernel/trace/trace_events_trigger.c
+++ b/kernel/trace/trace_events_trigger.c
@@ -28,8 +28,7 @@
static LIST_HEAD(trigger_commands);
static DEFINE_MUTEX(trigger_cmd_mutex);

-static void
-trigger_data_free(struct event_trigger_data *data)
+void trigger_data_free(struct event_trigger_data *data)
{
if (data->cmd_ops->set_filter)
data->cmd_ops->set_filter(NULL, data, NULL);
@@ -311,7 +310,7 @@ const struct file_operations event_trigger_fops = {
* Currently we only register event commands from __init, so mark this
* __init too.
*/
-static __init int register_event_command(struct event_command *cmd)
+__init int register_event_command(struct event_command *cmd)
{
struct event_command *p;
int ret = 0;
@@ -400,9 +399,8 @@ event_trigger_print(const char *name, struct seq_file *m,
*
* Return: 0 on success, errno otherwise
*/
-static int
-event_trigger_init(struct event_trigger_ops *ops,
- struct event_trigger_data *data)
+int event_trigger_init(struct event_trigger_ops *ops,
+ struct event_trigger_data *data)
{
data->ref++;
return 0;
@@ -430,8 +428,8 @@ event_trigger_free(struct event_trigger_ops *ops,
trigger_data_free(data);
}

-static int trace_event_trigger_enable_disable(struct trace_event_file *file,
- int trigger_enable)
+int trace_event_trigger_enable_disable(struct trace_event_file *file,
+ int trigger_enable)
{
int ret = 0;

@@ -488,7 +486,7 @@ clear_event_triggers(struct trace_array *tr)
* its TRIGGER_COND bit set, otherwise the TRIGGER_COND bit should be
* cleared.
*/
-static void update_cond_flag(struct trace_event_file *file)
+void update_cond_flag(struct trace_event_file *file)
{
struct event_trigger_data *data;
bool set_cond = false;
@@ -565,9 +563,9 @@ out:
* Usually used directly as the @unreg method in event command
* implementations.
*/
-static void unregister_trigger(char *glob, struct event_trigger_ops *ops,
- struct event_trigger_data *test,
- struct trace_event_file *file)
+void unregister_trigger(char *glob, struct event_trigger_ops *ops,
+ struct event_trigger_data *test,
+ struct trace_event_file *file)
{
struct event_trigger_data *data;
bool unregistered = false;
@@ -701,9 +699,9 @@ event_trigger_callback(struct event_command *cmd_ops,
*
* Return: 0 on success, errno otherwise
*/
-static int set_trigger_filter(char *filter_str,
- struct event_trigger_data *trigger_data,
- struct trace_event_file *file)
+int set_trigger_filter(char *filter_str,
+ struct event_trigger_data *trigger_data,
+ struct trace_event_file *file)
{
struct event_trigger_data *data = trigger_data;
struct event_filter *filter = NULL, *tmp;
--
1.9.3

2015-07-16 17:23:13

by Tom Zanussi

Subject: [PATCH v9 04/22] tracing: Add event record param to trigger_ops.func()

Some triggers may need access to the trace record, so pass it in. Also
fix up the existing trigger funcs and their callers.

Signed-off-by: Tom Zanussi <[email protected]>
---
include/linux/trace_events.h | 7 ++++---
kernel/trace/trace.h | 6 ++++--
kernel/trace/trace_events_trigger.c | 35 ++++++++++++++++++-----------------
3 files changed, 26 insertions(+), 22 deletions(-)

diff --git a/include/linux/trace_events.h b/include/linux/trace_events.h
index 1063c85..d9b0f89 100644
--- a/include/linux/trace_events.h
+++ b/include/linux/trace_events.h
@@ -423,7 +423,8 @@ extern int call_filter_check_discard(struct trace_event_call *call, void *rec,
extern enum event_trigger_type event_triggers_call(struct trace_event_file *file,
void *rec);
extern void event_triggers_post_call(struct trace_event_file *file,
- enum event_trigger_type tt);
+ enum event_trigger_type tt,
+ void *rec);

/**
* trace_trigger_soft_disabled - do triggers and test if soft disabled
@@ -506,7 +507,7 @@ event_trigger_unlock_commit(struct trace_event_file *file,
trace_buffer_unlock_commit(buffer, event, irq_flags, pc);

if (tt)
- event_triggers_post_call(file, tt);
+ event_triggers_post_call(file, tt, entry);
}

/**
@@ -539,7 +540,7 @@ event_trigger_unlock_commit_regs(struct trace_event_file *file,
irq_flags, pc, regs);

if (tt)
- event_triggers_post_call(file, tt);
+ event_triggers_post_call(file, tt, entry);
}

#ifdef CONFIG_BPF_SYSCALL
diff --git a/kernel/trace/trace.h b/kernel/trace/trace.h
index 4ff33b7..8799348 100644
--- a/kernel/trace/trace.h
+++ b/kernel/trace/trace.h
@@ -1139,7 +1139,8 @@ extern int register_event_command(struct event_command *cmd);
* @func: The trigger 'probe' function called when the triggering
* event occurs. The data passed into this callback is the data
* that was supplied to the event_command @reg() function that
- * registered the trigger (see struct event_command).
+ * registered the trigger (see struct event_command) along with
+ * the trace record, rec.
*
* @init: An optional initialization function called for the trigger
* when the trigger is registered (via the event_command reg()
@@ -1164,7 +1165,8 @@ extern int register_event_command(struct event_command *cmd);
* (see trace_event_triggers.c).
*/
struct event_trigger_ops {
- void (*func)(struct event_trigger_data *data);
+ void (*func)(struct event_trigger_data *data,
+ void *rec);
int (*init)(struct event_trigger_ops *ops,
struct event_trigger_data *data);
void (*free)(struct event_trigger_ops *ops,
diff --git a/kernel/trace/trace_events_trigger.c b/kernel/trace/trace_events_trigger.c
index 6087052..e30539c 100644
--- a/kernel/trace/trace_events_trigger.c
+++ b/kernel/trace/trace_events_trigger.c
@@ -73,7 +73,7 @@ event_triggers_call(struct trace_event_file *file, void *rec)

list_for_each_entry_rcu(data, &file->triggers, list) {
if (!rec) {
- data->ops->func(data);
+ data->ops->func(data, rec);
continue;
}
filter = rcu_dereference_sched(data->filter);
@@ -83,7 +83,7 @@ event_triggers_call(struct trace_event_file *file, void *rec)
tt |= data->cmd_ops->trigger_type;
continue;
}
- data->ops->func(data);
+ data->ops->func(data, rec);
}
return tt;
}
@@ -103,13 +103,14 @@ EXPORT_SYMBOL_GPL(event_triggers_call);
*/
void
event_triggers_post_call(struct trace_event_file *file,
- enum event_trigger_type tt)
+ enum event_trigger_type tt,
+ void *rec)
{
struct event_trigger_data *data;

list_for_each_entry_rcu(data, &file->triggers, list) {
if (data->cmd_ops->trigger_type & tt)
- data->ops->func(data);
+ data->ops->func(data, rec);
}
}
EXPORT_SYMBOL_GPL(event_triggers_post_call);
@@ -750,7 +751,7 @@ int set_trigger_filter(char *filter_str,
}

static void
-traceon_trigger(struct event_trigger_data *data)
+traceon_trigger(struct event_trigger_data *data, void *rec)
{
if (tracing_is_on())
return;
@@ -759,7 +760,7 @@ traceon_trigger(struct event_trigger_data *data)
}

static void
-traceon_count_trigger(struct event_trigger_data *data)
+traceon_count_trigger(struct event_trigger_data *data, void *rec)
{
if (tracing_is_on())
return;
@@ -774,7 +775,7 @@ traceon_count_trigger(struct event_trigger_data *data)
}

static void
-traceoff_trigger(struct event_trigger_data *data)
+traceoff_trigger(struct event_trigger_data *data, void *rec)
{
if (!tracing_is_on())
return;
@@ -783,7 +784,7 @@ traceoff_trigger(struct event_trigger_data *data)
}

static void
-traceoff_count_trigger(struct event_trigger_data *data)
+traceoff_count_trigger(struct event_trigger_data *data, void *rec)
{
if (!tracing_is_on())
return;
@@ -879,13 +880,13 @@ static struct event_command trigger_traceoff_cmd = {

#ifdef CONFIG_TRACER_SNAPSHOT
static void
-snapshot_trigger(struct event_trigger_data *data)
+snapshot_trigger(struct event_trigger_data *data, void *rec)
{
tracing_snapshot();
}

static void
-snapshot_count_trigger(struct event_trigger_data *data)
+snapshot_count_trigger(struct event_trigger_data *data, void *rec)
{
if (!data->count)
return;
@@ -893,7 +894,7 @@ snapshot_count_trigger(struct event_trigger_data *data)
if (data->count != -1)
(data->count)--;

- snapshot_trigger(data);
+ snapshot_trigger(data, rec);
}

static int
@@ -972,13 +973,13 @@ static __init int register_trigger_snapshot_cmd(void) { return 0; }
#define STACK_SKIP 3

static void
-stacktrace_trigger(struct event_trigger_data *data)
+stacktrace_trigger(struct event_trigger_data *data, void *rec)
{
trace_dump_stack(STACK_SKIP);
}

static void
-stacktrace_count_trigger(struct event_trigger_data *data)
+stacktrace_count_trigger(struct event_trigger_data *data, void *rec)
{
if (!data->count)
return;
@@ -986,7 +987,7 @@ stacktrace_count_trigger(struct event_trigger_data *data)
if (data->count != -1)
(data->count)--;

- stacktrace_trigger(data);
+ stacktrace_trigger(data, rec);
}

static int
@@ -1057,7 +1058,7 @@ struct enable_trigger_data {
};

static void
-event_enable_trigger(struct event_trigger_data *data)
+event_enable_trigger(struct event_trigger_data *data, void *rec)
{
struct enable_trigger_data *enable_data = data->private_data;

@@ -1068,7 +1069,7 @@ event_enable_trigger(struct event_trigger_data *data)
}

static void
-event_enable_count_trigger(struct event_trigger_data *data)
+event_enable_count_trigger(struct event_trigger_data *data, void *rec)
{
struct enable_trigger_data *enable_data = data->private_data;

@@ -1082,7 +1083,7 @@ event_enable_count_trigger(struct event_trigger_data *data)
if (data->count != -1)
(data->count)--;

- event_enable_trigger(data);
+ event_enable_trigger(data, rec);
}

static int
--
1.9.3

2015-07-16 17:23:27

by Tom Zanussi

Subject: [PATCH v9 05/22] tracing: Add get_syscall_name()

Add a utility function to grab the syscall name from the syscall
metadata, given a syscall id.

Signed-off-by: Tom Zanussi <[email protected]>
---
kernel/trace/trace.h | 5 +++++
kernel/trace/trace_syscalls.c | 11 +++++++++++
2 files changed, 16 insertions(+)

diff --git a/kernel/trace/trace.h b/kernel/trace/trace.h
index 8799348..6fe5b66 100644
--- a/kernel/trace/trace.h
+++ b/kernel/trace/trace.h
@@ -1331,8 +1331,13 @@ int perf_ftrace_event_register(struct trace_event_call *call,

#ifdef CONFIG_FTRACE_SYSCALLS
void init_ftrace_syscalls(void);
+const char *get_syscall_name(int syscall);
#else
static inline void init_ftrace_syscalls(void) { }
+static inline const char *get_syscall_name(int syscall)
+{
+ return NULL;
+}
#endif

#ifdef CONFIG_EVENT_TRACING
diff --git a/kernel/trace/trace_syscalls.c b/kernel/trace/trace_syscalls.c
index 7d567a4..004c111 100644
--- a/kernel/trace/trace_syscalls.c
+++ b/kernel/trace/trace_syscalls.c
@@ -106,6 +106,17 @@ static struct syscall_metadata *syscall_nr_to_meta(int nr)
return syscalls_metadata[nr];
}

+const char *get_syscall_name(int syscall)
+{
+ struct syscall_metadata *entry;
+
+ entry = syscall_nr_to_meta(syscall);
+ if (!entry)
+ return NULL;
+
+ return entry->name;
+}
+
static enum print_line_t
print_syscall_enter(struct trace_iterator *iter, int flags,
struct trace_event *event)
--
1.9.3

2015-07-16 17:23:25

by Tom Zanussi

Subject: [PATCH v9 06/22] tracing: Add a per-event-trigger 'paused' field

Add a simple per-trigger 'paused' flag, allowing individual triggers
to pause. We could leave it to individual triggers that need this
functionality to do it themselves, but we also want to allow other
events to control pausing, so add it to the trigger data.

Signed-off-by: Tom Zanussi <[email protected]>
---
kernel/trace/trace.h | 1 +
kernel/trace/trace_events_trigger.c | 4 ++++
2 files changed, 5 insertions(+)

diff --git a/kernel/trace/trace.h b/kernel/trace/trace.h
index 6fe5b66..5e675b2 100644
--- a/kernel/trace/trace.h
+++ b/kernel/trace/trace.h
@@ -1110,6 +1110,7 @@ struct event_trigger_data {
struct event_filter __rcu *filter;
char *filter_str;
void *private_data;
+ bool paused;
struct list_head list;
};

diff --git a/kernel/trace/trace_events_trigger.c b/kernel/trace/trace_events_trigger.c
index e30539c..5f632ff 100644
--- a/kernel/trace/trace_events_trigger.c
+++ b/kernel/trace/trace_events_trigger.c
@@ -72,6 +72,8 @@ event_triggers_call(struct trace_event_file *file, void *rec)
return tt;

list_for_each_entry_rcu(data, &file->triggers, list) {
+ if (data->paused)
+ continue;
if (!rec) {
data->ops->func(data, rec);
continue;
@@ -109,6 +111,8 @@ event_triggers_post_call(struct trace_event_file *file,
struct event_trigger_data *data;

list_for_each_entry_rcu(data, &file->triggers, list) {
+ if (data->paused)
+ continue;
if (data->cmd_ops->trigger_type & tt)
data->ops->func(data, rec);
}
--
1.9.3

2015-07-16 17:28:02

by Tom Zanussi

Subject: [PATCH v9 07/22] tracing: Add lock-free tracing_map

Add tracing_map, a special-purpose lock-free map for tracing.

tracing_map is designed to aggregate or 'sum' one or more values
associated with a specific object of type tracing_map_elt, which
is associated by the map to a given key.

It provides various hooks allowing per-tracer customization and is
separated out into a separate file in order to allow it to be shared
between multiple tracers, but isn't meant to be generally used outside
of that context.

The tracing_map implementation was inspired by lock-free map
algorithms originated by Dr. Cliff Click:

http://www.azulsystems.com/blog/cliff/2007-03-26-non-blocking-hashtable
http://www.azulsystems.com/events/javaone_2007/2007_LockFreeHash.pdf

Signed-off-by: Tom Zanussi <[email protected]>
---
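For a sense of the API described above, here's a minimal usage sketch
pieced together from the kernel-doc in this patch (the key layout is
hypothetical, and error handling is elided):

	struct tracing_map *map;
	struct tracing_map_elt *elt;
	int hitcount_idx, key_idx;
	u64 key = 42;

	/* create the map, describe its fields, then allocate elements */
	map = tracing_map_create(TRACING_MAP_BITS_DEFAULT, sizeof(key),
				 NULL, NULL);
	hitcount_idx = tracing_map_add_sum_field(map);
	key_idx = tracing_map_add_key_field(map, 0,
					    tracing_map_cmp_num(sizeof(key), 0));
	tracing_map_init(map);

	/* from the event path: insert the key and bump its sum */
	elt = tracing_map_insert(map, &key);
	if (elt)
		tracing_map_update_sum(elt, hitcount_idx, 1);
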
kernel/trace/Makefile | 1 +
kernel/trace/tracing_map.c | 935 +++++++++++++++++++++++++++++++++++++++++++++
kernel/trace/tracing_map.h | 258 +++++++++++++
3 files changed, 1194 insertions(+)
create mode 100644 kernel/trace/tracing_map.c
create mode 100644 kernel/trace/tracing_map.h

diff --git a/kernel/trace/Makefile b/kernel/trace/Makefile
index 9b1044e..3b26cfb 100644
--- a/kernel/trace/Makefile
+++ b/kernel/trace/Makefile
@@ -31,6 +31,7 @@ obj-$(CONFIG_TRACING) += trace_output.o
obj-$(CONFIG_TRACING) += trace_seq.o
obj-$(CONFIG_TRACING) += trace_stat.o
obj-$(CONFIG_TRACING) += trace_printk.o
+obj-$(CONFIG_TRACING) += tracing_map.o
obj-$(CONFIG_CONTEXT_SWITCH_TRACER) += trace_sched_switch.o
obj-$(CONFIG_FUNCTION_TRACER) += trace_functions.o
obj-$(CONFIG_IRQSOFF_TRACER) += trace_irqsoff.o
diff --git a/kernel/trace/tracing_map.c b/kernel/trace/tracing_map.c
new file mode 100644
index 0000000..a505025
--- /dev/null
+++ b/kernel/trace/tracing_map.c
@@ -0,0 +1,935 @@
+/*
+ * tracing_map - lock-free map for tracing
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * Copyright (C) 2015 Tom Zanussi <[email protected]>
+ *
+ * tracing_map implementation inspired by lock-free map algorithms
+ * originated by Dr. Cliff Click:
+ *
+ * http://www.azulsystems.com/blog/cliff/2007-03-26-non-blocking-hashtable
+ * http://www.azulsystems.com/events/javaone_2007/2007_LockFreeHash.pdf
+ */
+
+#include <linux/slab.h>
+#include <linux/jhash.h>
+#include <linux/sort.h>
+
+#include "tracing_map.h"
+#include "trace.h"
+
+/*
+ * NOTE: For a detailed description of the data structures used by
+ * these functions (such as tracing_map_elt) please see the overview
+ * of tracing_map data structures at the beginning of tracing_map.h.
+ */
+
+/**
+ * tracing_map_update_sum - Add a value to a tracing_map_elt's sum field
+ * @elt: The tracing_map_elt
+ * @i: The index of the given sum associated with the tracing_map_elt
+ * @n: The value to add to the sum
+ *
+ * Add n to sum i associated with the specified tracing_map_elt
+ * instance. The index i is the index returned by the call to
+ * tracing_map_add_sum_field() when the tracing map was set up.
+ */
+void tracing_map_update_sum(struct tracing_map_elt *elt, unsigned int i, u64 n)
+{
+ atomic64_add(n, &elt->fields[i].sum);
+}
+
+/**
+ * tracing_map_read_sum - Return the value of a tracing_map_elt's sum field
+ * @elt: The tracing_map_elt
+ * @i: The index of the given sum associated with the tracing_map_elt
+ *
+ * Retrieve the value of the sum i associated with the specified
+ * tracing_map_elt instance. The index i is the index returned by the
+ * call to tracing_map_add_sum_field() when the tracing map was set
+ * up.
+ *
+ * Return: The sum associated with field i for elt.
+ */
+u64 tracing_map_read_sum(struct tracing_map_elt *elt, unsigned int i)
+{
+ return (u64)atomic64_read(&elt->fields[i].sum);
+}
+
+int tracing_map_cmp_string(void *val_a, void *val_b)
+{
+ char *a = val_a;
+ char *b = val_b;
+
+ return strcmp(a, b);
+}
+
+int tracing_map_cmp_none(void *val_a, void *val_b)
+{
+ return 0;
+}
+
+static int tracing_map_cmp_atomic64(void *val_a, void *val_b)
+{
+ u64 a = atomic64_read((atomic64_t *)val_a);
+ u64 b = atomic64_read((atomic64_t *)val_b);
+
+ return (a > b) ? 1 : ((a < b) ? -1 : 0);
+}
+
+#define DEFINE_TRACING_MAP_CMP_FN(type) \
+static int tracing_map_cmp_##type(void *val_a, void *val_b) \
+{ \
+ type a = *(type *)val_a; \
+ type b = *(type *)val_b; \
+ \
+ return (a > b) ? 1 : ((a < b) ? -1 : 0); \
+}
+
+DEFINE_TRACING_MAP_CMP_FN(s64);
+DEFINE_TRACING_MAP_CMP_FN(u64);
+DEFINE_TRACING_MAP_CMP_FN(s32);
+DEFINE_TRACING_MAP_CMP_FN(u32);
+DEFINE_TRACING_MAP_CMP_FN(s16);
+DEFINE_TRACING_MAP_CMP_FN(u16);
+DEFINE_TRACING_MAP_CMP_FN(s8);
+DEFINE_TRACING_MAP_CMP_FN(u8);
+
+tracing_map_cmp_fn_t tracing_map_cmp_num(int field_size,
+ int field_is_signed)
+{
+ tracing_map_cmp_fn_t fn = tracing_map_cmp_none;
+
+ switch (field_size) {
+ case 8:
+ if (field_is_signed)
+ fn = tracing_map_cmp_s64;
+ else
+ fn = tracing_map_cmp_u64;
+ break;
+ case 4:
+ if (field_is_signed)
+ fn = tracing_map_cmp_s32;
+ else
+ fn = tracing_map_cmp_u32;
+ break;
+ case 2:
+ if (field_is_signed)
+ fn = tracing_map_cmp_s16;
+ else
+ fn = tracing_map_cmp_u16;
+ break;
+ case 1:
+ if (field_is_signed)
+ fn = tracing_map_cmp_s8;
+ else
+ fn = tracing_map_cmp_u8;
+ break;
+ }
+
+ return fn;
+}
+
+static int tracing_map_add_field(struct tracing_map *map,
+ tracing_map_cmp_fn_t cmp_fn)
+{
+ int ret = -EINVAL;
+
+ if (map->n_fields < TRACING_MAP_FIELDS_MAX) {
+ ret = map->n_fields;
+ map->fields[map->n_fields++].cmp_fn = cmp_fn;
+ }
+
+ return ret;
+}
+
+/**
+ * tracing_map_add_sum_field - Add a field describing a tracing_map sum
+ * @map: The tracing_map
+ *
+ * Add a sum field to the map and return the index identifying it in
+ * the map and associated tracing_map_elts. This is the index used
+ * for instance to update a sum for a particular tracing_map_elt using
+ * tracing_map_update_sum() or reading it via tracing_map_read_sum().
+ *
+ * Return: The index identifying the field in the map and associated
+ * tracing_map_elts.
+ */
+int tracing_map_add_sum_field(struct tracing_map *map)
+{
+ return tracing_map_add_field(map, tracing_map_cmp_atomic64);
+}
+
+/**
+ * tracing_map_add_key_field - Add a field describing a tracing_map key
+ * @map: The tracing_map
+ * @offset: The offset within the key
+ * @cmp_fn: The comparison function that will be used to sort on the key
+ *
+ * Let the map know there is a key and that if it's used as a sort key
+ * to use cmp_fn.
+ *
+ * A key can be a subset of a compound key; for that purpose, the
+ * offset param is used to describe where within the compound key
+ * the key referenced by this key field resides.
+ *
+ * Return: The index identifying the field in the map and associated
+ * tracing_map_elts.
+ */
+int tracing_map_add_key_field(struct tracing_map *map,
+ unsigned int offset,
+ tracing_map_cmp_fn_t cmp_fn)
+
+{
+ int idx = tracing_map_add_field(map, cmp_fn);
+
+ if (idx < 0)
+ return idx;
+
+ map->fields[idx].offset = offset;
+
+ map->key_idx[map->n_keys++] = idx;
+
+ return idx;
+}
+
+static void tracing_map_elt_clear(struct tracing_map_elt *elt)
+{
+ unsigned i;
+
+ for (i = 0; i < elt->map->n_fields; i++)
+ if (elt->fields[i].cmp_fn == tracing_map_cmp_atomic64)
+ atomic64_set(&elt->fields[i].sum, 0);
+
+ if (elt->map->ops && elt->map->ops->elt_clear)
+ elt->map->ops->elt_clear(elt);
+}
+
+static void tracing_map_elt_init_fields(struct tracing_map_elt *elt)
+{
+ unsigned int i;
+
+ tracing_map_elt_clear(elt);
+
+ for (i = 0; i < elt->map->n_fields; i++) {
+ elt->fields[i].cmp_fn = elt->map->fields[i].cmp_fn;
+
+ if (elt->fields[i].cmp_fn != tracing_map_cmp_atomic64)
+ elt->fields[i].offset = elt->map->fields[i].offset;
+ }
+}
+
+static void tracing_map_elt_free(struct tracing_map_elt *elt)
+{
+ if (!elt)
+ return;
+
+ if (elt->map->ops && elt->map->ops->elt_free)
+ elt->map->ops->elt_free(elt);
+ kfree(elt->fields);
+ kfree(elt->key);
+ kfree(elt);
+}
+
+static struct tracing_map_elt *tracing_map_elt_alloc(struct tracing_map *map)
+{
+ struct tracing_map_elt *elt;
+ int err = 0;
+
+ elt = kzalloc(sizeof(*elt), GFP_KERNEL);
+ if (!elt)
+ return ERR_PTR(-ENOMEM);
+
+ elt->map = map;
+
+ elt->key = kzalloc(map->key_size, GFP_KERNEL);
+ if (!elt->key) {
+ err = -ENOMEM;
+ goto free;
+ }
+
+ elt->fields = kcalloc(map->n_fields, sizeof(*elt->fields), GFP_KERNEL);
+ if (!elt->fields) {
+ err = -ENOMEM;
+ goto free;
+ }
+
+ tracing_map_elt_init_fields(elt);
+
+ if (map->ops && map->ops->elt_alloc) {
+ err = map->ops->elt_alloc(elt);
+ if (err)
+ goto free;
+ }
+ return elt;
+ free:
+ tracing_map_elt_free(elt);
+
+ return ERR_PTR(err);
+}
+
+static struct tracing_map_elt *get_free_elt(struct tracing_map *map)
+{
+ struct tracing_map_elt *elt = NULL;
+ int idx;
+
+ idx = atomic_inc_return(&map->next_elt);
+ if (idx < map->max_elts) {
+ elt = map->elts[idx];
+ if (map->ops && map->ops->elt_init)
+ map->ops->elt_init(elt);
+ }
+
+ return elt;
+}
+
+static void tracing_map_free_elts(struct tracing_map *map)
+{
+ unsigned int i;
+
+ if (!map->elts)
+ return;
+
+ for (i = 0; i < map->max_elts; i++)
+ tracing_map_elt_free(map->elts[i]);
+
+ kfree(map->elts);
+}
+
+static int tracing_map_alloc_elts(struct tracing_map *map)
+{
+ unsigned int i;
+
+ map->elts = kcalloc(map->max_elts, sizeof(struct tracing_map_elt *),
+ GFP_KERNEL);
+ if (!map->elts)
+ return -ENOMEM;
+
+ for (i = 0; i < map->max_elts; i++) {
+ map->elts[i] = tracing_map_elt_alloc(map);
+ if (!map->elts[i]) {
+ tracing_map_free_elts(map);
+
+ return -ENOMEM;
+ }
+ }
+
+ return 0;
+}
+
+static inline bool keys_match(void *key, void *test_key, unsigned key_size)
+{
+ bool match = true;
+
+ if (memcmp(key, test_key, key_size))
+ match = false;
+
+ return match;
+}
+
+/**
+ * tracing_map_insert - Insert key and/or retrieve val from a tracing_map
+ * @map: The tracing_map to insert into
+ * @key: The key to insert
+ *
+ * Inserts a key into a tracing_map and creates and returns a new
+ * tracing_map_elt for it, or if the key has already been inserted by
+ * a previous call, returns the tracing_map_elt already associated
+ * with it. When the map was created, the number of elements to be
+ * allocated for the map was specified (internally maintained as
+ * 'max_elts' in struct tracing_map), and that number of
+ * tracing_map_elts was created by tracing_map_init(). This is the
+ * pre-allocated pool of tracing_map_elts that tracing_map_insert()
+ * will allocate from when adding new keys. Once that pool is
+ * exhausted, tracing_map_insert() is useless and will return NULL to
+ * signal that state.
+ *
+ * This is a lock-free tracing map insertion function implementing a
+ * modified form of Cliff Click's basic insertion algorithm. It
+ * requires the table size be a power of two. To prevent any
+ * possibility of an infinite loop we always make the internal table
+ * size double the size of the requested table size (max_elts * 2).
+ * Likewise, we never reuse a slot or resize or delete elements - when
+ * we've reached max_elts entries, we simply return NULL once we've
+ * run out of entries. Readers can at any point in time traverse the
+ * tracing map and safely access the key/val pairs.
+ *
+ * Return: the tracing_map_elt pointer val associated with the key.
+ * If this was a newly inserted key, the val will be a newly allocated
+ * and associated tracing_map_elt pointer val. If the key wasn't
+ * found and the pool of tracing_map_elts has been exhausted, NULL is
+ * returned and no further insertions will succeed.
+ */
+struct tracing_map_elt *tracing_map_insert(struct tracing_map *map, void *key)
+{
+ u32 idx, key_hash, test_key;
+
+ key_hash = jhash(key, map->key_size, 0);
+ idx = key_hash >> (32 - (map->map_bits + 1));
+
+ while (1) {
+ idx &= (map->map_size - 1);
+ test_key = map->map[idx].key;
+
+ if (test_key && test_key == key_hash && map->map[idx].val &&
+ keys_match(key, map->map[idx].val->key, map->key_size))
+ return map->map[idx].val;
+
+ if (!test_key && !cmpxchg(&map->map[idx].key, 0, key_hash)) {
+ struct tracing_map_elt *elt;
+
+ elt = get_free_elt(map);
+ if (!elt)
+ break;
+ memcpy(elt->key, key, map->key_size);
+ map->map[idx].val = elt;
+
+ return map->map[idx].val;
+ }
+ idx++;
+ }
+
+ return NULL;
+}
+
+/**
+ * tracing_map_destroy - Destroy a tracing_map
+ * @map: The tracing_map to destroy
+ *
+ * Frees a tracing_map along with its associated array of
+ * tracing_map_elts.
+ *
+ * Callers should make sure there are no readers or writers actively
+ * reading or inserting into the map before calling this.
+ */
+void tracing_map_destroy(struct tracing_map *map)
+{
+ if (!map)
+ return;
+
+ tracing_map_free_elts(map);
+
+ kfree(map->map);
+ kfree(map);
+}
+
+/**
+ * tracing_map_clear - Clear a tracing_map
+ * @map: The tracing_map to clear
+ *
+ * Resets the tracing map to a cleared or initial state. The
+ * tracing_map_elts are all cleared, and the array of struct
+ * tracing_map_entry is reset to an initialized state.
+ *
+ * Callers should make sure there are no writers actively inserting
+ * into the map before calling this.
+ */
+void tracing_map_clear(struct tracing_map *map)
+{
+ unsigned int i, size;
+
+ atomic_set(&map->next_elt, -1);
+
+ size = map->map_size * sizeof(struct tracing_map_entry);
+ memset(map->map, 0, size);
+
+ for (i = 0; i < map->max_elts; i++)
+ tracing_map_elt_clear(map->elts[i]);
+}
+
+static void set_sort_key(struct tracing_map *map,
+ struct tracing_map_sort_key *sort_key)
+{
+ map->sort_key = *sort_key;
+}
+
+/**
+ * tracing_map_create - Create a lock-free map and element pool
+ * @map_bits: The size of the map (2 ** map_bits)
+ * @key_size: The size of the key for the map in bytes
+ * @ops: Optional client-defined tracing_map_ops instance
+ * @private_data: Client data associated with the map
+ *
+ * Creates and sets up a map to contain 2 ** map_bits number of
+ * elements (internally maintained as 'max_elts' in struct
+ * tracing_map). Before using, map fields should be added to the map
+ * with tracing_map_add_sum_field() and tracing_map_add_key_field().
+ * tracing_map_init() should then be called to allocate the array of
+ * tracing_map_elts, in order to avoid allocating anything in the map
+ * insertion path. The user-specified map size reflects the maximum
+ * number of elements that can be contained in the table requested by
+ * the user - internally we double that in order to keep the table
+ * sparse and keep collisions manageable.
+ *
+ * A tracing_map is a special-purpose map designed to aggregate or
+ * 'sum' one or more values associated with a specific object of type
+ * tracing_map_elt, which is attached by the map to a given key.
+ *
+ * tracing_map_create() sets up the map itself, and provides
+ * operations for inserting tracing_map_elts, but doesn't allocate the
+ * tracing_map_elts themselves, or provide a means for describing the
+ * keys or sums associated with the tracing_map_elts. All
+ * tracing_map_elts for a given map have the same set of sums and
+ * keys, which are defined by the client using the functions
+ * tracing_map_add_key_field() and tracing_map_add_sum_field(). Once
+ * the fields are defined, the pool of elements allocated for the map
+ * can be created, which occurs when the client code calls
+ * tracing_map_init().
+ *
+ * When tracing_map_init() returns, tracing_map_elt elements can be
+ * inserted into the map using tracing_map_insert(). When called,
+ * tracing_map_insert() grabs a free tracing_map_elt from the pool, or
+ * finds an existing match in the map and in either case returns it.
+ * The client can then use tracing_map_update_sum() and
+ * tracing_map_read_sum() to update or read a given sum field for the
+ * tracing_map_elt.
+ *
+ * The client can at any point retrieve and traverse the current set
+ * of inserted tracing_map_elts in a tracing_map, via
+ * tracing_map_sort_entries(). Sorting can be done on any field,
+ * including keys.
+ *
+ * See tracing_map.h for a description of tracing_map_ops.
+ *
+ * Return: the tracing_map pointer if successful, ERR_PTR if not.
+ */
+struct tracing_map *tracing_map_create(unsigned int map_bits,
+ unsigned int key_size,
+ struct tracing_map_ops *ops,
+ void *private_data)
+{
+ struct tracing_map *map;
+ unsigned int i;
+
+ if (map_bits < TRACING_MAP_BITS_MIN ||
+ map_bits > TRACING_MAP_BITS_MAX)
+ return ERR_PTR(-EINVAL);
+
+ map = kzalloc(sizeof(*map), GFP_KERNEL);
+ if (!map)
+ return ERR_PTR(-ENOMEM);
+
+ map->map_bits = map_bits;
+ map->max_elts = (1 << map_bits);
+ atomic_set(&map->next_elt, -1);
+
+ map->map_size = (1 << (map_bits + 1));
+ map->ops = ops;
+
+ map->private_data = private_data;
+
+ map->map = kcalloc(map->map_size, sizeof(struct tracing_map_entry),
+ GFP_KERNEL);
+ if (!map->map)
+ goto free;
+
+ map->key_size = key_size;
+ for (i = 0; i < TRACING_MAP_KEYS_MAX; i++)
+ map->key_idx[i] = -1;
+ out:
+ return map;
+ free:
+ tracing_map_destroy(map);
+ map = ERR_PTR(-ENOMEM);
+
+ goto out;
+}
+
+/**
+ * tracing_map_init - Allocate and clear a map's tracing_map_elts
+ * @map: The tracing_map to initialize
+ *
+ * Allocates and clears a pool of tracing_map_elts equal to the
+ * user-specified size of 2 ** map_bits (internally maintained as
+ * 'max_elts' in struct tracing_map). Before using, the map fields
+ * should be added to the map with tracing_map_add_sum_field() and
+ * tracing_map_add_key_field(). tracing_map_init() should then be
+ * called to allocate the array of tracing_map_elts, in order to avoid
+ * allocating anything in the map insertion path. The user-specified
+ * map size reflects the max number of elements requested by the user
+ * - internally we double that in order to keep the table sparse and
+ * keep collisions manageable.
+ *
+ * See tracing_map.h for a description of tracing_map_ops.
+ *
+ * Return: 0 if successful, a negative error code if not.
+ */
+int tracing_map_init(struct tracing_map *map)
+{
+ int err;
+
+ if (map->n_fields < 2)
+ return -EINVAL; /* need at least 1 key and 1 val */
+
+ err = tracing_map_alloc_elts(map);
+ if (err)
+ return err;
+
+ tracing_map_clear(map);
+
+ return err;
+}
+
+static int cmp_entries_dup(const struct tracing_map_sort_entry **a,
+ const struct tracing_map_sort_entry **b)
+{
+ int ret = 0;
+
+ if (memcmp((*a)->key, (*b)->key, (*a)->elt->map->key_size))
+ ret = 1;
+
+ return ret;
+}
+
+static int cmp_entries_sum(const struct tracing_map_sort_entry **a,
+ const struct tracing_map_sort_entry **b)
+{
+ const struct tracing_map_elt *elt_a, *elt_b;
+ struct tracing_map_sort_key *sort_key;
+ struct tracing_map_field *field;
+ tracing_map_cmp_fn_t cmp_fn;
+ void *val_a, *val_b;
+ int ret = 0;
+
+ elt_a = (*a)->elt;
+ elt_b = (*b)->elt;
+
+ sort_key = &elt_a->map->sort_key;
+
+ field = &elt_a->fields[sort_key->field_idx];
+ cmp_fn = field->cmp_fn;
+
+ val_a = &elt_a->fields[sort_key->field_idx].sum;
+ val_b = &elt_b->fields[sort_key->field_idx].sum;
+
+ ret = cmp_fn(val_a, val_b);
+ if (sort_key->descending)
+ ret = -ret;
+
+ return ret;
+}
+
+static int cmp_entries_key(const struct tracing_map_sort_entry **a,
+ const struct tracing_map_sort_entry **b)
+{
+ const struct tracing_map_elt *elt_a, *elt_b;
+ struct tracing_map_sort_key *sort_key;
+ struct tracing_map_field *field;
+ tracing_map_cmp_fn_t cmp_fn;
+ void *val_a, *val_b;
+ int ret = 0;
+
+ elt_a = (*a)->elt;
+ elt_b = (*b)->elt;
+
+ sort_key = &elt_a->map->sort_key;
+
+ field = &elt_a->fields[sort_key->field_idx];
+
+ cmp_fn = field->cmp_fn;
+
+ val_a = elt_a->key + field->offset;
+ val_b = elt_b->key + field->offset;
+
+ ret = cmp_fn(val_a, val_b);
+ if (sort_key->descending)
+ ret = -ret;
+
+ return ret;
+}
+
+static void destroy_sort_entry(struct tracing_map_sort_entry *entry)
+{
+ if (!entry)
+ return;
+
+ if (entry->elt_copied)
+ tracing_map_elt_free(entry->elt);
+
+ kfree(entry);
+}
+
+/**
+ * tracing_map_destroy_sort_entries - Destroy a tracing_map_sort_entries() array
+ * @entries: The entries to destroy
+ * @n_entries: The number of entries in the array
+ *
+ * Destroy the elements returned by a tracing_map_sort_entries() call.
+ */
+void tracing_map_destroy_sort_entries(struct tracing_map_sort_entry **entries,
+ unsigned int n_entries)
+{
+ unsigned int i;
+
+ for (i = 0; i < n_entries; i++)
+ destroy_sort_entry(entries[i]);
+}
+
+static struct tracing_map_sort_entry *
+create_sort_entry(void *key, struct tracing_map_elt *elt)
+{
+ struct tracing_map_sort_entry *sort_entry;
+
+ sort_entry = kzalloc(sizeof(*sort_entry), GFP_KERNEL);
+ if (!sort_entry)
+ return NULL;
+
+ sort_entry->key = key;
+ sort_entry->elt = elt;
+
+ return sort_entry;
+}
+
+static struct tracing_map_elt *copy_elt(struct tracing_map_elt *elt)
+{
+ struct tracing_map_elt *dup_elt;
+ unsigned int i;
+
+ dup_elt = tracing_map_elt_alloc(elt->map);
+ if (!dup_elt)
+ return NULL;
+
+ if (elt->map->ops && elt->map->ops->elt_copy)
+ elt->map->ops->elt_copy(dup_elt, elt);
+
+ dup_elt->private_data = elt->private_data;
+ memcpy(dup_elt->key, elt->key, elt->map->key_size);
+
+ for (i = 0; i < elt->map->n_fields; i++) {
+ atomic64_set(&dup_elt->fields[i].sum,
+ atomic64_read(&elt->fields[i].sum));
+ dup_elt->fields[i].cmp_fn = elt->fields[i].cmp_fn;
+ }
+
+ return dup_elt;
+}
+
+static int merge_dup(struct tracing_map_sort_entry **sort_entries,
+ unsigned int target, unsigned int dup)
+{
+ struct tracing_map_elt *target_elt, *elt;
+ bool first_dup = (target - dup) == 1;
+ int i;
+
+ if (first_dup) {
+ elt = sort_entries[target]->elt;
+ target_elt = copy_elt(elt);
+ if (!target_elt)
+ return -ENOMEM;
+ sort_entries[target]->elt = target_elt;
+ sort_entries[target]->elt_copied = true;
+ } else
+ target_elt = sort_entries[target]->elt;
+
+ elt = sort_entries[dup]->elt;
+
+ for (i = 0; i < elt->map->n_fields; i++)
+ atomic64_add(atomic64_read(&elt->fields[i].sum),
+ &target_elt->fields[i].sum);
+
+ sort_entries[dup]->dup = true;
+
+ return 0;
+}
+
+static int merge_dups(struct tracing_map_sort_entry **sort_entries,
+ int n_entries, unsigned int key_size)
+{
+ unsigned int dups = 0, total_dups = 0;
+ int err, i, j;
+ void *key;
+
+ if (n_entries < 2)
+ return total_dups;
+
+ sort(sort_entries, n_entries, sizeof(struct tracing_map_sort_entry *),
+ (int (*)(const void *, const void *))cmp_entries_dup, NULL);
+
+ key = sort_entries[0]->key;
+ for (i = 1; i < n_entries; i++) {
+ if (!memcmp(sort_entries[i]->key, key, key_size)) {
+ dups++; total_dups++;
+ err = merge_dup(sort_entries, i - dups, i);
+ if (err)
+ return err;
+ continue;
+ }
+ key = sort_entries[i]->key;
+ dups = 0;
+ }
+
+ if (!total_dups)
+ return total_dups;
+
+ for (i = 0, j = 0; i < n_entries; i++) {
+ if (!sort_entries[i]->dup) {
+ sort_entries[j] = sort_entries[i];
+ if (j++ != i)
+ sort_entries[i] = NULL;
+ } else {
+ destroy_sort_entry(sort_entries[i]);
+ sort_entries[i] = NULL;
+ }
+ }
+
+ return total_dups;
+}
+
+static bool is_key(struct tracing_map *map, unsigned int field_idx)
+{
+ unsigned int i;
+
+ for (i = 0; i < map->n_keys; i++)
+ if (map->key_idx[i] == field_idx)
+ return true;
+ return false;
+}
+
+static void sort_secondary(struct tracing_map *map,
+ const struct tracing_map_sort_entry **entries,
+ unsigned int n_entries,
+ struct tracing_map_sort_key *primary_key,
+ struct tracing_map_sort_key *secondary_key)
+{
+ int (*primary_fn)(const struct tracing_map_sort_entry **,
+ const struct tracing_map_sort_entry **);
+ int (*secondary_fn)(const struct tracing_map_sort_entry **,
+ const struct tracing_map_sort_entry **);
+ unsigned i, start = 0, n_sub = 1;
+
+ if (is_key(map, primary_key->field_idx))
+ primary_fn = cmp_entries_key;
+ else
+ primary_fn = cmp_entries_sum;
+
+ if (is_key(map, secondary_key->field_idx))
+ secondary_fn = cmp_entries_key;
+ else
+ secondary_fn = cmp_entries_sum;
+
+ for (i = 0; i < n_entries - 1; i++) {
+ const struct tracing_map_sort_entry **a = &entries[i];
+ const struct tracing_map_sort_entry **b = &entries[i + 1];
+
+ if (primary_fn(a, b) == 0) {
+ n_sub++;
+ if (i < n_entries - 2)
+ continue;
+ }
+
+ if (n_sub < 2) {
+ start = i + 1;
+ n_sub = 1;
+ continue;
+ }
+
+ set_sort_key(map, secondary_key);
+ sort(&entries[start], n_sub,
+ sizeof(struct tracing_map_sort_entry *),
+ (int (*)(const void *, const void *))secondary_fn, NULL);
+ set_sort_key(map, primary_key);
+
+ start = i + 1;
+ n_sub = 1;
+ }
+}
+
+/**
+ * tracing_map_sort_entries - Sort the current set of tracing_map_elts in a map
+ * @map: The tracing_map
+ * @sort_keys: The sort keys to use for sorting (primary key first)
+ * @n_sort_keys: The number of sort keys in the sort_keys array
+ * @sort_entries: outval: pointer to allocated and sorted array of entries
+ *
+ * tracing_map_sort_entries() sorts the current set of entries in the
+ * map and returns the list of tracing_map_sort_entries containing
+ * them to the client in the sort_entries param. The client can
+ * access the struct tracing_map_elt element of interest directly as
+ * the 'elt' field of a returned struct tracing_map_sort_entry object.
+ *
+ * A sort_key has only two fields: field_idx and descending. 'field_idx' refers
+ * to the index of the field added via tracing_map_add_sum_field() or
+ * tracing_map_add_key_field() when the tracing_map was initialized.
+ * 'descending' is a flag that if set reverses the sort order, which
+ * by default is ascending.
+ *
+ * The client should not hold on to the returned array but should use
+ * it and call tracing_map_destroy_sort_entries() when done.
+ *
+ * Return: the number of sort_entries in the struct tracing_map_sort_entry
+ * array, negative on error
+ */
+int tracing_map_sort_entries(struct tracing_map *map,
+ struct tracing_map_sort_key *sort_keys,
+ unsigned int n_sort_keys,
+ struct tracing_map_sort_entry ***sort_entries)
+{
+ int (*cmp_entries_fn)(const struct tracing_map_sort_entry **,
+ const struct tracing_map_sort_entry **);
+ struct tracing_map_sort_entry *sort_entry, **entries;
+ int i, n_entries, ret;
+
+ entries = kcalloc(map->max_elts, sizeof(sort_entry), GFP_KERNEL);
+ if (!entries)
+ return -ENOMEM;
+
+ for (i = 0, n_entries = 0; i < map->map_size; i++) {
+ if (!map->map[i].key || !map->map[i].val)
+ continue;
+
+ entries[n_entries] = create_sort_entry(map->map[i].val->key,
+ map->map[i].val);
+ if (!entries[n_entries++]) {
+ ret = -ENOMEM;
+ goto free;
+ }
+ }
+
+ if (n_entries == 0) {
+ ret = 0;
+ goto free;
+ }
+
+ if (n_entries == 1) {
+ *sort_entries = entries;
+ return 1;
+ }
+
+ ret = merge_dups(entries, n_entries, map->key_size);
+ if (ret < 0)
+ goto free;
+ n_entries -= ret;
+
+ if (is_key(map, sort_keys[0].field_idx))
+ cmp_entries_fn = cmp_entries_key;
+ else
+ cmp_entries_fn = cmp_entries_sum;
+
+ set_sort_key(map, &sort_keys[0]);
+
+ sort(entries, n_entries, sizeof(struct tracing_map_sort_entry *),
+ (int (*)(const void *, const void *))cmp_entries_fn, NULL);
+
+ if (n_sort_keys > 1)
+ sort_secondary(map,
+ (const struct tracing_map_sort_entry **)entries,
+ n_entries,
+ &sort_keys[0],
+ &sort_keys[1]);
+
+ *sort_entries = entries;
+
+ return n_entries;
+ free:
+ tracing_map_destroy_sort_entries(entries, n_entries);
+
+ return ret;
+}
diff --git a/kernel/trace/tracing_map.h b/kernel/trace/tracing_map.h
new file mode 100644
index 0000000..2e63c5c
--- /dev/null
+++ b/kernel/trace/tracing_map.h
@@ -0,0 +1,258 @@
+#ifndef __TRACING_MAP_H
+#define __TRACING_MAP_H
+
+#define TRACING_MAP_BITS_DEFAULT 11
+#define TRACING_MAP_BITS_MAX 17
+#define TRACING_MAP_BITS_MIN 7
+
+#define TRACING_MAP_FIELDS_MAX 4
+#define TRACING_MAP_KEYS_MAX 2
+
+#define TRACING_MAP_SORT_KEYS_MAX 2
+
+typedef int (*tracing_map_cmp_fn_t) (void *val_a, void *val_b);
+
+/*
+ * This is an overview of the tracing_map data structures and how they
+ * relate to the tracing_map API. The details of the algorithms
+ * aren't discussed here - this is just a general overview of the data
+ * structures and how they interact with the API.
+ *
+ * The central data structure of the tracing_map is an initially
+ * zeroed array of struct tracing_map_entry (stored in the map field
+ * of struct tracing_map). tracing_map_entry is a very simple data
+ * structure containing only two fields: a 32-bit unsigned 'key'
+ * variable and a pointer named 'val'. This array of struct
+ * tracing_map_entry is essentially a hash table which will be
+ * modified by a single function, tracing_map_insert(), but which can
+ * be traversed and read by a user at any time (though the user does
+ * this indirectly via an array of tracing_map_sort_entry - see the
+ * explanation of that data structure in the discussion of the
+ * sorting-related data structures below).
+ *
+ * The central function of the tracing_map API is
+ * tracing_map_insert(). tracing_map_insert() hashes the
+ * arbitrarily-sized key passed into it into a 32-bit unsigned key.
+ * It then uses this key, truncated to the array size, as an index
+ * into the array of tracing_map_entries. If the value of the 'key'
+ * field of the tracing_map_entry found at that location is 0, then
+ * that entry is considered to be free and can be claimed, by
+ * replacing the 0 in the 'key' field of the tracing_map_entry with
+ * the new 32-bit hashed key. Once claimed, that tracing_map_entry's
+ * 'val' field is then used to store a unique element which will be
+ * forever associated with that 32-bit hashed key in the
+ * tracing_map_entry.
+ *
+ * That unique element now in the tracing_map_entry's 'val' field is
+ * an instance of tracing_map_elt, where 'elt' in the latter part of
+ * that variable name is short for 'element'. The purpose of a
+ * tracing_map_elt is to hold values specific to the particular
+ * 32-bit hashed key it's associated with: things such as the unique
+ * set of aggregated sums associated with the 32-bit hashed key, along
+ * with a copy of the full key associated with the entry, which was
+ * used to produce the 32-bit hashed key.
+ *
+ * When tracing_map_create() is called to create the tracing map, the
+ * user specifies (indirectly via the map_bits param, the details are
+ * unimportant for this discussion) the maximum number of elements
+ * that the map can hold (stored in the max_elts field of struct
+ * tracing_map). This is the maximum possible number of
+ * tracing_map_entries in the tracing_map_entry array which can be
+ * 'claimed' as described in the above discussion, and therefore is
+ * also the maximum number of tracing_map_elts that can be associated
+ * with the tracing_map_entry array in the tracing_map. Because of
+ * the way the insertion algorithm works, the size of the allocated
+ * tracing_map_entry array is always twice the maximum number of
+ * elements (2 * max_elts). This value is stored in the map_size
+ * field of struct tracing_map.
+ *
+ * Because tracing_map_insert() needs to work from any context,
+ * including from within the memory allocation functions themselves,
+ * both the tracing_map_entry array and a pool of max_elts
+ * tracing_map_elts are pre-allocated before any call is made to
+ * tracing_map_insert().
+ *
+ * The tracing_map_entry array is allocated as a single block by
+ * tracing_map_create().
+ *
+ * Because the tracing_map_elts are much larger objects and can't
+ * generally be allocated together as a single large array without
+ * failure, they're allocated individually, by tracing_map_init().
+ *
+ * The pool of tracing_map_elts is allocated by tracing_map_init()
+ * rather than by tracing_map_create() because at the time
+ * tracing_map_create() is called, there isn't enough information to
+ * create the tracing_map_elts. Specifically, the user first needs to
+ * tell the tracing_map implementation how many fields the
+ * tracing_map_elts contain, and which types of fields they are (key
+ * or sum). The user does this via the tracing_map_add_sum_field()
+ * and tracing_map_add_key_field() functions, following which the user
+ * calls tracing_map_init() to finish up the tracing map setup. The
+ * array holding the pointers which make up the pre-allocated pool of
+ * tracing_map_elts is allocated as a single block and is stored in
+ * the elts field of struct tracing_map.
+ *
+ * There is also a set of structures used for sorting that might
+ * benefit from some minimal explanation.
+ *
+ * struct tracing_map_sort_key is used to drive the sort at any given
+ * time. By 'any given time' we mean that a different
+ * tracing_map_sort_key will be used at different times depending on
+ * whether the sort currently being performed is a primary or a
+ * secondary sort.
+ *
+ * The sort key is very simple, consisting of the field index of the
+ * tracing_map_elt field to sort on (which the user saved when adding
+ * the field), and whether the sort should be done in an ascending or
+ * descending order.
+ *
+ * For the convenience of the sorting code, a tracing_map_sort_entry
+ * is created for each tracing_map_elt, again individually allocated
+ * to avoid failures that might be expected if allocated as a single
+ * large array of struct tracing_map_sort_entry.
+ * tracing_map_sort_entry instances are the objects expected by the
+ * various internal sorting functions, and are also what the user
+ * ultimately receives after calling tracing_map_sort_entries().
+ * Because it doesn't make sense for users to access an unordered and
+ * sparsely populated tracing_map directly, the
+ * tracing_map_sort_entries() function is provided so that users can
+ * retrieve a sorted list of all existing elements. In addition to
+ * the associated tracing_map_elt 'elt' field contained within the
+ * tracing_map_sort_entry, which is the object of interest to the
+ * user, tracing_map_sort_entry objects contain a number of additional
+ * fields which are used for caching and internal purposes and can
+ * safely be ignored.
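+ *
+ * To make the API flow described above concrete, here is a minimal
+ * usage sketch (illustrative only - error handling is elided, and
+ * 'cmp_fn', 'ops' and 'priv_data' stand in for client-supplied
+ * values):
+ *
+ *	map = tracing_map_create(map_bits, key_size, ops, priv_data);
+ *	tracing_map_add_sum_field(map);
+ *	tracing_map_add_key_field(map, 0, cmp_fn);
+ *	tracing_map_init(map);
+ *
+ *	elt = tracing_map_insert(map, key);	(tracing context)
+ *	tracing_map_update_sum(elt, 0, val);
+ *
+ *	n = tracing_map_sort_entries(map, &sort_key, 1, &entries);
+ *	(use entries[0..n-1]->elt, then)
+ *	tracing_map_destroy_sort_entries(entries, n);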
+ */
+
+struct tracing_map_field {
+ tracing_map_cmp_fn_t cmp_fn;
+ union {
+ atomic64_t sum;
+ unsigned int offset;
+ };
+};
+
+struct tracing_map_elt {
+ struct tracing_map *map;
+ struct tracing_map_field *fields;
+ void *key;
+ void *private_data;
+};
+
+struct tracing_map_entry {
+ u32 key;
+ struct tracing_map_elt *val;
+};
+
+struct tracing_map_sort_key {
+ unsigned int field_idx;
+ bool descending;
+};
+
+struct tracing_map_sort_entry {
+ void *key;
+ struct tracing_map_elt *elt;
+ bool elt_copied;
+ bool dup;
+};
+
+struct tracing_map {
+ unsigned int key_size;
+ unsigned int map_bits;
+ unsigned int map_size;
+ unsigned int max_elts;
+ atomic_t next_elt;
+ struct tracing_map_elt **elts;
+ struct tracing_map_entry *map;
+ struct tracing_map_ops *ops;
+ void *private_data;
+ struct tracing_map_field fields[TRACING_MAP_FIELDS_MAX];
+ unsigned int n_fields;
+ int key_idx[TRACING_MAP_KEYS_MAX];
+ unsigned int n_keys;
+ struct tracing_map_sort_key sort_key;
+};
+
+/**
+ * struct tracing_map_ops - callbacks for tracing_map
+ *
+ * The methods in this structure define callback functions for various
+ * operations on a tracing_map or objects related to a tracing_map.
+ *
+ * For a detailed description of tracing_map_elt objects please see
+ * the overview of tracing_map data structures at the beginning of
+ * this file.
+ *
+ * All the methods below are optional.
+ *
+ * @elt_alloc: When a tracing_map_elt is allocated, this function, if
+ * defined, will be called and gives clients the opportunity to
+ * allocate additional data and attach it to the element
+ * (tracing_map_elt->private_data is meant for that purpose).
+ * Element allocation occurs before tracing begins, when the
+ * tracing_map_init() call is made by client code.
+ *
+ * @elt_copy: At certain points in the lifetime of an element, it may
+ * need to be copied. The copy should include a copy of the
+ * client-allocated data, which can be copied into the 'to'
+ * element from the 'from' element.
+ *
+ * @elt_free: When a tracing_map_elt is freed, this function is called
+ * and allows client-allocated per-element data to be freed.
+ *
+ * @elt_clear: This callback allows per-element client-defined data to
+ * be cleared, if applicable.
+ *
+ * @elt_init: This callback allows per-element client-defined data to
+ * be initialized when used, i.e. when the element is actually
+ * claimed by tracing_map_insert() in the context of the map
+ * insertion.
+ */
+struct tracing_map_ops {
+ int (*elt_alloc)(struct tracing_map_elt *elt);
+ void (*elt_copy)(struct tracing_map_elt *to,
+ struct tracing_map_elt *from);
+ void (*elt_free)(struct tracing_map_elt *elt);
+ void (*elt_clear)(struct tracing_map_elt *elt);
+ void (*elt_init)(struct tracing_map_elt *elt);
+};
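+
+/*
+ * As an illustrative (hypothetical) example, a client wanting to hang
+ * its own data off each element might supply something like the
+ * following ('struct my_data' and 'my_elt_alloc' are made-up names):
+ *
+ *	static int my_elt_alloc(struct tracing_map_elt *elt)
+ *	{
+ *		elt->private_data = kzalloc(sizeof(struct my_data),
+ *					    GFP_KERNEL);
+ *		return elt->private_data ? 0 : -ENOMEM;
+ *	}
+ *
+ *	static struct tracing_map_ops my_ops = {
+ *		.elt_alloc = my_elt_alloc,
+ *	};
+ *
+ * which would then be passed to tracing_map_create().
+ */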
+
+extern struct tracing_map *tracing_map_create(unsigned int map_bits,
+ unsigned int key_size,
+ struct tracing_map_ops *ops,
+ void *private_data);
+extern int tracing_map_init(struct tracing_map *map);
+
+extern int tracing_map_add_sum_field(struct tracing_map *map);
+extern int tracing_map_add_key_field(struct tracing_map *map,
+ unsigned int offset,
+ tracing_map_cmp_fn_t cmp_fn);
+
+extern void tracing_map_destroy(struct tracing_map *map);
+extern void tracing_map_clear(struct tracing_map *map);
+
+extern struct tracing_map_elt *
+tracing_map_insert(struct tracing_map *map, void *key);
+
+extern tracing_map_cmp_fn_t tracing_map_cmp_num(int field_size,
+ int field_is_signed);
+extern int tracing_map_cmp_string(void *val_a, void *val_b);
+extern int tracing_map_cmp_none(void *val_a, void *val_b);
+
+extern void tracing_map_update_sum(struct tracing_map_elt *elt,
+ unsigned int i, u64 n);
+extern u64 tracing_map_read_sum(struct tracing_map_elt *elt, unsigned int i);
+extern void tracing_map_set_field_descr(struct tracing_map *map,
+ unsigned int i,
+ unsigned int key_offset,
+ tracing_map_cmp_fn_t cmp_fn);
+extern int
+tracing_map_sort_entries(struct tracing_map *map,
+ struct tracing_map_sort_key *sort_keys,
+ unsigned int n_sort_keys,
+ struct tracing_map_sort_entry ***sort_entries);
+
+extern void
+tracing_map_destroy_sort_entries(struct tracing_map_sort_entry **entries,
+ unsigned int n_entries);
+#endif /* __TRACING_MAP_H */
--
1.9.3

2015-07-16 17:23:39

by Tom Zanussi

[permalink] [raw]
Subject: [PATCH v9 08/22] tracing: Add 'hist' event trigger command

'hist' triggers allow users to continually aggregate trace events,
which can then be viewed afterwards by simply reading a 'hist' file
containing the aggregation in a human-readable format.

The basic idea is very simple and boils down to a mechanism whereby
trace events, rather than being exhaustively dumped in raw form and
viewed directly, are automatically 'compressed' into meaningful tables
completely defined by the user.

This is done strictly via single-line command-line commands and
without the aid of any kind of programming language or interpreter.

A surprising number of typical use cases can be accomplished by users
via this simple mechanism. In fact, a large number of the tasks that
users typically do using the more complicated script-based tracing
tools, at least during the initial stages of an investigation, can be
accomplished by simply specifying a set of keys and values to be used
in the creation of a hash table.

The Linux kernel trace event subsystem happens to provide an extensive
list of keys and values ready-made for such a purpose in the form of
the event format files associated with each trace event. By simply
consulting the format file for field names of interest and by plugging
them into the hist trigger command, users can create an endless number
of useful aggregations to help with investigating various properties
of the system. See Documentation/trace/events.txt for examples.

hist triggers are implemented on top of the existing event trigger
infrastructure, and as such are consistent with the existing triggers
from a user's perspective as well.

The basic syntax follows the existing trigger syntax. Users start an
aggregation by writing a 'hist' trigger to the event of interest's
trigger file:

# echo hist:keys=xxx [ if filter] > event/trigger

Once a hist trigger has been set up, by default it continually
aggregates every matching event into a hash table using the event key
and a value field named 'hitcount'.

To view the aggregation at any point in time, simply read the 'hist'
file in the same directory as the 'trigger' file:

# cat event/hist
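
As a concrete (illustrative) example, aggregating kmalloc hits by
call site and reading back the result might look something like this
(the call sites and counts shown are of course made up):

# echo 'hist:keys=call_site' > events/kmem/kmalloc/trigger
# cat events/kmem/kmalloc/hist

# trigger info: hist:keys=call_site:vals=hitcount:sort=hitcount:size=2048 [active]

{ call_site: 18446744071581750326 } hitcount:        119
{ call_site: 18446744071582964384 } hitcount:       2016

Totals:
    Hits: 2135
    Entries: 2
    Dropped: 0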

The detailed syntax provides additional options for user control, and
is described exhaustively in Documentation/trace/events.txt and in the
virtual tracing/README file in the tracing subsystem.

Signed-off-by: Tom Zanussi <[email protected]>
---
include/linux/trace_events.h | 1 +
kernel/trace/Kconfig | 14 +
kernel/trace/Makefile | 1 +
kernel/trace/trace.c | 29 ++
kernel/trace/trace.h | 7 +
kernel/trace/trace_events.c | 4 +
kernel/trace/trace_events_hist.c | 832 ++++++++++++++++++++++++++++++++++++
kernel/trace/trace_events_trigger.c | 1 +
8 files changed, 889 insertions(+)
create mode 100644 kernel/trace/trace_events_hist.c

diff --git a/include/linux/trace_events.h b/include/linux/trace_events.h
index d9b0f89..0faf48b 100644
--- a/include/linux/trace_events.h
+++ b/include/linux/trace_events.h
@@ -410,6 +410,7 @@ enum event_trigger_type {
ETT_SNAPSHOT = (1 << 1),
ETT_STACKTRACE = (1 << 2),
ETT_EVENT_ENABLE = (1 << 3),
+ ETT_EVENT_HIST = (1 << 4),
};

extern int filter_match_preds(struct event_filter *filter, void *rec);
diff --git a/kernel/trace/Kconfig b/kernel/trace/Kconfig
index 3b9a48a..85f8025 100644
--- a/kernel/trace/Kconfig
+++ b/kernel/trace/Kconfig
@@ -528,6 +528,20 @@ config MMIOTRACE
See Documentation/trace/mmiotrace.txt.
If you are not helping to develop drivers, say N.

+config HIST_TRIGGERS
+ bool "Histogram triggers"
+ depends on ARCH_HAVE_NMI_SAFE_CMPXCHG
+ help
+ Hist triggers allow one or more arbitrary trace event fields
+ to be aggregated into hash tables and dumped to stdout by
+ reading a debugfs/tracefs file. They're useful for
+ gathering quick and dirty (though precise) summaries of
+ event activity as an initial guide for further investigation
+ using more advanced tools.
+
+ See Documentation/trace/events.txt.
+ If in doubt, say N.
+
config MMIOTRACE_TEST
tristate "Test module for mmiotrace"
depends on MMIOTRACE && m
diff --git a/kernel/trace/Makefile b/kernel/trace/Makefile
index 3b26cfb..7faace3 100644
--- a/kernel/trace/Makefile
+++ b/kernel/trace/Makefile
@@ -54,6 +54,7 @@ obj-$(CONFIG_EVENT_TRACING) += trace_event_perf.o
endif
obj-$(CONFIG_EVENT_TRACING) += trace_events_filter.o
obj-$(CONFIG_EVENT_TRACING) += trace_events_trigger.o
+obj-$(CONFIG_HIST_TRIGGERS) += trace_events_hist.o
obj-$(CONFIG_BPF_EVENTS) += bpf_trace.o
obj-$(CONFIG_KPROBE_EVENT) += trace_kprobe.o
obj-$(CONFIG_TRACEPOINTS) += power-traces.o
diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
index abcbf7f..f6fdda2 100644
--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ -3767,6 +3767,9 @@ static const char readme_msg[] =
#ifdef CONFIG_TRACER_SNAPSHOT
"\t\t snapshot\n"
#endif
+#ifdef CONFIG_HIST_TRIGGERS
+ "\t\t hist (see below)\n"
+#endif
"\t example: echo traceoff > events/block/block_unplug/trigger\n"
"\t echo traceoff:3 > events/block/block_unplug/trigger\n"
"\t echo 'enable_event:kmem:kmalloc:3 if nr_rq > 1' > \\\n"
@@ -3782,6 +3785,32 @@ static const char readme_msg[] =
"\t To remove a trigger with a count:\n"
"\t echo '!<trigger>:0 > <system>/<event>/trigger\n"
"\t Filters can be ignored when removing a trigger.\n"
+#ifdef CONFIG_HIST_TRIGGERS
+ " hist trigger\t- If set, event hits are aggregated into a hash table\n"
+ "\t Format: hist:keys=<field1>\n"
+ "\t [:size=#entries]\n"
+ "\t [if <filter>]\n\n"
+ "\t When a matching event is hit, an entry is added to a hash\n"
+ "\t table using the key named. Keys correspond to fields in the\n"
+ "\t event's format description. On an event hit, the value of a\n"
+ "\t sum called 'hitcount' is incremented, which is simply a count\n"
+ "\t of event hits. Keys can be any field.\n\n"
+ "\t Reading the 'hist' file for the event will dump the hash\n"
+ "\t table in its entirety to stdout. Each printed hash table\n"
+ "\t entry is a simple list of the keys and values comprising the\n"
+ "\t entry; keys are printed first and are delineated by curly\n"
+ "\t braces, and are followed by the set of value fields for the\n"
+ "\t entry. Numeric fields are displayed as base-10 integers.\n"
+ "\t By default, the size of the hash table is 2048 entries. The\n"
+ "\t 'size' param can be used to specify more or fewer than that.\n"
+ "\t The units are in terms of hashtable entries - if a run uses\n"
+ "\t more entries than specified, the results will show the number\n"
+ "\t of 'drops', the number of hits that were ignored. The size\n"
+ "\t should be a power of 2 between 128 and 131072 (any non-\n"
+ "\t power-of-2 number specified will be rounded up).\n\n"
+ "\t The entries are sorted by 'hitcount' and the sort order is\n"
+ "\t 'ascending'.\n\n"
+#endif
;

static ssize_t
diff --git a/kernel/trace/trace.h b/kernel/trace/trace.h
index 5e675b2..e6cb781 100644
--- a/kernel/trace/trace.h
+++ b/kernel/trace/trace.h
@@ -1098,6 +1098,13 @@ extern struct mutex event_mutex;
extern struct list_head ftrace_events;

extern const struct file_operations event_trigger_fops;
+extern const struct file_operations event_hist_fops;
+
+#ifdef CONFIG_HIST_TRIGGERS
+extern int register_trigger_hist_cmd(void);
+#else
+static inline int register_trigger_hist_cmd(void) { return 0; }
+#endif

extern int register_trigger_cmds(void);
extern void clear_event_triggers(struct trace_array *tr);
diff --git a/kernel/trace/trace_events.c b/kernel/trace/trace_events.c
index 404a372..2bf0465 100644
--- a/kernel/trace/trace_events.c
+++ b/kernel/trace/trace_events.c
@@ -1628,6 +1628,10 @@ event_create_dir(struct dentry *parent, struct trace_event_file *file)
trace_create_file("trigger", 0644, file->dir, file,
&event_trigger_fops);

+#ifdef CONFIG_HIST_TRIGGERS
+ trace_create_file("hist", 0444, file->dir, file,
+ &event_hist_fops);
+#endif
trace_create_file("format", 0444, file->dir, call,
&ftrace_event_format_fops);

diff --git a/kernel/trace/trace_events_hist.c b/kernel/trace/trace_events_hist.c
new file mode 100644
index 0000000..ded348b
--- /dev/null
+++ b/kernel/trace/trace_events_hist.c
@@ -0,0 +1,832 @@
+/*
+ * trace_events_hist - trace event hist triggers
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * Copyright (C) 2015 Tom Zanussi <[email protected]>
+ */
+
+#include <linux/module.h>
+#include <linux/kallsyms.h>
+#include <linux/mutex.h>
+#include <linux/slab.h>
+#include <linux/stacktrace.h>
+
+#include "tracing_map.h"
+#include "trace.h"
+
+struct hist_field;
+
+typedef u64 (*hist_field_fn_t) (struct hist_field *field, void *event);
+
+struct hist_field {
+ struct ftrace_event_field *field;
+ unsigned long flags;
+ hist_field_fn_t fn;
+ unsigned int size;
+};
+
+static u64 hist_field_counter(struct hist_field *field, void *event)
+{
+ return 1;
+}
+
+static u64 hist_field_string(struct hist_field *hist_field, void *event)
+{
+ char *addr = (char *)(event + hist_field->field->offset);
+
+ return (u64)addr;
+}
+
+#define DEFINE_HIST_FIELD_FN(type) \
+static u64 hist_field_##type(struct hist_field *hist_field, void *event)\
+{ \
+ type *addr = (type *)(event + hist_field->field->offset); \
+ \
+ return (u64)*addr; \
+}
+
+DEFINE_HIST_FIELD_FN(s64);
+DEFINE_HIST_FIELD_FN(u64);
+DEFINE_HIST_FIELD_FN(s32);
+DEFINE_HIST_FIELD_FN(u32);
+DEFINE_HIST_FIELD_FN(s16);
+DEFINE_HIST_FIELD_FN(u16);
+DEFINE_HIST_FIELD_FN(s8);
+DEFINE_HIST_FIELD_FN(u8);
+
+#define HITCOUNT_IDX 0
+#define HIST_KEY_MAX 1
+#define HIST_KEY_SIZE_MAX MAX_FILTER_STR_VAL
+
+enum hist_field_flags {
+ HIST_FIELD_HITCOUNT = 1,
+ HIST_FIELD_KEY = 2,
+ HIST_FIELD_STRING = 4,
+};
+
+struct hist_trigger_attrs {
+ char *keys_str;
+ unsigned int map_bits;
+};
+
+struct hist_trigger_data {
+ atomic64_t total_hits;
+ struct hist_field *fields[TRACING_MAP_FIELDS_MAX];
+ unsigned int n_vals;
+ unsigned int n_keys;
+ unsigned int n_fields;
+ unsigned int key_size;
+ struct tracing_map_sort_key sort_keys[TRACING_MAP_SORT_KEYS_MAX];
+ unsigned int n_sort_keys;
+ struct trace_event_file *event_file;
+ atomic64_t drops;
+ struct hist_trigger_attrs *attrs;
+ struct tracing_map *map;
+};
+
+static hist_field_fn_t select_value_fn(int field_size, int field_is_signed)
+{
+ hist_field_fn_t fn = NULL;
+
+ switch (field_size) {
+ case 8:
+ if (field_is_signed)
+ fn = hist_field_s64;
+ else
+ fn = hist_field_u64;
+ break;
+ case 4:
+ if (field_is_signed)
+ fn = hist_field_s32;
+ else
+ fn = hist_field_u32;
+ break;
+ case 2:
+ if (field_is_signed)
+ fn = hist_field_s16;
+ else
+ fn = hist_field_u16;
+ break;
+ case 1:
+ if (field_is_signed)
+ fn = hist_field_s8;
+ else
+ fn = hist_field_u8;
+ break;
+ }
+
+ return fn;
+}
+
+static int parse_map_size(char *str)
+{
+ unsigned long size, map_bits;
+ int ret;
+
+ strsep(&str, "=");
+ if (!str) {
+ ret = -EINVAL;
+ goto out;
+ }
+
+ ret = kstrtoul(str, 0, &size);
+ if (ret)
+ goto out;
+
+ map_bits = ilog2(roundup_pow_of_two(size));
+ if (map_bits < TRACING_MAP_BITS_MIN ||
+ map_bits > TRACING_MAP_BITS_MAX)
+ ret = -EINVAL;
+ else
+ ret = map_bits;
+ out:
+ return ret;
+}
+
+static void destroy_hist_trigger_attrs(struct hist_trigger_attrs *attrs)
+{
+ kfree(attrs->keys_str);
+ kfree(attrs);
+}
+
+static struct hist_trigger_attrs *parse_hist_trigger_attrs(char *trigger_str)
+{
+ struct hist_trigger_attrs *attrs;
+ int ret = 0;
+
+ attrs = kzalloc(sizeof(*attrs), GFP_KERNEL);
+ if (!attrs)
+ return ERR_PTR(-ENOMEM);
+
+ while (trigger_str) {
+ char *str = strsep(&trigger_str, ":");
+
+ if (!strncmp(str, "keys", strlen("keys")) ||
+ !strncmp(str, "key", strlen("key")))
+ attrs->keys_str = kstrdup(str, GFP_KERNEL);
+ else if (!strncmp(str, "size", strlen("size"))) {
+ int map_bits = parse_map_size(str);
+
+ if (map_bits < 0) {
+ ret = map_bits;
+ goto free;
+ }
+ attrs->map_bits = map_bits;
+ } else {
+ ret = -EINVAL;
+ goto free;
+ }
+ }
+
+ return attrs;
+ free:
+ destroy_hist_trigger_attrs(attrs);
+
+ return ERR_PTR(ret);
+}
+
+static void destroy_hist_field(struct hist_field *hist_field)
+{
+ kfree(hist_field);
+}
+
+static struct hist_field *create_hist_field(struct ftrace_event_field *field,
+ unsigned long flags)
+{
+ struct hist_field *hist_field;
+
+ if (field && is_function_field(field))
+ return NULL;
+
+ hist_field = kzalloc(sizeof(struct hist_field), GFP_KERNEL);
+ if (!hist_field)
+ return NULL;
+
+ if (flags & HIST_FIELD_HITCOUNT) {
+ hist_field->fn = hist_field_counter;
+ goto out;
+ }
+
+ if (is_string_field(field)) {
+ flags |= HIST_FIELD_STRING;
+ hist_field->fn = hist_field_string;
+ } else {
+ hist_field->fn = select_value_fn(field->size,
+ field->is_signed);
+ if (!hist_field->fn) {
+ destroy_hist_field(hist_field);
+ return NULL;
+ }
+ }
+ out:
+ hist_field->field = field;
+ hist_field->flags = flags;
+
+ return hist_field;
+}
+
+static void destroy_hist_fields(struct hist_trigger_data *hist_data)
+{
+ unsigned int i;
+
+ for (i = 0; i < hist_data->n_fields; i++) {
+ destroy_hist_field(hist_data->fields[i]);
+ hist_data->fields[i] = NULL;
+ }
+}
+
+static int create_hitcount_val(struct hist_trigger_data *hist_data)
+{
+ hist_data->fields[HITCOUNT_IDX] =
+ create_hist_field(NULL, HIST_FIELD_HITCOUNT);
+ if (!hist_data->fields[HITCOUNT_IDX])
+ return -ENOMEM;
+
+ hist_data->n_vals++;
+
+ return 0;
+}
+
+static int create_val_fields(struct hist_trigger_data *hist_data,
+ struct trace_event_file *file)
+{
+ int ret;
+
+ ret = create_hitcount_val(hist_data);
+
+ return ret;
+}
+
+static int create_key_field(struct hist_trigger_data *hist_data,
+ unsigned int key_idx,
+ struct trace_event_file *file,
+ char *field_str)
+{
+ struct ftrace_event_field *field = NULL;
+ unsigned long flags = 0;
+ unsigned int key_size;
+ int ret = 0;
+
+ flags |= HIST_FIELD_KEY;
+
+ field = trace_find_event_field(file->event_call, field_str);
+ if (!field) {
+ ret = -EINVAL;
+ goto out;
+ }
+
+ key_size = field->size;
+
+ hist_data->fields[key_idx] = create_hist_field(field, flags);
+ if (!hist_data->fields[key_idx]) {
+ ret = -ENOMEM;
+ goto out;
+ }
+
+ key_size = ALIGN(key_size, sizeof(u64));
+ hist_data->fields[key_idx]->size = key_size;
+ hist_data->key_size = key_size;
+ if (hist_data->key_size > HIST_KEY_SIZE_MAX) {
+ ret = -EINVAL;
+ goto out;
+ }
+
+ hist_data->n_keys++;
+ ret = key_size;
+ out:
+ return ret;
+}
+
+static int create_key_fields(struct hist_trigger_data *hist_data,
+ struct trace_event_file *file)
+{
+ unsigned int i, n_vals = hist_data->n_vals;
+ char *fields_str, *field_str;
+ int ret = -EINVAL;
+
+ fields_str = hist_data->attrs->keys_str;
+ if (!fields_str)
+ goto out;
+
+ strsep(&fields_str, "=");
+ if (!fields_str)
+ goto out;
+
+ for (i = n_vals; i < n_vals + HIST_KEY_MAX; i++) {
+ field_str = strsep(&fields_str, ",");
+ if (!field_str)
+ break;
+ ret = create_key_field(hist_data, i, file, field_str);
+ if (ret < 0)
+ goto out;
+ }
+ if (fields_str) {
+ ret = -EINVAL;
+ goto out;
+ }
+ ret = 0;
+ out:
+ return ret;
+}
+
+static int create_hist_fields(struct hist_trigger_data *hist_data,
+ struct trace_event_file *file)
+{
+ int ret;
+
+ ret = create_val_fields(hist_data, file);
+ if (ret)
+ goto out;
+
+ ret = create_key_fields(hist_data, file);
+ if (ret)
+ goto out;
+
+ hist_data->n_fields = hist_data->n_vals + hist_data->n_keys;
+ out:
+ return ret;
+}
+
+static int create_sort_keys(struct hist_trigger_data *hist_data)
+{
+ int ret = 0;
+
+ hist_data->n_sort_keys = 1; /* sort_keys[0] is always hitcount */
+
+ return ret;
+}
+
+static void destroy_hist_data(struct hist_trigger_data *hist_data)
+{
+ destroy_hist_trigger_attrs(hist_data->attrs);
+ destroy_hist_fields(hist_data);
+ tracing_map_destroy(hist_data->map);
+ kfree(hist_data);
+}
+
+static int create_tracing_map_fields(struct hist_trigger_data *hist_data)
+{
+ struct tracing_map *map = hist_data->map;
+ struct ftrace_event_field *field;
+ struct hist_field *hist_field;
+ unsigned int i;
+ int idx;
+
+ for (i = 0; i < hist_data->n_fields; i++) {
+ hist_field = hist_data->fields[i];
+ if (hist_field->flags & HIST_FIELD_KEY) {
+ tracing_map_cmp_fn_t cmp_fn;
+
+ field = hist_field->field;
+
+ if (is_string_field(field))
+ cmp_fn = tracing_map_cmp_string;
+ else
+ cmp_fn = tracing_map_cmp_num(field->size,
+ field->is_signed);
+ idx = tracing_map_add_key_field(map, 0, cmp_fn);
+ } else
+ idx = tracing_map_add_sum_field(map);
+
+ if (idx < 0)
+ return idx;
+ }
+
+ return 0;
+}
+
+static struct hist_trigger_data *
+create_hist_data(unsigned int map_bits,
+ struct hist_trigger_attrs *attrs,
+ struct trace_event_file *file)
+{
+ struct hist_trigger_data *hist_data;
+ int ret = 0;
+
+ hist_data = kzalloc(sizeof(*hist_data), GFP_KERNEL);
+ if (!hist_data)
+ return ERR_PTR(-ENOMEM);
+
+ hist_data->attrs = attrs;
+
+ ret = create_hist_fields(hist_data, file);
+ if (ret < 0)
+ goto free;
+
+ ret = create_sort_keys(hist_data);
+ if (ret < 0)
+ goto free;
+
+ hist_data->map = tracing_map_create(map_bits, hist_data->key_size,
+ NULL, hist_data);
+ if (IS_ERR(hist_data->map)) {
+ ret = PTR_ERR(hist_data->map);
+ hist_data->map = NULL;
+ goto free;
+ }
+
+ ret = create_tracing_map_fields(hist_data);
+ if (ret)
+ goto free;
+
+ ret = tracing_map_init(hist_data->map);
+ if (ret)
+ goto free;
+
+ hist_data->event_file = file;
+ out:
+ return hist_data;
+ free:
+ destroy_hist_data(hist_data);
+ if (ret)
+ hist_data = ERR_PTR(ret);
+ else
+ hist_data = NULL;
+
+ goto out;
+}
+
+static void hist_trigger_elt_update(struct hist_trigger_data *hist_data,
+ struct tracing_map_elt *elt,
+ void *rec)
+{
+ struct hist_field *hist_field;
+ unsigned int i;
+ u64 hist_val;
+
+ for (i = 0; i < hist_data->n_vals; i++) {
+ hist_field = hist_data->fields[i];
+ hist_val = hist_field->fn(hist_field, rec);
+ tracing_map_update_sum(elt, i, hist_val);
+ }
+}
+
+static void event_hist_trigger(struct event_trigger_data *data, void *rec)
+{
+ struct hist_trigger_data *hist_data = data->private_data;
+ struct hist_field *key_field;
+ struct tracing_map_elt *elt;
+ u64 field_contents;
+ void *key = NULL;
+ unsigned int i;
+
+ if (atomic64_read(&hist_data->drops)) {
+ atomic64_inc(&hist_data->drops);
+ return;
+ }
+
+ for (i = hist_data->n_vals; i < hist_data->n_fields; i++) {
+ key_field = hist_data->fields[i];
+
+ field_contents = key_field->fn(key_field, rec);
+ if (key_field->flags & HIST_FIELD_STRING)
+ key = (void *)field_contents;
+ else
+ key = (void *)&field_contents;
+ }
+
+ elt = tracing_map_insert(hist_data->map, key);
+ if (elt)
+ hist_trigger_elt_update(hist_data, elt, rec);
+ else
+ atomic64_inc(&hist_data->drops);
+
+ atomic64_inc(&hist_data->total_hits);
+}
+
+static void
+hist_trigger_entry_print(struct seq_file *m,
+ struct hist_trigger_data *hist_data, void *key,
+ struct tracing_map_elt *elt)
+{
+ struct hist_field *key_field;
+ unsigned int i;
+ u64 uval;
+
+ seq_puts(m, "{ ");
+
+ for (i = hist_data->n_vals; i < hist_data->n_fields; i++) {
+ key_field = hist_data->fields[i];
+
+ if (i > hist_data->n_vals)
+ seq_puts(m, ", ");
+
+ if (key_field->flags & HIST_FIELD_STRING) {
+ seq_printf(m, "%s: %-35s", key_field->field->name,
+ (char *)key);
+ } else {
+ uval = *(u64 *)key;
+ seq_printf(m, "%s: %10llu",
+ key_field->field->name, uval);
+ }
+ }
+
+ seq_puts(m, " }");
+
+ seq_printf(m, " hitcount: %10llu",
+ tracing_map_read_sum(elt, HITCOUNT_IDX));
+
+ seq_puts(m, "\n");
+}
+
+static int print_entries(struct seq_file *m,
+ struct hist_trigger_data *hist_data)
+{
+ struct tracing_map_sort_entry **sort_entries = NULL;
+ struct tracing_map *map = hist_data->map;
+ unsigned int i;
+ int n_entries;
+
+ n_entries = tracing_map_sort_entries(map, hist_data->sort_keys,
+ hist_data->n_sort_keys,
+ &sort_entries);
+ if (n_entries < 0)
+ return n_entries;
+
+ for (i = 0; i < n_entries; i++)
+ hist_trigger_entry_print(m, hist_data,
+ sort_entries[i]->key,
+ sort_entries[i]->elt);
+
+ tracing_map_destroy_sort_entries(sort_entries, n_entries);
+
+ return n_entries;
+}
+
+static int hist_show(struct seq_file *m, void *v)
+{
+ struct event_trigger_data *test, *data = NULL;
+ struct trace_event_file *event_file;
+ struct hist_trigger_data *hist_data;
+ int n_entries, ret = 0;
+
+ mutex_lock(&event_mutex);
+
+ event_file = event_file_data(m->private);
+ if (unlikely(!event_file)) {
+ ret = -ENODEV;
+ goto out_unlock;
+ }
+
+ list_for_each_entry_rcu(test, &event_file->triggers, list) {
+ if (test->cmd_ops->trigger_type == ETT_EVENT_HIST) {
+ data = test;
+ break;
+ }
+ }
+ if (!data)
+ goto out_unlock;
+
+ seq_puts(m, "# trigger info: ");
+ data->ops->print(m, data->ops, data);
+ seq_puts(m, "\n");
+
+ hist_data = data->private_data;
+ n_entries = print_entries(m, hist_data);
+ if (n_entries < 0) {
+ ret = n_entries;
+ n_entries = 0;
+ }
+
+ seq_printf(m, "\nTotals:\n Hits: %lu\n Entries: %u\n Dropped: %lu\n",
+ atomic64_read(&hist_data->total_hits),
+ n_entries, atomic64_read(&hist_data->drops));
+ out_unlock:
+ mutex_unlock(&event_mutex);
+
+ return ret;
+}
+
+static int event_hist_open(struct inode *inode, struct file *file)
+{
+ return single_open(file, hist_show, file);
+}
+
+const struct file_operations event_hist_fops = {
+ .open = event_hist_open,
+ .read = seq_read,
+ .llseek = seq_lseek,
+ .release = single_release,
+};
+
+static void hist_field_print(struct seq_file *m, struct hist_field *hist_field)
+{
+ seq_printf(m, "%s", hist_field->field->name);
+}
+
+static int event_hist_trigger_print(struct seq_file *m,
+ struct event_trigger_ops *ops,
+ struct event_trigger_data *data)
+{
+ struct hist_trigger_data *hist_data = data->private_data;
+ struct hist_field *key_field;
+ unsigned int i;
+
+ seq_puts(m, "hist:keys=");
+
+ for (i = hist_data->n_vals; i < hist_data->n_fields; i++) {
+ key_field = hist_data->fields[i];
+
+ if (i > hist_data->n_vals)
+ seq_puts(m, ",");
+
+ hist_field_print(m, key_field);
+ }
+
+ seq_puts(m, ":vals=");
+ seq_puts(m, "hitcount");
+
+ seq_puts(m, ":sort=");
+ seq_puts(m, "hitcount");
+
+ seq_printf(m, ":size=%u", (1 << hist_data->map->map_bits));
+
+ if (data->filter_str)
+ seq_printf(m, " if %s", data->filter_str);
+
+ seq_puts(m, " [active]");
+
+ seq_putc(m, '\n');
+
+ return 0;
+}
+
+static void event_hist_trigger_free(struct event_trigger_ops *ops,
+ struct event_trigger_data *data)
+{
+ struct hist_trigger_data *hist_data = data->private_data;
+
+ if (WARN_ON_ONCE(data->ref <= 0))
+ return;
+
+ data->ref--;
+ if (!data->ref) {
+ trigger_data_free(data);
+ destroy_hist_data(hist_data);
+ }
+}
+
+static struct event_trigger_ops event_hist_trigger_ops = {
+ .func = event_hist_trigger,
+ .print = event_hist_trigger_print,
+ .init = event_trigger_init,
+ .free = event_hist_trigger_free,
+};
+
+static struct event_trigger_ops *event_hist_get_trigger_ops(char *cmd,
+ char *param)
+{
+ return &event_hist_trigger_ops;
+}
+
+static int hist_register_trigger(char *glob, struct event_trigger_ops *ops,
+ struct event_trigger_data *data,
+ struct trace_event_file *file)
+{
+ struct event_trigger_data *test;
+ int ret = 0;
+
+ list_for_each_entry_rcu(test, &file->triggers, list) {
+ if (test->cmd_ops->trigger_type == ETT_EVENT_HIST) {
+ ret = -EEXIST;
+ goto out;
+ }
+ }
+
+ if (data->ops->init) {
+ ret = data->ops->init(data->ops, data);
+ if (ret < 0)
+ goto out;
+ }
+
+ list_add_rcu(&data->list, &file->triggers);
+ ret++;
+
+ update_cond_flag(file);
+ if (trace_event_trigger_enable_disable(file, 1) < 0) {
+ list_del_rcu(&data->list);
+ update_cond_flag(file);
+ ret--;
+ }
+ out:
+ return ret;
+}
+
+static int event_hist_trigger_func(struct event_command *cmd_ops,
+ struct trace_event_file *file,
+ char *glob, char *cmd, char *param)
+{
+ unsigned int hist_trigger_bits = TRACING_MAP_BITS_DEFAULT;
+ struct event_trigger_data *trigger_data;
+ struct hist_trigger_attrs *attrs;
+ struct event_trigger_ops *trigger_ops;
+ struct hist_trigger_data *hist_data;
+ char *trigger;
+ int ret = 0;
+
+ if (!param)
+ return -EINVAL;
+
+ /* separate the trigger from the filter (k:v [if filter]) */
+ trigger = strsep(&param, " \t");
+ if (!trigger)
+ return -EINVAL;
+
+ attrs = parse_hist_trigger_attrs(trigger);
+ if (IS_ERR(attrs))
+ return PTR_ERR(attrs);
+
+ if (!attrs->keys_str) {
+ destroy_hist_trigger_attrs(attrs);
+ return -EINVAL;
+ }
+
+ if (attrs->map_bits)
+ hist_trigger_bits = attrs->map_bits;
+
+ hist_data = create_hist_data(hist_trigger_bits, attrs, file);
+ if (IS_ERR(hist_data))
+ return PTR_ERR(hist_data);
+
+ trigger_ops = cmd_ops->get_trigger_ops(cmd, trigger);
+
+ ret = -ENOMEM;
+ trigger_data = kzalloc(sizeof(*trigger_data), GFP_KERNEL);
+ if (!trigger_data)
+ goto out;
+
+ trigger_data->count = -1;
+ trigger_data->ops = trigger_ops;
+ trigger_data->cmd_ops = cmd_ops;
+
+ INIT_LIST_HEAD(&trigger_data->list);
+ RCU_INIT_POINTER(trigger_data->filter, NULL);
+
+ trigger_data->private_data = hist_data;
+
+ if (glob[0] == '!') {
+ cmd_ops->unreg(glob+1, trigger_ops, trigger_data, file);
+ ret = 0;
+ goto out_free;
+ }
+
+ if (!param) /* if param is non-empty, it's supposed to be a filter */
+ goto out_reg;
+
+ if (!cmd_ops->set_filter)
+ goto out_reg;
+
+ ret = cmd_ops->set_filter(param, trigger_data, file);
+ if (ret < 0)
+ goto out_free;
+ out_reg:
+ ret = cmd_ops->reg(glob, trigger_ops, trigger_data, file);
+ /*
+ * The above returns on success the # of triggers registered,
+ * but if it didn't register any it returns zero. Consider no
+ * triggers registered a failure too.
+ */
+ if (!ret) {
+ ret = -ENOENT;
+ goto out_free;
+ } else if (ret < 0)
+ goto out_free;
+ /* Just return zero, not the number of registered triggers */
+ ret = 0;
+ out:
+ return ret;
+ out_free:
+ if (cmd_ops->set_filter)
+ cmd_ops->set_filter(NULL, trigger_data, NULL);
+
+ kfree(trigger_data);
+
+ destroy_hist_data(hist_data);
+ goto out;
+}
+
+static struct event_command trigger_hist_cmd = {
+ .name = "hist",
+ .trigger_type = ETT_EVENT_HIST,
+ .post_trigger = true, /* need non-NULL rec */
+ .func = event_hist_trigger_func,
+ .reg = hist_register_trigger,
+ .unreg = unregister_trigger,
+ .get_trigger_ops = event_hist_get_trigger_ops,
+ .set_filter = set_trigger_filter,
+};
+
+__init int register_trigger_hist_cmd(void)
+{
+ int ret;
+
+ ret = register_event_command(&trigger_hist_cmd);
+ WARN_ON(ret < 0);
+
+ return ret;
+}
diff --git a/kernel/trace/trace_events_trigger.c b/kernel/trace/trace_events_trigger.c
index 5f632ff..e80f30b 100644
--- a/kernel/trace/trace_events_trigger.c
+++ b/kernel/trace/trace_events_trigger.c
@@ -1437,6 +1437,7 @@ __init int register_trigger_cmds(void)
register_trigger_snapshot_cmd();
register_trigger_stacktrace_cmd();
register_trigger_enable_disable_cmds();
+ register_trigger_hist_cmd();

return 0;
}
--
1.9.3

2015-07-16 17:27:22

by Tom Zanussi

[permalink] [raw]
Subject: [PATCH v9 09/22] tracing: Add hist trigger support for multiple values ('vals=' param)

Allow users to specify trace event fields to use in aggregated sums
via a new 'vals=' keyword. Before this addition, the only aggregated
sum supported was the implied value 'hitcount'. With this addition,
'hitcount' is also supported as an explicit value field, as is any
numeric trace event field.

This expands the hist trigger syntax from this:

# echo hist:keys=xxx [ if filter] > event/trigger

to this:

# echo hist:keys=xxx:vals=yyy [ if filter] > event/trigger
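
For example (illustrative), to additionally sum the bytes requested
and bytes allocated for each kmalloc call site:

# echo 'hist:keys=call_site:vals=bytes_req,bytes_alloc' \
> events/kmem/kmalloc/trigger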

Signed-off-by: Tom Zanussi <[email protected]>
---
kernel/trace/trace.c | 11 ++++--
kernel/trace/trace_events_hist.c | 75 +++++++++++++++++++++++++++++++++++++++-
2 files changed, 82 insertions(+), 4 deletions(-)

diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
index f6fdda2..8109b89 100644
--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ -3788,12 +3788,17 @@ static const char readme_msg[] =
#ifdef CONFIG_HIST_TRIGGERS
" hist trigger\t- If set, event hits are aggregated into a hash table\n"
"\t Format: hist:keys=<field1>\n"
+ "\t [:values=<field1[,field2,...]]\n"
"\t [:size=#entries]\n"
"\t [if <filter>]\n\n"
"\t When a matching event is hit, an entry is added to a hash\n"
- "\t table using the key named. Keys correspond to fields in the\n"
- "\t event's format description. On an event hit, the value of a\n"
- "\t sum called 'hitcount' is incremented, which is simply a count\n"
+ "\t table using the key and value(s) named. Keys and values\n"
+ "\t correspond to fields in the event's format description.\n"
+ "\t Values must correspond to numeric fields - on an event hit,\n"
+ "\t the value(s) will be added to a sum kept for that field.\n"
+ "\t The special string 'hitcount' can be used in place of an\n"
+ "\t explicit value field - this is simply a count of event hits.\n"
+ "\t If 'values' is not specified, 'hitcount' will be assumed.\n"
"\t of event hits. Keys can be any field.\n\n"
"\t Reading the 'hist' file for the event will dump the hash\n"
"\t table in its entirety to stdout. Each printed hash table\n"
diff --git a/kernel/trace/trace_events_hist.c b/kernel/trace/trace_events_hist.c
index ded348b..503df07 100644
--- a/kernel/trace/trace_events_hist.c
+++ b/kernel/trace/trace_events_hist.c
@@ -75,6 +75,7 @@ enum hist_field_flags {

struct hist_trigger_attrs {
char *keys_str;
+ char *vals_str;
unsigned int map_bits;
};

@@ -155,6 +156,7 @@ static int parse_map_size(char *str)
static void destroy_hist_trigger_attrs(struct hist_trigger_attrs *attrs)
{
kfree(attrs->keys_str);
+ kfree(attrs->vals_str);
kfree(attrs);
}

@@ -173,6 +175,10 @@ static struct hist_trigger_attrs *parse_hist_trigger_attrs(char *trigger_str)
if (!strncmp(str, "keys", strlen("keys")) ||
!strncmp(str, "key", strlen("key")))
attrs->keys_str = kstrdup(str, GFP_KERNEL);
+ else if (!strncmp(str, "values", strlen("values")) ||
+ !strncmp(str, "vals", strlen("vals")) ||
+ !strncmp(str, "val", strlen("val")))
+ attrs->vals_str = kstrdup(str, GFP_KERNEL);
else if (!strncmp(str, "size", strlen("size"))) {
int map_bits = parse_map_size(str);

@@ -256,13 +262,66 @@ static int create_hitcount_val(struct hist_trigger_data *hist_data)
return 0;
}

+static int create_val_field(struct hist_trigger_data *hist_data,
+ unsigned int val_idx,
+ struct trace_event_file *file,
+ char *field_str)
+{
+ struct ftrace_event_field *field = NULL;
+ unsigned long flags = 0;
+ int ret = 0;
+
+ field = trace_find_event_field(file->event_call, field_str);
+ if (!field) {
+ ret = -EINVAL;
+ goto out;
+ }
+
+ hist_data->fields[val_idx] = create_hist_field(field, flags);
+ if (!hist_data->fields[val_idx]) {
+ ret = -ENOMEM;
+ goto out;
+ }
+ hist_data->n_vals++;
+ out:
+ return ret;
+}
+
static int create_val_fields(struct hist_trigger_data *hist_data,
struct trace_event_file *file)
{
+ unsigned int vals_max = TRACING_MAP_FIELDS_MAX - TRACING_MAP_KEYS_MAX;
+ char *fields_str, *field_str;
+ unsigned int i, j;
int ret;

ret = create_hitcount_val(hist_data);
+ if (ret)
+ goto out;

+ fields_str = hist_data->attrs->vals_str;
+ if (!fields_str)
+ goto out;
+
+ strsep(&fields_str, "=");
+ if (!fields_str)
+ goto out;
+
+ vals_max = TRACING_MAP_FIELDS_MAX - TRACING_MAP_KEYS_MAX;
+
+ for (i = 0, j = 1; i < vals_max; i++) {
+ field_str = strsep(&fields_str, ",");
+ if (!field_str)
+ break;
+ if (!strcmp(field_str, "hitcount"))
+ continue;
+ ret = create_val_field(hist_data, j++, file, field_str);
+ if (ret)
+ goto out;
+ }
+ if (fields_str)
+ ret = -EINVAL;
+ out:
return ret;
}

@@ -534,6 +593,12 @@ hist_trigger_entry_print(struct seq_file *m,
seq_printf(m, " hitcount: %10llu",
tracing_map_read_sum(elt, HITCOUNT_IDX));

+ for (i = 1; i < hist_data->n_vals; i++) {
+ seq_printf(m, " %s: %10llu",
+ hist_data->fields[i]->field->name,
+ tracing_map_read_sum(elt, i));
+ }
+
seq_puts(m, "\n");
}

@@ -641,7 +706,15 @@ static int event_hist_trigger_print(struct seq_file *m,
}

seq_puts(m, ":vals=");
- seq_puts(m, "hitcount");
+
+ for (i = 0; i < hist_data->n_vals; i++) {
+ if (i == 0)
+ seq_puts(m, "hitcount");
+ else {
+ seq_puts(m, ",");
+ hist_field_print(m, hist_data->fields[i]);
+ }
+ }

seq_puts(m, ":sort=");
seq_puts(m, "hitcount");
--
1.9.3

2015-07-16 17:23:48

by Tom Zanussi

[permalink] [raw]
Subject: [PATCH v9 10/22] tracing: Add hist trigger support for compound keys

Allow users to specify multiple trace event fields to use in keys by
allowing multiple fields in the 'keys=' keyword. With this addition,
any unique combination of any of the fields named in the 'keys'
keyword will result in a new entry being added to the hash table.
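
For example (illustrative), to aggregate sched_wakeup events over the
unique combinations of the 'pid' and 'prio' fields, with a separate
hash table entry created for each distinct pid/prio pair:

# echo 'hist:keys=pid,prio' > events/sched/sched_wakeup/trigger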

Signed-off-by: Tom Zanussi <[email protected]>
---
kernel/trace/trace.c | 8 +++++---
kernel/trace/trace_events_hist.c | 40 ++++++++++++++++++++++++++++++----------
2 files changed, 35 insertions(+), 13 deletions(-)

diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
index 8109b89..1e4801e 100644
--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ -3787,19 +3787,21 @@ static const char readme_msg[] =
"\t Filters can be ignored when removing a trigger.\n"
#ifdef CONFIG_HIST_TRIGGERS
" hist trigger\t- If set, event hits are aggregated into a hash table\n"
- "\t Format: hist:keys=<field1>\n"
+ "\t Format: hist:keys=<field1>[,field2,...]\n"
"\t [:values=<field1[,field2,...]]\n"
"\t [:size=#entries]\n"
"\t [if <filter>]\n\n"
"\t When a matching event is hit, an entry is added to a hash\n"
- "\t table using the key and value(s) named. Keys and values\n"
+ "\t table using the key(s) and value(s) named. Keys and values\n"
"\t correspond to fields in the event's format description.\n"
"\t Values must correspond to numeric fields - on an event hit,\n"
"\t the value(s) will be added to a sum kept for that field.\n"
"\t The special string 'hitcount' can be used in place of an\n"
"\t explicit value field - this is simply a count of event hits.\n"
"\t If 'values' is not specified, 'hitcount' will be assumed.\n"
- "\t of event hits. Keys can be any field.\n\n"
+ "\t of event hits. Keys can be any field. Compound keys\n"
+ "\t consisting of up to two fields can be specified by the 'keys'\n"
+ "\t keyword.\n\n"
"\t Reading the 'hist' file for the event will dump the hash\n"
"\t table in its entirety to stdout. Each printed hash table\n"
"\t entry is a simple list of the keys and values comprising the\n"
diff --git a/kernel/trace/trace_events_hist.c b/kernel/trace/trace_events_hist.c
index 503df07..3d5433a 100644
--- a/kernel/trace/trace_events_hist.c
+++ b/kernel/trace/trace_events_hist.c
@@ -32,6 +32,7 @@ struct hist_field {
unsigned long flags;
hist_field_fn_t fn;
unsigned int size;
+ unsigned int offset;
};

static u64 hist_field_counter(struct hist_field *field, void *event)
@@ -64,8 +65,8 @@ DEFINE_HIST_FIELD_FN(s8);
DEFINE_HIST_FIELD_FN(u8);

#define HITCOUNT_IDX 0
-#define HIST_KEY_MAX 1
-#define HIST_KEY_SIZE_MAX MAX_FILTER_STR_VAL
+#define HIST_KEY_MAX 2
+#define HIST_KEY_SIZE_MAX (MAX_FILTER_STR_VAL + sizeof(u64))

enum hist_field_flags {
HIST_FIELD_HITCOUNT = 1,
@@ -327,6 +328,7 @@ static int create_val_fields(struct hist_trigger_data *hist_data,

static int create_key_field(struct hist_trigger_data *hist_data,
unsigned int key_idx,
+ unsigned int key_offset,
struct trace_event_file *file,
char *field_str)
{
@@ -353,7 +355,8 @@ static int create_key_field(struct hist_trigger_data *hist_data,

key_size = ALIGN(key_size, sizeof(u64));
hist_data->fields[key_idx]->size = key_size;
- hist_data->key_size = key_size;
+ hist_data->fields[key_idx]->offset = key_offset;
+ hist_data->key_size += key_size;
if (hist_data->key_size > HIST_KEY_SIZE_MAX) {
ret = -EINVAL;
goto out;
@@ -368,7 +371,7 @@ static int create_key_field(struct hist_trigger_data *hist_data,
static int create_key_fields(struct hist_trigger_data *hist_data,
struct trace_event_file *file)
{
- unsigned int i, n_vals = hist_data->n_vals;
+ unsigned int i, key_offset = 0, n_vals = hist_data->n_vals;
char *fields_str, *field_str;
int ret = -EINVAL;

@@ -384,9 +387,11 @@ static int create_key_fields(struct hist_trigger_data *hist_data,
field_str = strsep(&fields_str, ",");
if (!field_str)
break;
- ret = create_key_field(hist_data, i, file, field_str);
+ ret = create_key_field(hist_data, i, key_offset,
+ file, field_str);
if (ret < 0)
goto out;
+ key_offset += ret;
}
if (fields_str) {
ret = -EINVAL;
@@ -451,7 +456,10 @@ static int create_tracing_map_fields(struct hist_trigger_data *hist_data)
else
cmp_fn = tracing_map_cmp_num(field->size,
field->is_signed);
- idx = tracing_map_add_key_field(map, 0, cmp_fn);
+ idx = tracing_map_add_key_field(map,
+ hist_field->offset,
+ cmp_fn);
+
} else
idx = tracing_map_add_sum_field(map);

@@ -531,6 +539,7 @@ static void hist_trigger_elt_update(struct hist_trigger_data *hist_data,
static void event_hist_trigger(struct event_trigger_data *data, void *rec)
{
struct hist_trigger_data *hist_data = data->private_data;
+ char compound_key[HIST_KEY_SIZE_MAX];
struct hist_field *key_field;
struct tracing_map_elt *elt;
u64 field_contents;
@@ -542,6 +551,9 @@ static void event_hist_trigger(struct event_trigger_data *data, void *rec)
return;
}

+ if (hist_data->n_keys > 1)
+ memset(compound_key, 0, hist_data->key_size);
+
for (i = hist_data->n_vals; i < hist_data->n_fields; i++) {
key_field = hist_data->fields[i];

@@ -550,8 +562,16 @@ static void event_hist_trigger(struct event_trigger_data *data, void *rec)
key = (void *)field_contents;
else
key = (void *)&field_contents;
+
+ if (hist_data->n_keys > 1) {
+ memcpy(compound_key + key_field->offset, key,
+ key_field->size);
+ }
}

+ if (hist_data->n_keys > 1)
+ key = compound_key;
+
elt = tracing_map_insert(hist_data->map, key);
if (elt)
hist_trigger_elt_update(hist_data, elt, rec);
@@ -580,11 +600,11 @@ hist_trigger_entry_print(struct seq_file *m,

if (key_field->flags & HIST_FIELD_STRING) {
seq_printf(m, "%s: %-35s", key_field->field->name,
- (char *)key);
+ (char *)(key + key_field->offset));
} else {
- uval = *(u64 *)key;
- seq_printf(m, "%s: %10llu",
- key_field->field->name, uval);
+ uval = *(u64 *)(key + key_field->offset);
+ seq_printf(m, "%s: %10llu", key_field->field->name,
+ uval);
}
}

--
1.9.3

2015-07-16 17:23:31

by Tom Zanussi

[permalink] [raw]
Subject: [PATCH v9 11/22] tracing: Add hist trigger support for user-defined sorting ('sort=' param)

Allow users to specify keys and/or values to sort on. With this
addition, keys and values specified using the 'keys=' and 'vals='
keywords can be used to sort the hist trigger output via a new 'sort='
keyword. If multiple sort keys are specified, the output will be
sorted using the second key as a secondary sort key, etc. The default
sort order is ascending; if the user wants a different sort order,
'.descending' can be appended to the specific sort key. Before this
addition, output was always sorted by 'hitcount' in ascending order.

This expands the hist trigger syntax from this:

# echo hist:keys=xxx:vals=yyy \
[ if filter] > event/trigger

to this:

# echo hist:keys=xxx:vals=yyy:sort=zzz.descending \
[ if filter] > event/trigger
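
For example (illustrative), to list the kmalloc call sites requesting
the most bytes first:

# echo 'hist:keys=call_site:vals=bytes_req:sort=bytes_req.descending' \
> events/kmem/kmalloc/trigger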

Signed-off-by: Tom Zanussi <[email protected]>
---
kernel/trace/trace.c | 10 ++--
kernel/trace/trace_events_hist.c | 101 ++++++++++++++++++++++++++++++++++++++-
2 files changed, 107 insertions(+), 4 deletions(-)

diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
index 1e4801e..5dd1fc4 100644
--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ -3789,6 +3789,7 @@ static const char readme_msg[] =
" hist trigger\t- If set, event hits are aggregated into a hash table\n"
"\t Format: hist:keys=<field1>[,field2,...]\n"
"\t [:values=<field1[,field2,...]]\n"
+ "\t [:sort=field1,field2,...]\n"
"\t [:size=#entries]\n"
"\t [if <filter>]\n\n"
"\t When a matching event is hit, an entry is added to a hash\n"
@@ -3801,7 +3802,8 @@ static const char readme_msg[] =
"\t If 'values' is not specified, 'hitcount' will be assumed.\n"
"\t of event hits. Keys can be any field. Compound keys\n"
"\t consisting of up to two fields can be specified by the 'keys'\n"
- "\t keyword.\n\n"
+ "\t keyword. Additionally, sort keys consisting of up to two\n"
+ "\t fields can be specified by the 'sort' keyword.\n\n"
"\t Reading the 'hist' file for the event will dump the hash\n"
"\t table in its entirety to stdout. Each printed hash table\n"
"\t entry is a simple list of the keys and values comprising the\n"
@@ -3815,8 +3817,10 @@ static const char readme_msg[] =
"\t of 'drops', the number of hits that were ignored. The size\n"
"\t should be a power of 2 between 128 and 131072 (any non-\n"
"\t power-of-2 number specified will be rounded up).\n\n"
- "\t The entries are sorted by 'hitcount' and the sort order is\n"
- "\t 'ascending'.\n\n"
+ "\t The 'sort' param can be used to specify a value field to sort\n"
+ "\t on. The default if unspecified is 'hitcount' and the.\n"
+ "\t default sort order is 'ascending'. To sort in the opposite\n"
+ "\t direction, append .descending' to the sort key.\n\n"
#endif
;

diff --git a/kernel/trace/trace_events_hist.c b/kernel/trace/trace_events_hist.c
index 3d5433a..6bf224f 100644
--- a/kernel/trace/trace_events_hist.c
+++ b/kernel/trace/trace_events_hist.c
@@ -77,6 +77,7 @@ enum hist_field_flags {
struct hist_trigger_attrs {
char *keys_str;
char *vals_str;
+ char *sort_key_str;
unsigned int map_bits;
};

@@ -156,6 +157,7 @@ static int parse_map_size(char *str)

static void destroy_hist_trigger_attrs(struct hist_trigger_attrs *attrs)
{
+ kfree(attrs->sort_key_str);
kfree(attrs->keys_str);
kfree(attrs->vals_str);
kfree(attrs);
@@ -180,6 +182,8 @@ static struct hist_trigger_attrs *parse_hist_trigger_attrs(char *trigger_str)
!strncmp(str, "vals", strlen("vals")) ||
!strncmp(str, "val", strlen("val")))
attrs->vals_str = kstrdup(str, GFP_KERNEL);
+ else if (!strncmp(str, "sort", strlen("sort")))
+ attrs->sort_key_str = kstrdup(str, GFP_KERNEL);
else if (!strncmp(str, "size", strlen("size"))) {
int map_bits = parse_map_size(str);

@@ -420,12 +424,88 @@ static int create_hist_fields(struct hist_trigger_data *hist_data,
return ret;
}

+static int is_descending(const char *str)
+{
+ if (!str)
+ return 0;
+
+ if (!strcmp(str, "descending"))
+ return 1;
+
+ if (!strcmp(str, "ascending"))
+ return 0;
+
+ return -EINVAL;
+}
+
static int create_sort_keys(struct hist_trigger_data *hist_data)
{
+ char *fields_str = hist_data->attrs->sort_key_str;
+ struct ftrace_event_field *field = NULL;
+ struct tracing_map_sort_key *sort_key;
+ unsigned int i, j;
int ret = 0;

hist_data->n_sort_keys = 1; /* sort_keys[0] is always hitcount */

+ if (!fields_str)
+ goto out;
+
+ strsep(&fields_str, "=");
+ if (!fields_str) {
+ ret = -EINVAL;
+ goto out;
+ }
+
+ for (i = 0; i < TRACING_MAP_SORT_KEYS_MAX; i++) {
+ char *field_str, *field_name;
+
+ sort_key = &hist_data->sort_keys[i];
+
+ field_str = strsep(&fields_str, ",");
+ if (!field_str) {
+ if (i == 0)
+ ret = -EINVAL;
+ break;
+ }
+
+ if ((i == TRACING_MAP_SORT_KEYS_MAX - 1) && fields_str) {
+ ret = -EINVAL;
+ break;
+ }
+
+ field_name = strsep(&field_str, ".");
+ if (!field_name) {
+ ret = -EINVAL;
+ break;
+ }
+
+ if (!strcmp(field_name, "hitcount")) {
+ ret = is_descending(field_str);
+ if (ret < 0)
+ break;
+ sort_key->descending = ret;
+ continue;
+ }
+
+ for (j = 1; j < hist_data->n_fields; j++) {
+ field = hist_data->fields[j]->field;
+ if (field && !strcmp(field_name, field->name)) {
+ sort_key->field_idx = j;
+ ret = is_descending(field_str);
+ if (ret < 0)
+ goto out;
+ sort_key->descending = ret;
+ break;
+ }
+ }
+ if (j == hist_data->n_fields) {
+ ret = -EINVAL;
+ break;
+ }
+ }
+ hist_data->n_sort_keys = i;
+ out:
return ret;
}

@@ -737,7 +817,26 @@ static int event_hist_trigger_print(struct seq_file *m,
}

seq_puts(m, ":sort=");
- seq_puts(m, "hitcount");
+
+ for (i = 0; i < hist_data->n_sort_keys; i++) {
+ struct tracing_map_sort_key *sort_key;
+
+ sort_key = &hist_data->sort_keys[i];
+
+ if (i > 0)
+ seq_puts(m, ",");
+
+ if (sort_key->field_idx == HITCOUNT_IDX)
+ seq_puts(m, "hitcount");
+ else {
+ unsigned int idx = sort_key->field_idx;
+
+ hist_field_print(m, hist_data->fields[idx]);
+ }
+
+ if (sort_key->descending)
+ seq_puts(m, ".descending");
+ }

seq_printf(m, ":size=%u", (1 << hist_data->map->map_bits));

--
1.9.3

2015-07-16 17:23:46

by Tom Zanussi

[permalink] [raw]
Subject: [PATCH v9 12/22] tracing: Add hist trigger support for pausing and continuing a trace

Allow users to append 'pause' or 'continue' to an existing trigger in
order to have it paused or to have a paused trace continue.

This expands the hist trigger syntax from this:
# echo hist:keys=xxx:vals=yyy:sort=zzz.descending \
[ if filter] > event/trigger

to this:

# echo hist:keys=xxx:vals=yyy:sort=zzz.descending:pause or cont \
[ if filter] > event/trigger
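
For example, a running aggregation set up as above could be paused,
and later resumed without losing the data accumulated so far, with:

# echo hist:keys=xxx:vals=yyy:sort=zzz.descending:pause \
[ if filter] > event/trigger

# echo hist:keys=xxx:vals=yyy:sort=zzz.descending:cont \
[ if filter] > event/trigger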

Signed-off-by: Tom Zanussi <[email protected]>
---
kernel/trace/trace.c | 5 +++++
kernel/trace/trace_events_hist.c | 26 +++++++++++++++++++++++---
2 files changed, 28 insertions(+), 3 deletions(-)

diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
index 5dd1fc4..547bbc8 100644
--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ -3791,6 +3791,7 @@ static const char readme_msg[] =
"\t [:values=<field1[,field2,...]]\n"
"\t [:sort=field1,field2,...]\n"
"\t [:size=#entries]\n"
+ "\t [:pause][:continue]\n"
"\t [if <filter>]\n\n"
"\t When a matching event is hit, an entry is added to a hash\n"
"\t table using the key(s) and value(s) named. Keys and values\n"
@@ -3821,6 +3822,10 @@ static const char readme_msg[] =
"\t on. The default if unspecified is 'hitcount' and the.\n"
"\t default sort order is 'ascending'. To sort in the opposite\n"
"\t direction, append .descending' to the sort key.\n\n"
+ "\t The 'pause' param can be used to pause an existing hist\n"
+ "\t trigger or to start a hist trigger but not log any events\n"
+ "\t until told to do so. 'continue' can be used to start or\n"
+ "\t restart a paused hist trigger.\n\n"
#endif
;

diff --git a/kernel/trace/trace_events_hist.c b/kernel/trace/trace_events_hist.c
index 6bf224f..3ae58e7 100644
--- a/kernel/trace/trace_events_hist.c
+++ b/kernel/trace/trace_events_hist.c
@@ -78,6 +78,8 @@ struct hist_trigger_attrs {
char *keys_str;
char *vals_str;
char *sort_key_str;
+ bool pause;
+ bool cont;
unsigned int map_bits;
};

@@ -184,6 +186,11 @@ static struct hist_trigger_attrs *parse_hist_trigger_attrs(char *trigger_str)
attrs->vals_str = kstrdup(str, GFP_KERNEL);
else if (!strncmp(str, "sort", strlen("sort")))
attrs->sort_key_str = kstrdup(str, GFP_KERNEL);
+ else if (!strncmp(str, "pause", strlen("pause")))
+ attrs->pause = true;
+ else if (!strncmp(str, "continue", strlen("continue")) ||
+ !strncmp(str, "cont", strlen("cont")))
+ attrs->cont = true;
else if (!strncmp(str, "size", strlen("size"))) {
int map_bits = parse_map_size(str);

@@ -843,7 +850,10 @@ static int event_hist_trigger_print(struct seq_file *m,
if (data->filter_str)
seq_printf(m, " if %s", data->filter_str);

- seq_puts(m, " [active]");
+ if (data->paused)
+ seq_puts(m, " [paused]");
+ else
+ seq_puts(m, " [active]");

seq_putc(m, '\n');

@@ -882,16 +892,25 @@ static int hist_register_trigger(char *glob, struct event_trigger_ops *ops,
struct event_trigger_data *data,
struct trace_event_file *file)
{
+ struct hist_trigger_data *hist_data = data->private_data;
struct event_trigger_data *test;
int ret = 0;

list_for_each_entry_rcu(test, &file->triggers, list) {
if (test->cmd_ops->trigger_type == ETT_EVENT_HIST) {
- ret = -EEXIST;
+ if (hist_data->attrs->pause)
+ test->paused = true;
+ else if (hist_data->attrs->cont)
+ test->paused = false;
+ else
+ ret = -EEXIST;
goto out;
}
}

+ if (hist_data->attrs->pause)
+ data->paused = true;
+
if (data->ops->init) {
ret = data->ops->init(data->ops, data);
if (ret < 0)
@@ -984,7 +1003,8 @@ static int event_hist_trigger_func(struct event_command *cmd_ops,
* triggers registered a failure too.
*/
if (!ret) {
- ret = -ENOENT;
+ if (!(attrs->pause || attrs->cont))
+ ret = -ENOENT;
goto out_free;
} else if (ret < 0)
goto out_free;
--
1.9.3

2015-07-16 17:23:50

by Tom Zanussi

[permalink] [raw]
Subject: [PATCH v9 13/22] tracing: Add hist trigger support for clearing a trace

Allow users to append 'clear' to an existing trigger in order to have
the hash table cleared.

This expands the hist trigger syntax from this:
# echo hist:keys=xxx:vals=yyy:sort=zzz.descending:pause/cont \
[ if filter] > event/trigger

to this:

# echo hist:keys=xxx:vals=yyy:sort=zzz.descending:pause/cont/clear \
[ if filter] > event/trigger
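
For example (illustrative), the accumulated contents of a running
kmalloc hist trigger could be reset in place, without removing the
trigger or changing its paused/active state:

# echo 'hist:keys=call_site:vals=bytes_req:clear' > \
/sys/kernel/debug/tracing/events/kmem/kmalloc/trigger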

Signed-off-by: Tom Zanussi <[email protected]>
---
kernel/trace/trace.c | 4 +++-
kernel/trace/trace_events_hist.c | 25 ++++++++++++++++++++++++-
2 files changed, 27 insertions(+), 2 deletions(-)

diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
index 547bbc8..27daa28 100644
--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ -3791,7 +3791,7 @@ static const char readme_msg[] =
"\t [:values=<field1[,field2,...]]\n"
"\t [:sort=field1,field2,...]\n"
"\t [:size=#entries]\n"
- "\t [:pause][:continue]\n"
+ "\t [:pause][:continue][:clear]\n"
"\t [if <filter>]\n\n"
"\t When a matching event is hit, an entry is added to a hash\n"
"\t table using the key(s) and value(s) named. Keys and values\n"
@@ -3826,6 +3826,8 @@ static const char readme_msg[] =
"\t trigger or to start a hist trigger but not log any events\n"
"\t until told to do so. 'continue' can be used to start or\n"
"\t restart a paused hist trigger.\n\n"
+ "\t The 'clear' param will clear the contents of a running hist\n"
+ "\t trigger and leave its current paused/active state.\n\n"
#endif
;

diff --git a/kernel/trace/trace_events_hist.c b/kernel/trace/trace_events_hist.c
index 3ae58e7..d8259fe 100644
--- a/kernel/trace/trace_events_hist.c
+++ b/kernel/trace/trace_events_hist.c
@@ -80,6 +80,7 @@ struct hist_trigger_attrs {
char *sort_key_str;
bool pause;
bool cont;
+ bool clear;
unsigned int map_bits;
};

@@ -188,6 +189,8 @@ static struct hist_trigger_attrs *parse_hist_trigger_attrs(char *trigger_str)
attrs->sort_key_str = kstrdup(str, GFP_KERNEL);
else if (!strncmp(str, "pause", strlen("pause")))
attrs->pause = true;
+ else if (!strncmp(str, "clear", strlen("clear")))
+ attrs->clear = true;
else if (!strncmp(str, "continue", strlen("continue")) ||
!strncmp(str, "cont", strlen("cont")))
attrs->cont = true;
@@ -888,6 +891,24 @@ static struct event_trigger_ops *event_hist_get_trigger_ops(char *cmd,
return &event_hist_trigger_ops;
}

+static void hist_clear(struct event_trigger_data *data)
+{
+ struct hist_trigger_data *hist_data = data->private_data;
+ bool paused;
+
+ paused = data->paused;
+ data->paused = true;
+
+ synchronize_sched();
+
+ tracing_map_clear(hist_data->map);
+
+ atomic64_set(&hist_data->total_hits, 0);
+ atomic64_set(&hist_data->drops, 0);
+
+ data->paused = paused;
+}
+
static int hist_register_trigger(char *glob, struct event_trigger_ops *ops,
struct event_trigger_data *data,
struct trace_event_file *file)
@@ -902,6 +923,8 @@ static int hist_register_trigger(char *glob, struct event_trigger_ops *ops,
test->paused = true;
else if (hist_data->attrs->cont)
test->paused = false;
+ else if (hist_data->attrs->clear)
+ hist_clear(test);
else
ret = -EEXIST;
goto out;
@@ -1003,7 +1026,7 @@ static int event_hist_trigger_func(struct event_command *cmd_ops,
* triggers registered a failure too.
*/
if (!ret) {
- if (!(attrs->pause || attrs->cont))
+ if (!(attrs->pause || attrs->cont || attrs->clear))
ret = -ENOENT;
goto out_free;
} else if (ret < 0)
--
1.9.3

2015-07-16 17:23:53

by Tom Zanussi

[permalink] [raw]
Subject: [PATCH v9 14/22] tracing: Add hist trigger 'hex' modifier for displaying numeric fields

Allow users to have numeric fields displayed as hex values in the
output by appending '.hex' to field names:

# echo hist:keys=aaa,bbb.hex:vals=ccc.hex ... \
[ if filter] > event/trigger
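
For instance (an illustrative example), kmalloc call sites are far
more readable as hex addresses than as the default base-10 integers:

# echo 'hist:keys=call_site.hex:vals=bytes_req' > \
/sys/kernel/debug/tracing/events/kmem/kmalloc/trigger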

Signed-off-by: Tom Zanussi <[email protected]>
---
kernel/trace/trace.c | 5 +++-
kernel/trace/trace_events_hist.c | 49 +++++++++++++++++++++++++++++++++++++---
2 files changed, 50 insertions(+), 4 deletions(-)

diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
index 27daa28..14f9472 100644
--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ -3810,7 +3810,10 @@ static const char readme_msg[] =
"\t entry is a simple list of the keys and values comprising the\n"
"\t entry; keys are printed first and are delineated by curly\n"
"\t braces, and are followed by the set of value fields for the\n"
- "\t entry. Numeric fields are displayed as base-10 integers.\n"
+ "\t entry. By default, numeric fields are displayed as base-10\n"
+ "\t integers. This can be modified by appending any of the\n"
+ "\t following modifiers to the field name:\n\n"
+ "\t .hex display a number as a hex value\n\n"
"\t By default, the size of the hash table is 2048 entries. The\n"
"\t 'size' param can be used to specify more or fewer than that.\n"
"\t The units are in terms of hashtable entries - if a run uses\n"
diff --git a/kernel/trace/trace_events_hist.c b/kernel/trace/trace_events_hist.c
index d8259fe..9cc38ee 100644
--- a/kernel/trace/trace_events_hist.c
+++ b/kernel/trace/trace_events_hist.c
@@ -72,6 +72,7 @@ enum hist_field_flags {
HIST_FIELD_HITCOUNT = 1,
HIST_FIELD_KEY = 2,
HIST_FIELD_STRING = 4,
+ HIST_FIELD_HEX = 8,
};

struct hist_trigger_attrs {
@@ -284,9 +285,20 @@ static int create_val_field(struct hist_trigger_data *hist_data,
{
struct ftrace_event_field *field = NULL;
unsigned long flags = 0;
+ char *field_name;
int ret = 0;

- field = trace_find_event_field(file->event_call, field_str);
+ field_name = strsep(&field_str, ".");
+ if (field_str) {
+ if (!strcmp(field_str, "hex"))
+ flags |= HIST_FIELD_HEX;
+ else {
+ ret = -EINVAL;
+ goto out;
+ }
+ }
+
+ field = trace_find_event_field(file->event_call, field_name);
if (!field) {
ret = -EINVAL;
goto out;
@@ -349,11 +361,22 @@ static int create_key_field(struct hist_trigger_data *hist_data,
struct ftrace_event_field *field = NULL;
unsigned long flags = 0;
unsigned int key_size;
+ char *field_name;
int ret = 0;

flags |= HIST_FIELD_KEY;

- field = trace_find_event_field(file->event_call, field_str);
+ field_name = strsep(&field_str, ".");
+ if (field_str) {
+ if (!strcmp(field_str, "hex"))
+ flags |= HIST_FIELD_HEX;
+ else {
+ ret = -EINVAL;
+ goto out;
+ }
+ }
+
+ field = trace_find_event_field(file->event_call, field_name);
if (!field) {
ret = -EINVAL;
goto out;
@@ -688,7 +711,11 @@ hist_trigger_entry_print(struct seq_file *m,
if (i > hist_data->n_vals)
seq_puts(m, ", ");

- if (key_field->flags & HIST_FIELD_STRING) {
+ if (key_field->flags & HIST_FIELD_HEX) {
+ uval = *(u64 *)(key + key_field->offset);
+ seq_printf(m, "%s: %llx",
+ key_field->field->name, uval);
+ } else if (key_field->flags & HIST_FIELD_STRING) {
seq_printf(m, "%s: %-35s", key_field->field->name,
(char *)(key + key_field->offset));
} else {
@@ -791,9 +818,25 @@ const struct file_operations event_hist_fops = {
.release = single_release,
};

+static const char *get_hist_field_flags(struct hist_field *hist_field)
+{
+ const char *flags_str = NULL;
+
+ if (hist_field->flags & HIST_FIELD_HEX)
+ flags_str = "hex";
+
+ return flags_str;
+}
+
static void hist_field_print(struct seq_file *m, struct hist_field *hist_field)
{
seq_printf(m, "%s", hist_field->field->name);
+ if (hist_field->flags) {
+ const char *flags_str = get_hist_field_flags(hist_field);
+
+ if (flags_str)
+ seq_printf(m, ".%s", flags_str);
+ }
}

static int event_hist_trigger_print(struct seq_file *m,
--
1.9.3

2015-07-16 17:23:55

by Tom Zanussi

[permalink] [raw]
Subject: [PATCH v9 15/22] tracing: Add hist trigger 'sym' and 'sym-offset' modifiers

Allow users to have address fields displayed as symbols in the output
by appending '.sym' or '.sym-offset' to field names:

# echo hist:keys=aaa.sym,bbb.sym-offset ... \
[ if filter] > event/trigger
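
For instance (illustrative), to resolve kmalloc call sites to kernel
symbol names; using '.sym-offset' instead would additionally show the
offset within the symbol:

# echo 'hist:keys=call_site.sym:vals=bytes_req' > \
/sys/kernel/debug/tracing/events/kmem/kmalloc/trigger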

Signed-off-by: Tom Zanussi <[email protected]>
---
kernel/trace/trace.c | 2 ++
kernel/trace/trace_events_hist.c | 21 +++++++++++++++++++++
2 files changed, 23 insertions(+)

diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
index 14f9472..8cdc7b3 100644
--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ -3814,6 +3814,8 @@ static const char readme_msg[] =
"\t integers. This can be modified by appending any of the\n"
"\t following modifiers to the field name:\n\n"
"\t .hex display a number as a hex value\n\n"
+ "\t .sym display an address as a symbol\n"
+ "\t .sym-offset display an address as a symbol and offset\n"
"\t By default, the size of the hash table is 2048 entries. The\n"
"\t 'size' param can be used to specify more or fewer than that.\n"
"\t The units are in terms of hashtable entries - if a run uses\n"
diff --git a/kernel/trace/trace_events_hist.c b/kernel/trace/trace_events_hist.c
index 9cc38ee..106d557 100644
--- a/kernel/trace/trace_events_hist.c
+++ b/kernel/trace/trace_events_hist.c
@@ -73,6 +73,8 @@ enum hist_field_flags {
HIST_FIELD_KEY = 2,
HIST_FIELD_STRING = 4,
HIST_FIELD_HEX = 8,
+ HIST_FIELD_SYM = 16,
+ HIST_FIELD_SYM_OFFSET = 32,
};

struct hist_trigger_attrs {
@@ -370,6 +372,10 @@ static int create_key_field(struct hist_trigger_data *hist_data,
if (field_str) {
if (!strcmp(field_str, "hex"))
flags |= HIST_FIELD_HEX;
+ else if (!strcmp(field_str, "sym"))
+ flags |= HIST_FIELD_SYM;
+ else if (!strcmp(field_str, "sym-offset"))
+ flags |= HIST_FIELD_SYM_OFFSET;
else {
ret = -EINVAL;
goto out;
@@ -700,6 +706,7 @@ hist_trigger_entry_print(struct seq_file *m,
struct tracing_map_elt *elt)
{
struct hist_field *key_field;
+ char str[KSYM_SYMBOL_LEN];
unsigned int i;
u64 uval;

@@ -715,6 +722,16 @@ hist_trigger_entry_print(struct seq_file *m,
uval = *(u64 *)(key + key_field->offset);
seq_printf(m, "%s: %llx",
key_field->field->name, uval);
+ } else if (key_field->flags & HIST_FIELD_SYM) {
+ uval = *(u64 *)(key + key_field->offset);
+ sprint_symbol_no_offset(str, uval);
+ seq_printf(m, "%s: [%llx] %-45s",
+ key_field->field->name, uval, str);
+ } else if (key_field->flags & HIST_FIELD_SYM_OFFSET) {
+ uval = *(u64 *)(key + key_field->offset);
+ sprint_symbol(str, uval);
+ seq_printf(m, "%s: [%llx] %-55s",
+ key_field->field->name, uval, str);
} else if (key_field->flags & HIST_FIELD_STRING) {
seq_printf(m, "%s: %-35s", key_field->field->name,
(char *)(key + key_field->offset));
@@ -824,6 +841,10 @@ static const char *get_hist_field_flags(struct hist_field *hist_field)

if (hist_field->flags & HIST_FIELD_HEX)
flags_str = "hex";
+ else if (hist_field->flags & HIST_FIELD_SYM)
+ flags_str = "sym";
+ else if (hist_field->flags & HIST_FIELD_SYM_OFFSET)
+ flags_str = "sym-offset";

return flags_str;
}
--
1.9.3

2015-07-16 17:24:12

by Tom Zanussi

[permalink] [raw]
Subject: [PATCH v9 16/22] tracing: Add hist trigger 'execname' modifier

Allow users to have pid fields displayed as program names in the output
by appending '.execname' to field names:

# echo hist:keys=aaa.execname ... \
[ if filter] > event/trigger
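
For instance (illustrative), to see which programs are responsible
for kmalloc requests, keyed on the common_pid field - the only field
the modifier is accepted on, per create_key_field() below:

# echo 'hist:keys=common_pid.execname:vals=bytes_req' > \
/sys/kernel/debug/tracing/events/kmem/kmalloc/trigger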

Signed-off-by: Tom Zanussi <[email protected]>
---
kernel/trace/trace.c | 1 +
kernel/trace/trace_events_hist.c | 86 +++++++++++++++++++++++++++++++++++++++-
2 files changed, 86 insertions(+), 1 deletion(-)

diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
index 8cdc7b3..a16ab69 100644
--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ -3816,6 +3816,7 @@ static const char readme_msg[] =
"\t .hex display a number as a hex value\n\n"
"\t .sym display an address as a symbol\n"
"\t .sym-offset display an address as a symbol and offset\n"
+ "\t .execname display a common_pid as a program name\n\n"
"\t By default, the size of the hash table is 2048 entries. The\n"
"\t 'size' param can be used to specify more or fewer than that.\n"
"\t The units are in terms of hashtable entries - if a run uses\n"
diff --git a/kernel/trace/trace_events_hist.c b/kernel/trace/trace_events_hist.c
index 106d557..af1b846 100644
--- a/kernel/trace/trace_events_hist.c
+++ b/kernel/trace/trace_events_hist.c
@@ -75,6 +75,7 @@ enum hist_field_flags {
HIST_FIELD_HEX = 8,
HIST_FIELD_SYM = 16,
HIST_FIELD_SYM_OFFSET = 32,
+ HIST_FIELD_EXECNAME = 64,
};

struct hist_trigger_attrs {
@@ -218,6 +219,78 @@ static struct hist_trigger_attrs *parse_hist_trigger_attrs(char *trigger_str)
return ERR_PTR(ret);
}

+static inline void save_comm(char *comm, struct task_struct *task)
+{
+ if (!task->pid) {
+ strcpy(comm, "<idle>");
+ return;
+ }
+
+ if (WARN_ON_ONCE(task->pid < 0)) {
+ strcpy(comm, "<XXX>");
+ return;
+ }
+
+ if (task->pid > PID_MAX_DEFAULT) {
+ strcpy(comm, "<...>");
+ return;
+ }
+
+ memcpy(comm, task->comm, TASK_COMM_LEN);
+}
+
+static void hist_trigger_elt_free(struct tracing_map_elt *elt)
+{
+ kfree((char *)elt->private_data);
+}
+
+static int hist_trigger_elt_alloc(struct tracing_map_elt *elt)
+{
+ struct hist_trigger_data *hist_data = elt->map->private_data;
+ struct hist_field *key_field;
+ unsigned int i;
+
+ for (i = hist_data->n_vals; i < hist_data->n_fields; i++) {
+ key_field = hist_data->fields[i];
+
+ if (key_field->flags & HIST_FIELD_EXECNAME) {
+ unsigned int size = TASK_COMM_LEN + 1;
+
+ elt->private_data = kzalloc(size, GFP_KERNEL);
+ if (!elt->private_data)
+ return -ENOMEM;
+ break;
+ }
+ }
+
+ return 0;
+}
+
+static void hist_trigger_elt_copy(struct tracing_map_elt *to,
+ struct tracing_map_elt *from)
+{
+ char *comm_from = from->private_data;
+ char *comm_to = to->private_data;
+
+ if (comm_from)
+ memcpy(comm_to, comm_from, TASK_COMM_LEN + 1);
+}
+
+static void hist_trigger_elt_init(struct tracing_map_elt *elt)
+{
+ char *comm = elt->private_data;
+
+ if (comm)
+ save_comm(comm, current);
+}
+
+static struct tracing_map_ops hist_trigger_ops = {
+ .elt_alloc = hist_trigger_elt_alloc,
+ .elt_copy = hist_trigger_elt_copy,
+ .elt_free = hist_trigger_elt_free,
+ .elt_init = hist_trigger_elt_init,
+};
+
static void destroy_hist_field(struct hist_field *hist_field)
{
kfree(hist_field);
@@ -376,6 +449,9 @@ static int create_key_field(struct hist_trigger_data *hist_data,
flags |= HIST_FIELD_SYM;
else if (!strcmp(field_str, "sym-offset"))
flags |= HIST_FIELD_SYM_OFFSET;
+ else if (!strcmp(field_str, "execname") &&
+ !strcmp(field_name, "common_pid"))
+ flags |= HIST_FIELD_EXECNAME;
else {
ret = -EINVAL;
goto out;
@@ -612,7 +688,7 @@ create_hist_data(unsigned int map_bits,
goto free;

hist_data->map = tracing_map_create(map_bits, hist_data->key_size,
- NULL, hist_data);
+ &hist_trigger_ops, hist_data);
if (IS_ERR(hist_data->map)) {
ret = PTR_ERR(hist_data->map);
hist_data->map = NULL;
@@ -732,6 +808,12 @@ hist_trigger_entry_print(struct seq_file *m,
sprint_symbol(str, uval);
seq_printf(m, "%s: [%llx] %-55s",
key_field->field->name, uval, str);
+ } else if (key_field->flags & HIST_FIELD_EXECNAME) {
+ char *comm = elt->private_data;
+
+ uval = *(u64 *)(key + key_field->offset);
+ seq_printf(m, "%s: %-16s[%10llu]",
+ key_field->field->name, comm, uval);
} else if (key_field->flags & HIST_FIELD_STRING) {
seq_printf(m, "%s: %-35s", key_field->field->name,
(char *)(key + key_field->offset));
@@ -845,6 +927,8 @@ static const char *get_hist_field_flags(struct hist_field *hist_field)
flags_str = "sym";
else if (hist_field->flags & HIST_FIELD_SYM_OFFSET)
flags_str = "sym-offset";
+ else if (hist_field->flags & HIST_FIELD_EXECNAME)
+ flags_str = "execname";

return flags_str;
}
--
1.9.3

2015-07-16 17:24:00

by Tom Zanussi

[permalink] [raw]
Subject: [PATCH v9 17/22] tracing: Add hist trigger 'syscall' modifier

Allow users to have syscall id fields displayed as syscall names in
the output by appending '.syscall' to field names:

# echo hist:keys=aaa.syscall ... \
[ if filter] > event/trigger
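
For instance (illustrative), the raw_syscalls:sys_enter event records
the syscall id in its 'id' field, so the following would produce a
per-syscall-name count of system call entries:

# echo 'hist:keys=id.syscall' > \
/sys/kernel/debug/tracing/events/raw_syscalls/sys_enter/trigger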

Signed-off-by: Tom Zanussi <[email protected]>
---
kernel/trace/trace.c | 1 +
kernel/trace/trace_events_hist.c | 15 +++++++++++++++
2 files changed, 16 insertions(+)

diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
index a16ab69..75795e3 100644
--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ -3817,6 +3817,7 @@ static const char readme_msg[] =
"\t .sym display an address as a symbol\n"
"\t .sym-offset display an address as a symbol and offset\n"
"\t .execname display a common_pid as a program name\n\n"
+ "\t .syscall display a syscall id as a syscall name\n"
"\t By default, the size of the hash table is 2048 entries. The\n"
"\t 'size' param can be used to specify more or fewer than that.\n"
"\t The units are in terms of hashtable entries - if a run uses\n"
diff --git a/kernel/trace/trace_events_hist.c b/kernel/trace/trace_events_hist.c
index af1b846..28ccaa1 100644
--- a/kernel/trace/trace_events_hist.c
+++ b/kernel/trace/trace_events_hist.c
@@ -76,6 +76,7 @@ enum hist_field_flags {
HIST_FIELD_SYM = 16,
HIST_FIELD_SYM_OFFSET = 32,
HIST_FIELD_EXECNAME = 64,
+ HIST_FIELD_SYSCALL = 128,
};

struct hist_trigger_attrs {
@@ -452,6 +453,8 @@ static int create_key_field(struct hist_trigger_data *hist_data,
else if (!strcmp(field_str, "execname") &&
!strcmp(field_name, "common_pid"))
flags |= HIST_FIELD_EXECNAME;
+ else if (!strcmp(field_str, "syscall"))
+ flags |= HIST_FIELD_SYSCALL;
else {
ret = -EINVAL;
goto out;
@@ -814,6 +817,16 @@ hist_trigger_entry_print(struct seq_file *m,
uval = *(u64 *)(key + key_field->offset);
seq_printf(m, "%s: %-16s[%10llu]",
key_field->field->name, comm, uval);
+ } else if (key_field->flags & HIST_FIELD_SYSCALL) {
+ const char *syscall_name;
+
+ uval = *(u64 *)(key + key_field->offset);
+ syscall_name = get_syscall_name(uval);
+ if (!syscall_name)
+ syscall_name = "unknown_syscall";
+
+ seq_printf(m, "%s: %-30s[%3llu]",
+ key_field->field->name, syscall_name, uval);
} else if (key_field->flags & HIST_FIELD_STRING) {
seq_printf(m, "%s: %-35s", key_field->field->name,
(char *)(key + key_field->offset));
@@ -929,6 +942,8 @@ static const char *get_hist_field_flags(struct hist_field *hist_field)
flags_str = "sym-offset";
else if (hist_field->flags & HIST_FIELD_EXECNAME)
flags_str = "execname";
+ else if (hist_field->flags & HIST_FIELD_SYSCALL)
+ flags_str = "syscall";

return flags_str;
}
--
1.9.3

2015-07-16 17:24:04

by Tom Zanussi

[permalink] [raw]
Subject: [PATCH v9 18/22] tracing: Add hist trigger support for stacktraces as keys

It's often useful to be able to use a stacktrace as a hash key, for
keeping a count of the number of times a particular call path resulted
in a trace event, for instance. Add a special key named 'stacktrace'
which can be used as a key in a 'keys=' param for this purpose:

# echo hist:keys=stacktrace ... \
[ if filter] > event/trigger
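
For instance (illustrative), the following would total the bytes
requested by kmalloc for each unique call path, with each table entry
keyed on a kernel stacktrace of up to HIST_STACKTRACE_DEPTH (16)
entries:

# echo 'hist:keys=stacktrace:vals=bytes_req' > \
/sys/kernel/debug/tracing/events/kmem/kmalloc/trigger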

Signed-off-by: Tom Zanussi <[email protected]>
---
kernel/trace/trace.c | 9 +--
kernel/trace/trace_events_hist.c | 135 +++++++++++++++++++++++++++++----------
2 files changed, 106 insertions(+), 38 deletions(-)

diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
index 75795e3..16c64a2 100644
--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ -3801,10 +3801,11 @@ static const char readme_msg[] =
"\t The special string 'hitcount' can be used in place of an\n"
"\t explicit value field - this is simply a count of event hits.\n"
"\t If 'values' is not specified, 'hitcount' will be assumed.\n"
- "\t of event hits. Keys can be any field. Compound keys\n"
- "\t consisting of up to two fields can be specified by the 'keys'\n"
- "\t keyword. Additionally, sort keys consisting of up to two\n"
- "\t fields can be specified by the 'sort' keyword.\n\n"
+ "\t Keys can be any field, or the special string 'stacktrace',\n"
+ "\t which will use the event's kernel stacktrace as the key.\n"
+ "\t Compound keys consisting of up to two fields can be specified\n"
+ "\t by the 'keys' keyword. Additionally, sort keys consisting\n"
+ "\t of up to two fields can be specified by the 'sort' keyword.\n\n"
"\t Reading the 'hist' file for the event will dump the hash\n"
"\t table in its entirety to stdout. Each printed hash table\n"
"\t entry is a simple list of the keys and values comprising the\n"
diff --git a/kernel/trace/trace_events_hist.c b/kernel/trace/trace_events_hist.c
index 28ccaa1..21ab8be 100644
--- a/kernel/trace/trace_events_hist.c
+++ b/kernel/trace/trace_events_hist.c
@@ -35,6 +35,11 @@ struct hist_field {
unsigned int offset;
};

+static u64 hist_field_none(struct hist_field *field, void *event)
+{
+ return 0;
+}
+
static u64 hist_field_counter(struct hist_field *field, void *event)
{
return 1;
@@ -64,9 +69,13 @@ DEFINE_HIST_FIELD_FN(u16);
DEFINE_HIST_FIELD_FN(s8);
DEFINE_HIST_FIELD_FN(u8);

+#define HIST_STACKTRACE_DEPTH 16
+#define HIST_STACKTRACE_SIZE (HIST_STACKTRACE_DEPTH * sizeof(unsigned long))
+#define HIST_STACKTRACE_SKIP 5
+
#define HITCOUNT_IDX 0
#define HIST_KEY_MAX 2
-#define HIST_KEY_SIZE_MAX (MAX_FILTER_STR_VAL + sizeof(u64))
+#define HIST_KEY_SIZE_MAX (MAX_FILTER_STR_VAL + HIST_STACKTRACE_SIZE)

enum hist_field_flags {
HIST_FIELD_HITCOUNT = 1,
@@ -77,6 +86,7 @@ enum hist_field_flags {
HIST_FIELD_SYM_OFFSET = 32,
HIST_FIELD_EXECNAME = 64,
HIST_FIELD_SYSCALL = 128,
+ HIST_FIELD_STACKTRACE = 256,
};

struct hist_trigger_attrs {
@@ -314,6 +324,11 @@ static struct hist_field *create_hist_field(struct ftrace_event_field *field,
goto out;
}

+ if (flags & HIST_FIELD_STACKTRACE) {
+ hist_field->fn = hist_field_none;
+ goto out;
+ }
+
if (is_string_field(field)) {
flags |= HIST_FIELD_STRING;
hist_field->fn = hist_field_string;
@@ -437,38 +452,43 @@ static int create_key_field(struct hist_trigger_data *hist_data,
struct ftrace_event_field *field = NULL;
unsigned long flags = 0;
unsigned int key_size;
- char *field_name;
int ret = 0;

flags |= HIST_FIELD_KEY;

- field_name = strsep(&field_str, ".");
- if (field_str) {
- if (!strcmp(field_str, "hex"))
- flags |= HIST_FIELD_HEX;
- else if (!strcmp(field_str, "sym"))
- flags |= HIST_FIELD_SYM;
- else if (!strcmp(field_str, "sym-offset"))
- flags |= HIST_FIELD_SYM_OFFSET;
- else if (!strcmp(field_str, "execname") &&
- !strcmp(field_name, "common_pid"))
- flags |= HIST_FIELD_EXECNAME;
- else if (!strcmp(field_str, "syscall"))
- flags |= HIST_FIELD_SYSCALL;
- else {
+ if (!strcmp(field_str, "stacktrace")) {
+ flags |= HIST_FIELD_STACKTRACE;
+ key_size = sizeof(unsigned long) * HIST_STACKTRACE_DEPTH;
+ } else {
+ char *field_name = strsep(&field_str, ".");
+
+ if (field_str) {
+ if (!strcmp(field_str, "hex"))
+ flags |= HIST_FIELD_HEX;
+ else if (!strcmp(field_str, "sym"))
+ flags |= HIST_FIELD_SYM;
+ else if (!strcmp(field_str, "sym-offset"))
+ flags |= HIST_FIELD_SYM_OFFSET;
+ else if (!strcmp(field_str, "execname") &&
+ !strcmp(field_name, "common_pid"))
+ flags |= HIST_FIELD_EXECNAME;
+ else if (!strcmp(field_str, "syscall"))
+ flags |= HIST_FIELD_SYSCALL;
+ else {
+ ret = -EINVAL;
+ goto out;
+ }
+ }
+
+ field = trace_find_event_field(file->event_call, field_name);
+ if (!field) {
ret = -EINVAL;
goto out;
}
- }

- field = trace_find_event_field(file->event_call, field_name);
- if (!field) {
- ret = -EINVAL;
- goto out;
+ key_size = field->size;
}

- key_size = field->size;
-
hist_data->fields[key_idx] = create_hist_field(field, flags);
if (!hist_data->fields[key_idx]) {
ret = -ENOMEM;
@@ -649,7 +669,9 @@ static int create_tracing_map_fields(struct hist_trigger_data *hist_data)

field = hist_field->field;

- if (is_string_field(field))
+ if (hist_field->flags & HIST_FIELD_STACKTRACE)
+ cmp_fn = tracing_map_cmp_none;
+ else if (is_string_field(field))
cmp_fn = tracing_map_cmp_string;
else
cmp_fn = tracing_map_cmp_num(field->size,
@@ -737,7 +759,9 @@ static void hist_trigger_elt_update(struct hist_trigger_data *hist_data,
static void event_hist_trigger(struct event_trigger_data *data, void *rec)
{
struct hist_trigger_data *hist_data = data->private_data;
+ unsigned long entries[HIST_STACKTRACE_DEPTH];
char compound_key[HIST_KEY_SIZE_MAX];
+ struct stack_trace stacktrace;
struct hist_field *key_field;
struct tracing_map_elt *elt;
u64 field_contents;
@@ -755,15 +779,27 @@ static void event_hist_trigger(struct event_trigger_data *data, void *rec)
for (i = hist_data->n_vals; i < hist_data->n_fields; i++) {
key_field = hist_data->fields[i];

- field_contents = key_field->fn(key_field, rec);
- if (key_field->flags & HIST_FIELD_STRING)
- key = (void *)field_contents;
- else
- key = (void *)&field_contents;
+ if (key_field->flags & HIST_FIELD_STACKTRACE) {
+ stacktrace.max_entries = HIST_STACKTRACE_DEPTH;
+ stacktrace.entries = entries;
+ stacktrace.nr_entries = 0;
+ stacktrace.skip = HIST_STACKTRACE_SKIP;

- if (hist_data->n_keys > 1) {
- memcpy(compound_key + key_field->offset, key,
- key_field->size);
+ memset(stacktrace.entries, 0, HIST_STACKTRACE_SIZE);
+ save_stack_trace(&stacktrace);
+
+ key = entries;
+ } else {
+ field_contents = key_field->fn(key_field, rec);
+ if (key_field->flags & HIST_FIELD_STRING)
+ key = (void *)field_contents;
+ else
+ key = (void *)&field_contents;
+
+ if (hist_data->n_keys > 1) {
+ memcpy(compound_key + key_field->offset, key,
+ key_field->size);
+ }
}
}

@@ -779,6 +815,24 @@ static void event_hist_trigger(struct event_trigger_data *data, void *rec)
atomic64_inc(&hist_data->total_hits);
}

+static void hist_trigger_stacktrace_print(struct seq_file *m,
+ unsigned long *stacktrace_entries,
+ unsigned int max_entries)
+{
+ char str[KSYM_SYMBOL_LEN];
+ unsigned int spaces = 8;
+ unsigned int i;
+
+ for (i = 0; i < max_entries; i++) {
+ if (stacktrace_entries[i] == ULONG_MAX)
+ return;
+
+ seq_printf(m, "%*c", 1 + spaces, ' ');
+ sprint_symbol(str, stacktrace_entries[i]);
+ seq_printf(m, "%s\n", str);
+ }
+}
+
static void
hist_trigger_entry_print(struct seq_file *m,
struct hist_trigger_data *hist_data, void *key,
@@ -786,6 +840,7 @@ hist_trigger_entry_print(struct seq_file *m,
{
struct hist_field *key_field;
char str[KSYM_SYMBOL_LEN];
+ bool multiline = false;
unsigned int i;
u64 uval;

@@ -827,6 +882,12 @@ hist_trigger_entry_print(struct seq_file *m,

seq_printf(m, "%s: %-30s[%3llu]",
key_field->field->name, syscall_name, uval);
+ } else if (key_field->flags & HIST_FIELD_STACKTRACE) {
+ seq_puts(m, "stacktrace:\n");
+ hist_trigger_stacktrace_print(m,
+ key + key_field->offset,
+ HIST_STACKTRACE_DEPTH);
+ multiline = true;
} else if (key_field->flags & HIST_FIELD_STRING) {
seq_printf(m, "%s: %-35s", key_field->field->name,
(char *)(key + key_field->offset));
@@ -837,7 +898,10 @@ hist_trigger_entry_print(struct seq_file *m,
}
}

- seq_puts(m, " }");
+ if (!multiline)
+ seq_puts(m, " ");
+
+ seq_puts(m, "}");

seq_printf(m, " hitcount: %10llu",
tracing_map_read_sum(elt, HITCOUNT_IDX));
@@ -975,7 +1039,10 @@ static int event_hist_trigger_print(struct seq_file *m,
if (i > hist_data->n_vals)
seq_puts(m, ",");

- hist_field_print(m, key_field);
+ if (key_field->flags & HIST_FIELD_STACKTRACE)
+ seq_puts(m, "stacktrace");
+ else
+ hist_field_print(m, key_field);
}

seq_puts(m, ":vals=");
--
1.9.3

2015-07-16 17:24:08

by Tom Zanussi

[permalink] [raw]
Subject: [PATCH v9 19/22] tracing: Support string type key properly

From: Namhyung Kim <[email protected]>

A string in a trace event is usually recorded as a dynamic array of
variable length, but the current hist code only supports fixed-length
arrays, so it cannot handle most strings.

This patch fixes that by checking the filter_type of the field and
getting the proper pointer for it. With this, it can produce a
histogram of exec() based on filenames, like below:

# cd /sys/kernel/tracing/events/sched/sched_process_exec
# echo 'hist:key=filename' > trigger
# ps
PID TTY TIME CMD
1 ? 00:00:00 init
29 ? 00:00:00 sh
38 ? 00:00:00 ps
# ls
enable filter format hist id trigger
# cat hist
# trigger info: hist:keys=filename:vals=hitcount:sort=hitcount:size=2048 [active]

{ filename: /usr/bin/ps } hitcount: 1
{ filename: /usr/bin/ls } hitcount: 1
{ filename: /usr/bin/cat } hitcount: 1

Totals:
Hits: 3
Entries: 3
Dropped: 0
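
The key to this is the encoding used for dynamic (__data_loc) string
fields: the u32 stored at the field's offset packs the string's
record-relative offset in its low 16 bits and its length in its high
16 bits. A minimal sketch of the decoding (decode_data_loc is a
hypothetical helper in kernel style, with u32 from <linux/types.h>;
it mirrors hist_field_dynstring() in the patch below):

static const char *decode_data_loc(const void *rec,
				   unsigned int field_offset,
				   unsigned int *len)
{
	/* The u32 'data location' word lives at the field's offset;
	 * the actual string bytes live elsewhere in the same record.
	 */
	u32 item = *(const u32 *)((const char *)rec + field_offset);

	*len = item >> 16;			    /* string length */
	return (const char *)rec + (item & 0xffff); /* string start */
}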

Cc: Tom Zanussi <[email protected]>
Cc: Masami Hiramatsu <[email protected]>
Signed-off-by: Namhyung Kim <[email protected]>
---
kernel/trace/trace_events_hist.c | 47 +++++++++++++++++++++++++++++++++++++---
1 file changed, 44 insertions(+), 3 deletions(-)

diff --git a/kernel/trace/trace_events_hist.c b/kernel/trace/trace_events_hist.c
index 21ab8be..67fffee 100644
--- a/kernel/trace/trace_events_hist.c
+++ b/kernel/trace/trace_events_hist.c
@@ -52,6 +52,22 @@ static u64 hist_field_string(struct hist_field *hist_field, void *event)
return (u64)addr;
}

+static u64 hist_field_dynstring(struct hist_field *hist_field, void *event)
+{
+ u32 str_item = *(u32 *)(event + hist_field->field->offset);
+ int str_loc = str_item & 0xffff;
+ char *addr = (char *)(event + str_loc);
+
+ return (u64)addr;
+}
+
+static u64 hist_field_pstring(struct hist_field *hist_field, void *event)
+{
+ char **addr = (char **)(event + hist_field->field->offset);
+
+ return (u64)*addr;
+}
+
#define DEFINE_HIST_FIELD_FN(type) \
static u64 hist_field_##type(struct hist_field *hist_field, void *event)\
{ \
@@ -331,7 +347,13 @@ static struct hist_field *create_hist_field(struct ftrace_event_field *field,

if (is_string_field(field)) {
flags |= HIST_FIELD_STRING;
- hist_field->fn = hist_field_string;
+
+ if (field->filter_type == FILTER_STATIC_STRING)
+ hist_field->fn = hist_field_string;
+ else if (field->filter_type == FILTER_DYN_STRING)
+ hist_field->fn = hist_field_dynstring;
+ else
+ hist_field->fn = hist_field_pstring;
} else {
hist_field->fn = select_value_fn(field->size,
field->is_signed);
@@ -486,7 +508,10 @@ static int create_key_field(struct hist_trigger_data *hist_data,
goto out;
}

- key_size = field->size;
+ if (is_string_field(field)) /* should be last key field */
+ key_size = HIST_KEY_SIZE_MAX - key_offset;
+ else
+ key_size = field->size;
}

hist_data->fields[key_idx] = create_hist_field(field, flags);
@@ -797,8 +822,24 @@ static void event_hist_trigger(struct event_trigger_data *data, void *rec)
key = (void *)&field_contents;

if (hist_data->n_keys > 1) {
+ /* ensure NULL-termination */
+ size_t size = key_field->size - 1;
+
+ if (key_field->flags & HIST_FIELD_STRING) {
+ struct ftrace_event_field *field;
+
+ field = key_field->field;
+ if (field->filter_type == FILTER_DYN_STRING)
+ size = *(u32 *)(rec + field->offset) >> 16;
+ else if (field->filter_type == FILTER_PTR_STRING)
+ size = strlen(key);
+
+ if (size > key_field->size - 1)
+ size = key_field->size - 1;
+ }
+
memcpy(compound_key + key_field->offset, key,
- key_field->size);
+ size);
}
}
}
--
1.9.3

2015-07-16 17:25:09

by Tom Zanussi

[permalink] [raw]
Subject: [PATCH v9 20/22] tracing: Remove restriction on string position in hist trigger keys

If we assume the maximum size for a string field, we don't have to
worry about its position in the compound key. Since we only allow two
keys in a compound key, and having more than one string key in a given
compound key doesn't make much sense anyway, trading a bit of extra
space for the removal of an arbitrary restriction is worthwhile.

We also need to use the event field size for static strings when
copying the contents, otherwise we get random garbage in the key.

Finally, rearrange the code without changing any functionality by
moving the compound key updating code into a separate function.

Signed-off-by: Tom Zanussi <[email protected]>
---
kernel/trace/trace_events_hist.c | 65 +++++++++++++++++++++++-----------------
1 file changed, 37 insertions(+), 28 deletions(-)

diff --git a/kernel/trace/trace_events_hist.c b/kernel/trace/trace_events_hist.c
index 67fffee..4ba7645 100644
--- a/kernel/trace/trace_events_hist.c
+++ b/kernel/trace/trace_events_hist.c
@@ -508,8 +508,8 @@ static int create_key_field(struct hist_trigger_data *hist_data,
goto out;
}

- if (is_string_field(field)) /* should be last key field */
- key_size = HIST_KEY_SIZE_MAX - key_offset;
+ if (is_string_field(field))
+ key_size = MAX_FILTER_STR_VAL;
else
key_size = field->size;
}
@@ -781,9 +781,36 @@ static void hist_trigger_elt_update(struct hist_trigger_data *hist_data,
}
}

+static inline void add_to_key(char *compound_key, void *key,
+ struct hist_field *key_field, void *rec)
+{
+ size_t size = key_field->size;
+
+ if (key_field->flags & HIST_FIELD_STRING) {
+ struct ftrace_event_field *field;
+
+ /* ensure NULL-termination */
+ size--;
+
+ field = key_field->field;
+ if (field->filter_type == FILTER_DYN_STRING)
+ size = *(u32 *)(rec + field->offset) >> 16;
+ else if (field->filter_type == FILTER_PTR_STRING)
+ size = strlen(key);
+ else if (field->filter_type == FILTER_STATIC_STRING)
+ size = field->size;
+
+ if (size > key_field->size - 1)
+ size = key_field->size - 1;
+ }
+
+ memcpy(compound_key + key_field->offset, key, size);
+}
+
static void event_hist_trigger(struct event_trigger_data *data, void *rec)
{
struct hist_trigger_data *hist_data = data->private_data;
+ bool use_compound_key = (hist_data->n_keys > 1);
unsigned long entries[HIST_STACKTRACE_DEPTH];
char compound_key[HIST_KEY_SIZE_MAX];
struct stack_trace stacktrace;
@@ -798,8 +825,7 @@ static void event_hist_trigger(struct event_trigger_data *data, void *rec)
return;
}

- if (hist_data->n_keys > 1)
- memset(compound_key, 0, hist_data->key_size);
+ memset(compound_key, 0, hist_data->key_size);

for (i = hist_data->n_vals; i < hist_data->n_fields; i++) {
key_field = hist_data->fields[i];
@@ -816,35 +842,18 @@ static void event_hist_trigger(struct event_trigger_data *data, void *rec)
key = entries;
} else {
field_contents = key_field->fn(key_field, rec);
- if (key_field->flags & HIST_FIELD_STRING)
+ if (key_field->flags & HIST_FIELD_STRING) {
key = (void *)field_contents;
- else
+ use_compound_key = true;
+ } else
key = (void *)&field_contents;
-
- if (hist_data->n_keys > 1) {
- /* ensure NULL-termination */
- size_t size = key_field->size - 1;
-
- if (key_field->flags & HIST_FIELD_STRING) {
- struct ftrace_event_field *field;
-
- field = key_field->field;
- if (field->filter_type == FILTER_DYN_STRING)
- size = *(u32 *)(rec + field->offset) >> 16;
- else if (field->filter_type == FILTER_PTR_STRING)
- size = strlen(key);
-
- if (size > key_field->size - 1)
- size = key_field->size - 1;
- }
-
- memcpy(compound_key + key_field->offset, key,
- size);
- }
}
+
+ if (use_compound_key)
+ add_to_key(compound_key, key, key_field, rec);
}

- if (hist_data->n_keys > 1)
+ if (use_compound_key)
key = compound_key;

elt = tracing_map_insert(hist_data->map, key);
--
1.9.3

2015-07-16 17:24:46

by Tom Zanussi

[permalink] [raw]
Subject: [PATCH v9 21/22] tracing: Add enable_hist/disable_hist triggers

Similar to enable_event/disable_event triggers, these triggers enable
and disable the aggregation of events into maps rather than enabling
and disabling their writing into the trace buffer.

They can be used to automatically start and stop hist triggers based
on a matching filter condition.

If there's a paused hist trigger on system:event, the following would
start it when the filter condition was hit:

# echo enable_hist:system:event [ if filter] > event/trigger

And the following would disable a running system:event hist trigger:

# echo disable_hist:system:event [ if filter] > event/trigger

See Documentation/trace/events.txt for real examples.

Signed-off-by: Tom Zanussi <[email protected]>
---
include/linux/trace_events.h | 1 +
kernel/trace/trace.c | 11 ++++
kernel/trace/trace.h | 32 ++++++++++
kernel/trace/trace_events_hist.c | 115 ++++++++++++++++++++++++++++++++++++
kernel/trace/trace_events_trigger.c | 71 ++++++++++++----------
5 files changed, 199 insertions(+), 31 deletions(-)

diff --git a/include/linux/trace_events.h b/include/linux/trace_events.h
index 0faf48b..0f3ffdd 100644
--- a/include/linux/trace_events.h
+++ b/include/linux/trace_events.h
@@ -411,6 +411,7 @@ enum event_trigger_type {
ETT_STACKTRACE = (1 << 2),
ETT_EVENT_ENABLE = (1 << 3),
ETT_EVENT_HIST = (1 << 4),
+ ETT_HIST_ENABLE = (1 << 5),
};

extern int filter_match_preds(struct event_filter *filter, void *rec);
diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
index 16c64a2..c581750 100644
--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ -3761,6 +3761,10 @@ static const char readme_msg[] =
"\t trigger: traceon, traceoff\n"
"\t enable_event:<system>:<event>\n"
"\t disable_event:<system>:<event>\n"
+#ifdef CONFIG_HIST_TRIGGERS
+ "\t enable_hist:<system>:<event>\n"
+ "\t disable_hist:<system>:<event>\n"
+#endif
#ifdef CONFIG_STACKTRACE
"\t\t stacktrace\n"
#endif
@@ -3836,6 +3840,13 @@ static const char readme_msg[] =
"\t restart a paused hist trigger.\n\n"
"\t The 'clear' param will clear the contents of a running hist\n"
"\t trigger and leave its current paused/active state.\n\n"
+ "\t The enable_hist and disable_hist triggers can be used to\n"
+ "\t have one event conditionally start and stop another event's\n"
+ "\t already-attached hist trigger. Any number of enable_hist\n"
+ "\t and disable_hist triggers can be attached to a given event,\n"
+ "\t allowing that event to kick off and stop aggregations on\n"
+ "\t a host of other events. See Documentation/trace/events.txt\n"
+ "\t for examples.\n"
#endif
;

diff --git a/kernel/trace/trace.h b/kernel/trace/trace.h
index e6cb781..5e2e3b0 100644
--- a/kernel/trace/trace.h
+++ b/kernel/trace/trace.h
@@ -1102,8 +1102,10 @@ extern const struct file_operations event_hist_fops;

#ifdef CONFIG_HIST_TRIGGERS
extern int register_trigger_hist_cmd(void);
+extern int register_trigger_hist_enable_disable_cmds(void);
#else
static inline int register_trigger_hist_cmd(void) { return 0; }
+static inline int register_trigger_hist_enable_disable_cmds(void) { return 0; }
#endif

extern int register_trigger_cmds(void);
@@ -1121,6 +1123,34 @@ struct event_trigger_data {
struct list_head list;
};

+/* Avoid typos */
+#define ENABLE_EVENT_STR "enable_event"
+#define DISABLE_EVENT_STR "disable_event"
+#define ENABLE_HIST_STR "enable_hist"
+#define DISABLE_HIST_STR "disable_hist"
+
+struct enable_trigger_data {
+ struct trace_event_file *file;
+ bool enable;
+ bool hist;
+};
+
+extern int event_enable_trigger_print(struct seq_file *m,
+ struct event_trigger_ops *ops,
+ struct event_trigger_data *data);
+extern void event_enable_trigger_free(struct event_trigger_ops *ops,
+ struct event_trigger_data *data);
+extern int event_enable_trigger_func(struct event_command *cmd_ops,
+ struct trace_event_file *file,
+ char *glob, char *cmd, char *param);
+extern int event_enable_register_trigger(char *glob,
+ struct event_trigger_ops *ops,
+ struct event_trigger_data *data,
+ struct trace_event_file *file);
+extern void event_enable_unregister_trigger(char *glob,
+ struct event_trigger_ops *ops,
+ struct event_trigger_data *test,
+ struct trace_event_file *file);
extern void trigger_data_free(struct event_trigger_data *data);
extern int event_trigger_init(struct event_trigger_ops *ops,
struct event_trigger_data *data);
@@ -1134,6 +1164,8 @@ extern int set_trigger_filter(char *filter_str,
struct event_trigger_data *trigger_data,
struct trace_event_file *file);
extern int register_event_command(struct event_command *cmd);
+extern int unregister_event_command(struct event_command *cmd);
+extern int register_trigger_hist_enable_disable_cmds(void);

/**
* struct event_trigger_ops - callbacks for trace event triggers
diff --git a/kernel/trace/trace_events_hist.c b/kernel/trace/trace_events_hist.c
index 4ba7645..6a43611 100644
--- a/kernel/trace/trace_events_hist.c
+++ b/kernel/trace/trace_events_hist.c
@@ -1345,3 +1345,118 @@ __init int register_trigger_hist_cmd(void)

return ret;
}
+
+static void
+hist_enable_trigger(struct event_trigger_data *data, void *rec)
+{
+ struct enable_trigger_data *enable_data = data->private_data;
+ struct event_trigger_data *test;
+
+ list_for_each_entry_rcu(test, &enable_data->file->triggers, list) {
+ if (test->cmd_ops->trigger_type == ETT_EVENT_HIST) {
+ if (enable_data->enable)
+ test->paused = false;
+ else
+ test->paused = true;
+ break;
+ }
+ }
+}
+
+static void
+hist_enable_count_trigger(struct event_trigger_data *data, void *rec)
+{
+ if (!data->count)
+ return;
+
+ if (data->count != -1)
+ (data->count)--;
+
+ hist_enable_trigger(data, rec);
+}
+
+static struct event_trigger_ops hist_enable_trigger_ops = {
+ .func = hist_enable_trigger,
+ .print = event_enable_trigger_print,
+ .init = event_trigger_init,
+ .free = event_enable_trigger_free,
+};
+
+static struct event_trigger_ops hist_enable_count_trigger_ops = {
+ .func = hist_enable_count_trigger,
+ .print = event_enable_trigger_print,
+ .init = event_trigger_init,
+ .free = event_enable_trigger_free,
+};
+
+static struct event_trigger_ops hist_disable_trigger_ops = {
+ .func = hist_enable_trigger,
+ .print = event_enable_trigger_print,
+ .init = event_trigger_init,
+ .free = event_enable_trigger_free,
+};
+
+static struct event_trigger_ops hist_disable_count_trigger_ops = {
+ .func = hist_enable_count_trigger,
+ .print = event_enable_trigger_print,
+ .init = event_trigger_init,
+ .free = event_enable_trigger_free,
+};
+
+static struct event_trigger_ops *
+hist_enable_get_trigger_ops(char *cmd, char *param)
+{
+ struct event_trigger_ops *ops;
+ bool enable;
+
+ enable = (strcmp(cmd, ENABLE_HIST_STR) == 0);
+
+ if (enable)
+ ops = param ? &hist_enable_count_trigger_ops :
+ &hist_enable_trigger_ops;
+ else
+ ops = param ? &hist_disable_count_trigger_ops :
+ &hist_disable_trigger_ops;
+
+ return ops;
+}
+
+static struct event_command trigger_hist_enable_cmd = {
+ .name = ENABLE_HIST_STR,
+ .trigger_type = ETT_HIST_ENABLE,
+ .func = event_enable_trigger_func,
+ .reg = event_enable_register_trigger,
+ .unreg = event_enable_unregister_trigger,
+ .get_trigger_ops = hist_enable_get_trigger_ops,
+ .set_filter = set_trigger_filter,
+};
+
+static struct event_command trigger_hist_disable_cmd = {
+ .name = DISABLE_HIST_STR,
+ .trigger_type = ETT_HIST_ENABLE,
+ .func = event_enable_trigger_func,
+ .reg = event_enable_register_trigger,
+ .unreg = event_enable_unregister_trigger,
+ .get_trigger_ops = hist_enable_get_trigger_ops,
+ .set_filter = set_trigger_filter,
+};
+
+static __init void unregister_trigger_hist_enable_disable_cmds(void)
+{
+ unregister_event_command(&trigger_hist_enable_cmd);
+ unregister_event_command(&trigger_hist_disable_cmd);
+}
+
+__init int register_trigger_hist_enable_disable_cmds(void)
+{
+ int ret;
+
+ ret = register_event_command(&trigger_hist_enable_cmd);
+ if (WARN_ON(ret < 0))
+ return ret;
+ ret = register_event_command(&trigger_hist_disable_cmd);
+ if (WARN_ON(ret < 0))
+ unregister_trigger_hist_enable_disable_cmds();
+
+ return ret;
+}
diff --git a/kernel/trace/trace_events_trigger.c b/kernel/trace/trace_events_trigger.c
index e80f30b..9490d8f 100644
--- a/kernel/trace/trace_events_trigger.c
+++ b/kernel/trace/trace_events_trigger.c
@@ -338,7 +338,7 @@ __init int register_event_command(struct event_command *cmd)
* Currently we only unregister event commands from __init, so mark
* this __init too.
*/
-static __init int unregister_event_command(struct event_command *cmd)
+__init int unregister_event_command(struct event_command *cmd)
{
struct event_command *p, *n;
int ret = -ENODEV;
@@ -1052,15 +1052,6 @@ static __init void unregister_trigger_traceon_traceoff_cmds(void)
unregister_event_command(&trigger_traceoff_cmd);
}

-/* Avoid typos */
-#define ENABLE_EVENT_STR "enable_event"
-#define DISABLE_EVENT_STR "disable_event"
-
-struct enable_trigger_data {
- struct trace_event_file *file;
- bool enable;
-};
-
static void
event_enable_trigger(struct event_trigger_data *data, void *rec)
{
@@ -1090,14 +1081,16 @@ event_enable_count_trigger(struct event_trigger_data *data, void *rec)
event_enable_trigger(data, rec);
}

-static int
-event_enable_trigger_print(struct seq_file *m, struct event_trigger_ops *ops,
- struct event_trigger_data *data)
+int event_enable_trigger_print(struct seq_file *m,
+ struct event_trigger_ops *ops,
+ struct event_trigger_data *data)
{
struct enable_trigger_data *enable_data = data->private_data;

seq_printf(m, "%s:%s:%s",
- enable_data->enable ? ENABLE_EVENT_STR : DISABLE_EVENT_STR,
+ enable_data->hist ?
+ (enable_data->enable ? ENABLE_HIST_STR : DISABLE_HIST_STR) :
+ (enable_data->enable ? ENABLE_EVENT_STR : DISABLE_EVENT_STR),
enable_data->file->event_call->class->system,
trace_event_name(enable_data->file->event_call));

@@ -1114,9 +1107,8 @@ event_enable_trigger_print(struct seq_file *m, struct event_trigger_ops *ops,
return 0;
}

-static void
-event_enable_trigger_free(struct event_trigger_ops *ops,
- struct event_trigger_data *data)
+void event_enable_trigger_free(struct event_trigger_ops *ops,
+ struct event_trigger_data *data)
{
struct enable_trigger_data *enable_data = data->private_data;

@@ -1161,10 +1153,9 @@ static struct event_trigger_ops event_disable_count_trigger_ops = {
.free = event_enable_trigger_free,
};

-static int
-event_enable_trigger_func(struct event_command *cmd_ops,
- struct trace_event_file *file,
- char *glob, char *cmd, char *param)
+int event_enable_trigger_func(struct event_command *cmd_ops,
+ struct trace_event_file *file,
+ char *glob, char *cmd, char *param)
{
struct trace_event_file *event_enable_file;
struct enable_trigger_data *enable_data;
@@ -1173,6 +1164,7 @@ event_enable_trigger_func(struct event_command *cmd_ops,
struct trace_array *tr = file->tr;
const char *system;
const char *event;
+ bool hist = false;
char *trigger;
char *number;
bool enable;
@@ -1197,8 +1189,15 @@ event_enable_trigger_func(struct event_command *cmd_ops,
if (!event_enable_file)
goto out;

- enable = strcmp(cmd, ENABLE_EVENT_STR) == 0;
+#ifdef CONFIG_HIST_TRIGGERS
+ hist = ((strcmp(cmd, ENABLE_HIST_STR) == 0) ||
+ (strcmp(cmd, DISABLE_HIST_STR) == 0));

+ enable = ((strcmp(cmd, ENABLE_EVENT_STR) == 0) ||
+ (strcmp(cmd, ENABLE_HIST_STR) == 0));
+#else
+ enable = strcmp(cmd, ENABLE_EVENT_STR) == 0;
+#endif
trigger_ops = cmd_ops->get_trigger_ops(cmd, trigger);

ret = -ENOMEM;
@@ -1218,6 +1217,7 @@ event_enable_trigger_func(struct event_command *cmd_ops,
INIT_LIST_HEAD(&trigger_data->list);
RCU_INIT_POINTER(trigger_data->filter, NULL);

+ enable_data->hist = hist;
enable_data->enable = enable;
enable_data->file = event_enable_file;
trigger_data->private_data = enable_data;
@@ -1295,10 +1295,10 @@ event_enable_trigger_func(struct event_command *cmd_ops,
goto out;
}

-static int event_enable_register_trigger(char *glob,
- struct event_trigger_ops *ops,
- struct event_trigger_data *data,
- struct trace_event_file *file)
+int event_enable_register_trigger(char *glob,
+ struct event_trigger_ops *ops,
+ struct event_trigger_data *data,
+ struct trace_event_file *file)
{
struct enable_trigger_data *enable_data = data->private_data;
struct enable_trigger_data *test_enable_data;
@@ -1308,6 +1308,8 @@ static int event_enable_register_trigger(char *glob,
list_for_each_entry_rcu(test, &file->triggers, list) {
test_enable_data = test->private_data;
if (test_enable_data &&
+ (test->cmd_ops->trigger_type ==
+ data->cmd_ops->trigger_type) &&
(test_enable_data->file == enable_data->file)) {
ret = -EEXIST;
goto out;
@@ -1333,10 +1335,10 @@ out:
return ret;
}

-static void event_enable_unregister_trigger(char *glob,
- struct event_trigger_ops *ops,
- struct event_trigger_data *test,
- struct trace_event_file *file)
+void event_enable_unregister_trigger(char *glob,
+ struct event_trigger_ops *ops,
+ struct event_trigger_data *test,
+ struct trace_event_file *file)
{
struct enable_trigger_data *test_enable_data = test->private_data;
struct enable_trigger_data *enable_data;
@@ -1346,6 +1348,8 @@ static void event_enable_unregister_trigger(char *glob,
list_for_each_entry_rcu(data, &file->triggers, list) {
enable_data = data->private_data;
if (enable_data &&
+ (data->cmd_ops->trigger_type ==
+ test->cmd_ops->trigger_type) &&
(enable_data->file == test_enable_data->file)) {
unregistered = true;
list_del_rcu(&data->list);
@@ -1365,8 +1369,12 @@ event_enable_get_trigger_ops(char *cmd, char *param)
struct event_trigger_ops *ops;
bool enable;

+#ifdef CONFIG_HIST_TRIGGERS
+ enable = ((strcmp(cmd, ENABLE_EVENT_STR) == 0) ||
+ (strcmp(cmd, ENABLE_HIST_STR) == 0));
+#else
enable = strcmp(cmd, ENABLE_EVENT_STR) == 0;
-
+#endif
if (enable)
ops = param ? &event_enable_count_trigger_ops :
&event_enable_trigger_ops;
@@ -1437,6 +1445,7 @@ __init int register_trigger_cmds(void)
register_trigger_snapshot_cmd();
register_trigger_stacktrace_cmd();
register_trigger_enable_disable_cmds();
+ register_trigger_hist_enable_disable_cmds();
register_trigger_hist_cmd();

return 0;
--
1.9.3

2015-07-16 17:24:19

by Tom Zanussi

[permalink] [raw]
Subject: [PATCH v9 22/22] tracing: Add 'hist' trigger Documentation

Add documentation and usage examples for 'hist' triggers.

Signed-off-by: Tom Zanussi <[email protected]>
---
Documentation/trace/events.txt | 1131 ++++++++++++++++++++++++++++++++++++++++
1 file changed, 1131 insertions(+)

diff --git a/Documentation/trace/events.txt b/Documentation/trace/events.txt
index 75d25a1..fb55ac3 100644
--- a/Documentation/trace/events.txt
+++ b/Documentation/trace/events.txt
@@ -494,3 +494,1134 @@ The following commands are supported:

Note that there can be only one traceon or traceoff trigger per
triggering event.
+
+- hist
+
+ This command aggregates event hits into a hash table keyed on one or
+ more trace event format fields (or stacktrace) and a set of running
+ totals derived from one or more trace event format fields and/or
+ event counts (hitcount).
+
+ The format of a hist trigger is as follows:
+
+ hist:keys=<field1>[,field2,...][:values=<field1>[,field2,...]]
+ [:sort=field1,field2,...][:size=#entries][:pause][:continue]
+ [:clear] [if <filter>]
+
+ When a matching event is hit, an entry is added to a hash table
+ using the key(s) and value(s) named. Keys and values correspond to
+ fields in the event's format description. Values must correspond to
+ numeric fields - on an event hit, the value(s) will be added to a
+ sum kept for that field. The special string 'hitcount' can be used
+ in place of an explicit value field - this is simply a count of
+ event hits. If 'values' isn't specified, an implicit 'hitcount'
+ value will be automatically created and used as the only value.
+ Keys can be any field, or the special string 'stacktrace', which
+ will use the event's kernel stacktrace as the key. The keywords
+ 'keys' or 'key' can be used to specify keys, and the keywords
+ 'values', 'vals', or 'val' can be used to specify values. Compound
+ keys consisting of up to two fields can be specified by the 'keys'
+ keyword. Hashing a compound key produces a unique entry in the
+ table for each unique combination of component keys, and can be
+ useful for providing more fine-grained summaries of event data.
+ Additionally, sort keys consisting of up to two fields can be
+ specified by the 'sort' keyword. If more than one field is
+ specified, the result will be a 'sort within a sort': the first key
+ is taken to be the primary sort key and the second the secondary
+ key.
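+
+ As an illustrative example (not part of the surrounding text), a
+ compound key pairing a pid with a call site would create one table
+ entry per unique (pid, call site) combination:
+
+ # echo 'hist:keys=common_pid,call_site' > \
/sys/kernel/debug/tracing/events/kmem/kmalloc/trigger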
+
+ 'hist' triggers add a 'hist' file to each event's subdirectory.
+ Reading the 'hist' file for the event will dump the hash table in
+ its entirety to stdout. Each printed hash table entry is a simple
+ list of the keys and values comprising the entry; keys are printed
+ first and are delineated by curly braces, and are followed by the
+ set of value fields for the entry. By default, numeric fields are
+ displayed as base-10 integers. This can be modified by appending
+ any of the following modifiers to the field name:
+
+ .hex display a number as a hex value
+ .sym display an address as a symbol
+ .sym-offset display an address as a symbol and offset
+ .syscall display a syscall id as a system call name
+ .execname display a common_pid as a program name
+
+ A typical usage scenario would be the following, which enables a
+ hist trigger, reads its current contents, and then turns it off:
+
+ # echo 'hist:keys=skbaddr.hex:vals=len' > \
+ /sys/kernel/debug/tracing/events/net/netif_rx/trigger
+
+ # cat /sys/kernel/debug/tracing/events/net/netif_rx/hist
+
+ # echo '!hist:keys=skbaddr.hex:vals=len' > \
+ /sys/kernel/debug/tracing/events/net/netif_rx/trigger
+
+ The trigger file itself can be read to show the details of the
+ currently attached hist trigger. This information is also displayed
+ at the top of the 'hist' file when read.
+
+ By default, the size of the hash table is 2048 entries. The 'size'
+ param can be used to specify more or fewer than that. The units are
+ in terms of hashtable entries - if a run uses more entries than
+ specified, the results will show the number of 'drops', the number
+ of hits that were ignored. The size should be a power of 2 between
+ 128 and 131072 (any non-power-of-2 number specified will be rounded
+ up).
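+
+ For example (illustrative), to allow up to 8192 entries:
+
+ # echo 'hist:keys=call_site:vals=bytes_req:size=8192' > \
/sys/kernel/debug/tracing/events/kmem/kmalloc/trigger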
+
+ The 'sort' param can be used to specify a value field to sort on.
+ The default if unspecified is 'hitcount' and the default sort order
+ is 'ascending'. To sort in the opposite direction, append
+ '.descending' to the sort key.
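+
+ For example (an illustrative command), to sort the kmalloc table by
+ total bytes requested, largest first:
+
+ # echo 'hist:keys=call_site:vals=bytes_req:sort=bytes_req.descending' > \
/sys/kernel/debug/tracing/events/kmem/kmalloc/trigger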
+
+ The 'pause' param can be used to pause an existing hist trigger or
+ to start a hist trigger but not log any events until told to do so.
+ 'continue' or 'cont' can be used to start or restart a paused hist
+ trigger.
+
+ The 'clear' param will clear the contents of a running hist trigger
+ and leave its current paused/active state unchanged.
+
+- enable_hist/disable_hist
+
+ The enable_hist and disable_hist triggers can be used to have one
+ event conditionally start and stop another event's already-attached
+ hist trigger. Any number of enable_hist and disable_hist triggers
+ can be attached to a given event, allowing that event to kick off
+ and stop aggregations on a host of other events.
+
+ The format is very similar to the enable/disable_event triggers:
+
+ enable_hist:<system>:<event>[:count]
+ disable_hist:<system>:<event>[:count]
+
+ Instead of enabling or disabling the tracing of the target event
+ into the trace buffer as the enable/disable_event triggers do, the
+ enable/disable_hist triggers enable or disable the aggregation of
+ the target event into a hash table.
+
+ A typical usage scenario for the enable_hist/disable_hist triggers
+ would be to first set up a paused hist trigger on some event,
+ followed by an enable_hist/disable_hist pair that turns the hist
+ aggregation on and off when conditions of interest are hit:
+
+ # echo 'hist:keys=skbaddr.hex:vals=len:pause' > \
+ /sys/kernel/debug/tracing/events/net/netif_receive_skb/trigger
+
+ # echo 'enable_hist:net:netif_receive_skb if filename==/usr/bin/wget' > \
+ /sys/kernel/debug/tracing/events/sched/sched_process_exec/trigger
+
+ # echo 'disable_hist:net:netif_receive_skb if comm==wget' > \
+ /sys/kernel/debug/tracing/events/sched/sched_process_exit/trigger
+
+ The above sets up an initially paused hist trigger which is unpaused
+ and starts aggregating events when a given program is executed, and
+ which stops aggregating when the process exits and the hist trigger
+ is paused again.
+
+ The examples below provide a more concrete illustration of the
+ concepts and typical usage patterns discussed above.
+
+
+6.2 'hist' trigger examples
+---------------------------
+
+ The first set of examples creates aggregations using the kmalloc
+ event. The fields that can be used for the hist trigger are listed
+ in the kmalloc event's format file:
+
+ # cat /sys/kernel/debug/tracing/events/kmem/kmalloc/format
+ name: kmalloc
+ ID: 374
+ format:
+ field:unsigned short common_type; offset:0; size:2; signed:0;
+ field:unsigned char common_flags; offset:2; size:1; signed:0;
+ field:unsigned char common_preempt_count; offset:3; size:1; signed:0;
+ field:int common_pid; offset:4; size:4; signed:1;
+
+ field:unsigned long call_site; offset:8; size:8; signed:0;
+ field:const void * ptr; offset:16; size:8; signed:0;
+ field:size_t bytes_req; offset:24; size:8; signed:0;
+ field:size_t bytes_alloc; offset:32; size:8; signed:0;
+ field:gfp_t gfp_flags; offset:40; size:4; signed:0;
+
+ We'll start by creating a hist trigger that generates a simple table
+ that lists the total number of bytes requested for each function in
+ the kernel that made one or more calls to kmalloc:
+
+ # echo 'hist:key=call_site:val=bytes_req' > \
+ /sys/kernel/debug/tracing/events/kmem/kmalloc/trigger
+
+ This tells the tracing system to create a 'hist' trigger using the
+ call_site field of the kmalloc event as the key for the table, which
+ just means that each unique call_site address will have an entry
+ created for it in the table. The 'val=bytes_req' parameter tells
+ the hist trigger that for each unique entry (call_site) in the
+ table, it should keep a running total of the number of bytes
+ requested by that call_site.
+
+ We'll let it run for a while and then dump the contents of the 'hist'
+ file in the kmalloc event's subdirectory (for readability, a number
+ of entries have been omitted):
+
+ # cat /sys/kernel/debug/tracing/events/kmem/kmalloc/hist
+ # trigger info: hist:keys=call_site:vals=bytes_req:sort=hitcount:size=2048 [active]
+
+ { call_site: 18446744072106379007 } hitcount: 1 bytes_req: 176
+ { call_site: 18446744071579557049 } hitcount: 1 bytes_req: 1024
+ { call_site: 18446744071580608289 } hitcount: 1 bytes_req: 16384
+ { call_site: 18446744071581827654 } hitcount: 1 bytes_req: 24
+ { call_site: 18446744071580700980 } hitcount: 1 bytes_req: 8
+ { call_site: 18446744071579359876 } hitcount: 1 bytes_req: 152
+ { call_site: 18446744071580795365 } hitcount: 3 bytes_req: 144
+ { call_site: 18446744071581303129 } hitcount: 3 bytes_req: 144
+ { call_site: 18446744071580713234 } hitcount: 4 bytes_req: 2560
+ { call_site: 18446744071580933750 } hitcount: 4 bytes_req: 736
+ .
+ .
+ .
+ { call_site: 18446744072106047046 } hitcount: 69 bytes_req: 5576
+ { call_site: 18446744071582116407 } hitcount: 73 bytes_req: 2336
+ { call_site: 18446744072106054684 } hitcount: 136 bytes_req: 140504
+ { call_site: 18446744072106224230 } hitcount: 136 bytes_req: 19584
+ { call_site: 18446744072106078074 } hitcount: 153 bytes_req: 2448
+ { call_site: 18446744072106062406 } hitcount: 153 bytes_req: 36720
+ { call_site: 18446744071582507929 } hitcount: 153 bytes_req: 37088
+ { call_site: 18446744072102520590 } hitcount: 273 bytes_req: 10920
+ { call_site: 18446744071582143559 } hitcount: 358 bytes_req: 716
+ { call_site: 18446744072106465852 } hitcount: 417 bytes_req: 56712
+ { call_site: 18446744072102523378 } hitcount: 485 bytes_req: 27160
+ { call_site: 18446744072099568646 } hitcount: 1676 bytes_req: 33520
+
+ Totals:
+ Hits: 4610
+ Entries: 45
+ Dropped: 0
+
+ The output displays a line for each entry, beginning with the key
+ specified in the trigger, followed by the value(s) also specified in
+ the trigger. At the beginning of the output is a line that displays
+ the trigger info, which can also be displayed by reading the
+ 'trigger' file:
+
+ # cat /sys/kernel/debug/tracing/events/kmem/kmalloc/trigger
+ hist:keys=call_site:vals=bytes_req:sort=hitcount:size=2048 [active]
+
+ At the end of the output are a few lines that display the overall
+ totals for the run. The 'Hits' field shows the total number of
+ times the event trigger was hit, the 'Entries' field shows the
+ total number of used entries in the hash table, and the 'Dropped'
+ field shows the number of hits that were dropped because the number
+ of used entries for the run exceeded the maximum number of entries
+ allowed for the table (normally 0, but if not, a hint that you may
+ want to increase the size of the table using the 'size' param).
+
+ Notice in the above output that there's an extra field, 'hitcount',
+ that wasn't specified in the trigger. Also notice that the trigger
+ info contains a param, 'sort=hitcount', which wasn't specified in
+ the trigger either. The reason is that every trigger implicitly keeps a
+ count of the total number of hits attributed to a given entry,
+ called the 'hitcount', and that in the absence of a user-specified
+ sort param, the hitcount is used as the default sort field.
+
+ The value 'hitcount' can be used in place of an explicit value in
+ the 'values' param if you don't really need to have any particular
+ field summed and are mainly interested in hit frequencies.
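+
+ For example, an illustrative trigger that simply counts kmalloc
+ calls per call_site, with no field summed, would be:
+
+ # echo 'hist:key=call_site:val=hitcount' > \
+ /sys/kernel/debug/tracing/events/kmem/kmalloc/trigger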
+
+ To turn the hist trigger off, simply call up the trigger in command
+ history and re-execute it with a '!' prepended:
+
+ # echo '!hist:key=call_site:val=bytes_req' > \
+ /sys/kernel/debug/tracing/events/kmem/kmalloc/trigger
+
+ Finally, notice that the call_site as displayed in the output above
+ isn't really very useful. It's an address, but normally addresses
+ are displayed in hex. To have a numeric field displayed as hex
+ values, simply append '.hex' to the field name in the trigger:
+
+ # echo 'hist:key=call_site.hex:val=bytes_req' > \
+ /sys/kernel/debug/tracing/events/kmem/kmalloc/trigger
+
+ # cat /sys/kernel/debug/tracing/events/kmem/kmalloc/hist
+ # trigger info: hist:keys=call_site.hex:vals=bytes_req:sort=hitcount:size=2048 [active]
+
+ { call_site: ffffffffa026b291 } hitcount: 1 bytes_req: 433
+ { call_site: ffffffffa07186ff } hitcount: 1 bytes_req: 176
+ { call_site: ffffffff811ae721 } hitcount: 1 bytes_req: 16384
+ { call_site: ffffffff811c5134 } hitcount: 1 bytes_req: 8
+ { call_site: ffffffffa04a9ebb } hitcount: 1 bytes_req: 511
+ { call_site: ffffffff8122e0a6 } hitcount: 1 bytes_req: 12
+ { call_site: ffffffff8107da84 } hitcount: 1 bytes_req: 152
+ { call_site: ffffffff812d8246 } hitcount: 1 bytes_req: 24
+ { call_site: ffffffff811dc1e5 } hitcount: 3 bytes_req: 144
+ { call_site: ffffffffa02515e8 } hitcount: 3 bytes_req: 648
+ { call_site: ffffffff81258159 } hitcount: 3 bytes_req: 144
+ { call_site: ffffffff811c80f4 } hitcount: 4 bytes_req: 544
+ .
+ .
+ .
+ { call_site: ffffffffa06c7646 } hitcount: 106 bytes_req: 8024
+ { call_site: ffffffffa06cb246 } hitcount: 132 bytes_req: 31680
+ { call_site: ffffffffa06cef7a } hitcount: 132 bytes_req: 2112
+ { call_site: ffffffff8137e399 } hitcount: 132 bytes_req: 23232
+ { call_site: ffffffffa06c941c } hitcount: 185 bytes_req: 171360
+ { call_site: ffffffffa06f2a66 } hitcount: 185 bytes_req: 26640
+ { call_site: ffffffffa036a70e } hitcount: 265 bytes_req: 10600
+ { call_site: ffffffff81325447 } hitcount: 292 bytes_req: 584
+ { call_site: ffffffffa072da3c } hitcount: 446 bytes_req: 60656
+ { call_site: ffffffffa036b1f2 } hitcount: 526 bytes_req: 29456
+ { call_site: ffffffffa0099c06 } hitcount: 1780 bytes_req: 35600
+
+ Totals:
+ Hits: 4775
+ Entries: 46
+ Dropped: 0
+
+ Even that's only marginally more useful - while hex values do look
+ more like addresses, what users are typically more interested in
+ when looking at text addresses are the corresponding symbols
+ instead. To have an address displayed as a symbolic value,
+ simply append '.sym' or '.sym-offset' to the field name in the
+ trigger:
+
+ # echo 'hist:key=call_site.sym:val=bytes_req' > \
+ /sys/kernel/debug/tracing/events/kmem/kmalloc/trigger
+
+ # cat /sys/kernel/debug/tracing/events/kmem/kmalloc/hist
+ # trigger info: hist:keys=call_site.sym:vals=bytes_req:sort=hitcount:size=2048 [active]
+
+ { call_site: [ffffffff810adcb9] syslog_print_all } hitcount: 1 bytes_req: 1024
+ { call_site: [ffffffff8154bc62] usb_control_msg } hitcount: 1 bytes_req: 8
+ { call_site: [ffffffffa00bf6fe] hidraw_send_report [hid] } hitcount: 1 bytes_req: 7
+ { call_site: [ffffffff8154acbe] usb_alloc_urb } hitcount: 1 bytes_req: 192
+ { call_site: [ffffffffa00bf1ca] hidraw_report_event [hid] } hitcount: 1 bytes_req: 7
+ { call_site: [ffffffff811e3a25] __seq_open_private } hitcount: 1 bytes_req: 40
+ { call_site: [ffffffff8109524a] alloc_fair_sched_group } hitcount: 2 bytes_req: 128
+ { call_site: [ffffffff811febd5] fsnotify_alloc_group } hitcount: 2 bytes_req: 528
+ { call_site: [ffffffff81440f58] __tty_buffer_request_room } hitcount: 2 bytes_req: 2624
+ { call_site: [ffffffff81200ba6] inotify_new_group } hitcount: 2 bytes_req: 96
+ { call_site: [ffffffffa05e19af] ieee80211_start_tx_ba_session [mac80211] } hitcount: 2 bytes_req: 464
+ { call_site: [ffffffff81672406] tcp_get_metrics } hitcount: 2 bytes_req: 304
+ { call_site: [ffffffff81097ec2] alloc_rt_sched_group } hitcount: 2 bytes_req: 128
+ { call_site: [ffffffff81089b05] sched_create_group } hitcount: 2 bytes_req: 1424
+ .
+ .
+ .
+ { call_site: [ffffffffa04a580c] intel_crtc_page_flip [i915] } hitcount: 1185 bytes_req: 123240
+ { call_site: [ffffffffa0287592] drm_mode_page_flip_ioctl [drm] } hitcount: 1185 bytes_req: 104280
+ { call_site: [ffffffffa04c4a3c] intel_plane_duplicate_state [i915] } hitcount: 1402 bytes_req: 190672
+ { call_site: [ffffffff812891ca] ext4_find_extent } hitcount: 1518 bytes_req: 146208
+ { call_site: [ffffffffa029070e] drm_vma_node_allow [drm] } hitcount: 1746 bytes_req: 69840
+ { call_site: [ffffffffa045e7c4] i915_gem_do_execbuffer.isra.23 [i915] } hitcount: 2021 bytes_req: 792312
+ { call_site: [ffffffffa02911f2] drm_modeset_lock_crtc [drm] } hitcount: 2592 bytes_req: 145152
+ { call_site: [ffffffffa0489a66] intel_ring_begin [i915] } hitcount: 2629 bytes_req: 378576
+ { call_site: [ffffffffa046041c] i915_gem_execbuffer2 [i915] } hitcount: 2629 bytes_req: 3783248
+ { call_site: [ffffffff81325607] apparmor_file_alloc_security } hitcount: 5192 bytes_req: 10384
+ { call_site: [ffffffffa00b7c06] hid_report_raw_event [hid] } hitcount: 5529 bytes_req: 110584
+ { call_site: [ffffffff8131ebf7] aa_alloc_task_context } hitcount: 21943 bytes_req: 702176
+ { call_site: [ffffffff8125847d] ext4_htree_store_dirent } hitcount: 55759 bytes_req: 5074265
+
+ Totals:
+ Hits: 109928
+ Entries: 71
+ Dropped: 0
+
+ Because the default sort key above is 'hitcount', the above shows
+ the list of call_sites by increasing hitcount, so that at the bottom
+ we see the functions that made the most kmalloc calls during the
+ run. If instead we wanted to see the top kmalloc callers in
+ terms of the number of bytes requested rather than the number of
+ calls, and we wanted the top caller to appear at the top, we can use
+ the 'sort' param, along with the 'descending' modifier:
+
+ # echo 'hist:key=call_site.sym:val=bytes_req:sort=bytes_req.descending' > \
+ /sys/kernel/debug/tracing/events/kmem/kmalloc/trigger
+
+ # cat /sys/kernel/debug/tracing/events/kmem/kmalloc/hist
+ # trigger info: hist:keys=call_site.sym:vals=bytes_req:sort=bytes_req.descending:size=2048 [active]
+
+ { call_site: [ffffffffa046041c] i915_gem_execbuffer2 [i915] } hitcount: 2186 bytes_req: 3397464
+ { call_site: [ffffffffa045e7c4] i915_gem_do_execbuffer.isra.23 [i915] } hitcount: 1790 bytes_req: 712176
+ { call_site: [ffffffff8125847d] ext4_htree_store_dirent } hitcount: 8132 bytes_req: 513135
+ { call_site: [ffffffff811e2a1b] seq_buf_alloc } hitcount: 106 bytes_req: 440128
+ { call_site: [ffffffffa0489a66] intel_ring_begin [i915] } hitcount: 2186 bytes_req: 314784
+ { call_site: [ffffffff812891ca] ext4_find_extent } hitcount: 2174 bytes_req: 208992
+ { call_site: [ffffffff811ae8e1] __kmalloc } hitcount: 8 bytes_req: 131072
+ { call_site: [ffffffffa04c4a3c] intel_plane_duplicate_state [i915] } hitcount: 859 bytes_req: 116824
+ { call_site: [ffffffffa02911f2] drm_modeset_lock_crtc [drm] } hitcount: 1834 bytes_req: 102704
+ { call_site: [ffffffffa04a580c] intel_crtc_page_flip [i915] } hitcount: 972 bytes_req: 101088
+ { call_site: [ffffffffa0287592] drm_mode_page_flip_ioctl [drm] } hitcount: 972 bytes_req: 85536
+ { call_site: [ffffffffa00b7c06] hid_report_raw_event [hid] } hitcount: 3333 bytes_req: 66664
+ { call_site: [ffffffff8137e559] sg_kmalloc } hitcount: 209 bytes_req: 61632
+ .
+ .
+ .
+ { call_site: [ffffffff81095225] alloc_fair_sched_group } hitcount: 2 bytes_req: 128
+ { call_site: [ffffffff81097ec2] alloc_rt_sched_group } hitcount: 2 bytes_req: 128
+ { call_site: [ffffffff812d8406] copy_semundo } hitcount: 2 bytes_req: 48
+ { call_site: [ffffffff81200ba6] inotify_new_group } hitcount: 1 bytes_req: 48
+ { call_site: [ffffffffa027121a] drm_getmagic [drm] } hitcount: 1 bytes_req: 48
+ { call_site: [ffffffff811e3a25] __seq_open_private } hitcount: 1 bytes_req: 40
+ { call_site: [ffffffff811c52f4] bprm_change_interp } hitcount: 2 bytes_req: 16
+ { call_site: [ffffffff8154bc62] usb_control_msg } hitcount: 1 bytes_req: 8
+ { call_site: [ffffffffa00bf1ca] hidraw_report_event [hid] } hitcount: 1 bytes_req: 7
+ { call_site: [ffffffffa00bf6fe] hidraw_send_report [hid] } hitcount: 1 bytes_req: 7
+
+ Totals:
+ Hits: 32133
+ Entries: 81
+ Dropped: 0
+
+ To display the offset and size information in addition to the symbol
+ name, just use 'sym-offset' instead:
+
+ # echo 'hist:key=call_site.sym-offset:val=bytes_req:sort=bytes_req.descending' > \
+ /sys/kernel/debug/tracing/events/kmem/kmalloc/trigger
+
+ # cat /sys/kernel/debug/tracing/events/kmem/kmalloc/hist
+ # trigger info: hist:keys=call_site.sym-offset:vals=bytes_req:sort=bytes_req.descending:size=2048 [active]
+
+ { call_site: [ffffffffa046041c] i915_gem_execbuffer2+0x6c/0x2c0 [i915] } hitcount: 4569 bytes_req: 3163720
+ { call_site: [ffffffffa0489a66] intel_ring_begin+0xc6/0x1f0 [i915] } hitcount: 4569 bytes_req: 657936
+ { call_site: [ffffffffa045e7c4] i915_gem_do_execbuffer.isra.23+0x694/0x1020 [i915] } hitcount: 1519 bytes_req: 472936
+ { call_site: [ffffffffa045e646] i915_gem_do_execbuffer.isra.23+0x516/0x1020 [i915] } hitcount: 3050 bytes_req: 211832
+ { call_site: [ffffffff811e2a1b] seq_buf_alloc+0x1b/0x50 } hitcount: 34 bytes_req: 148384
+ { call_site: [ffffffffa04a580c] intel_crtc_page_flip+0xbc/0x870 [i915] } hitcount: 1385 bytes_req: 144040
+ { call_site: [ffffffff811ae8e1] __kmalloc+0x191/0x1b0 } hitcount: 8 bytes_req: 131072
+ { call_site: [ffffffffa0287592] drm_mode_page_flip_ioctl+0x282/0x360 [drm] } hitcount: 1385 bytes_req: 121880
+ { call_site: [ffffffffa02911f2] drm_modeset_lock_crtc+0x32/0x100 [drm] } hitcount: 1848 bytes_req: 103488
+ { call_site: [ffffffffa04c4a3c] intel_plane_duplicate_state+0x2c/0xa0 [i915] } hitcount: 461 bytes_req: 62696
+ { call_site: [ffffffffa029070e] drm_vma_node_allow+0x2e/0xd0 [drm] } hitcount: 1541 bytes_req: 61640
+ { call_site: [ffffffff815f8d7b] sk_prot_alloc+0xcb/0x1b0 } hitcount: 57 bytes_req: 57456
+ .
+ .
+ .
+ { call_site: [ffffffff8109524a] alloc_fair_sched_group+0x5a/0x1a0 } hitcount: 2 bytes_req: 128
+ { call_site: [ffffffffa027b921] drm_vm_open_locked+0x31/0xa0 [drm] } hitcount: 3 bytes_req: 96
+ { call_site: [ffffffff8122e266] proc_self_follow_link+0x76/0xb0 } hitcount: 8 bytes_req: 96
+ { call_site: [ffffffff81213e80] load_elf_binary+0x240/0x1650 } hitcount: 3 bytes_req: 84
+ { call_site: [ffffffff8154bc62] usb_control_msg+0x42/0x110 } hitcount: 1 bytes_req: 8
+ { call_site: [ffffffffa00bf6fe] hidraw_send_report+0x7e/0x1a0 [hid] } hitcount: 1 bytes_req: 7
+ { call_site: [ffffffffa00bf1ca] hidraw_report_event+0x8a/0x120 [hid] } hitcount: 1 bytes_req: 7
+
+ Totals:
+ Hits: 26098
+ Entries: 64
+ Dropped: 0
+
+ We can also add multiple fields to the 'values' param. For example,
+ we might want to see the total number of bytes allocated alongside
+ bytes requested, and display the result sorted by bytes allocated in
+ a descending order:
+
+ # echo 'hist:keys=call_site.sym:values=bytes_req,bytes_alloc:sort=bytes_alloc.descending' > \
+ /sys/kernel/debug/tracing/events/kmem/kmalloc/trigger
+
+ # cat /sys/kernel/debug/tracing/events/kmem/kmalloc/hist
+ # trigger info: hist:keys=call_site.sym:vals=bytes_req,bytes_alloc:sort=bytes_alloc.descending:size=2048 [active]
+
+ { call_site: [ffffffffa046041c] i915_gem_execbuffer2 [i915] } hitcount: 7403 bytes_req: 4084360 bytes_alloc: 5958016
+ { call_site: [ffffffff811e2a1b] seq_buf_alloc } hitcount: 541 bytes_req: 2213968 bytes_alloc: 2228224
+ { call_site: [ffffffffa0489a66] intel_ring_begin [i915] } hitcount: 7404 bytes_req: 1066176 bytes_alloc: 1421568
+ { call_site: [ffffffffa045e7c4] i915_gem_do_execbuffer.isra.23 [i915] } hitcount: 1565 bytes_req: 557368 bytes_alloc: 1037760
+ { call_site: [ffffffff8125847d] ext4_htree_store_dirent } hitcount: 9557 bytes_req: 595778 bytes_alloc: 695744
+ { call_site: [ffffffffa045e646] i915_gem_do_execbuffer.isra.23 [i915] } hitcount: 5839 bytes_req: 430680 bytes_alloc: 470400
+ { call_site: [ffffffffa04c4a3c] intel_plane_duplicate_state [i915] } hitcount: 2388 bytes_req: 324768 bytes_alloc: 458496
+ { call_site: [ffffffffa02911f2] drm_modeset_lock_crtc [drm] } hitcount: 3911 bytes_req: 219016 bytes_alloc: 250304
+ { call_site: [ffffffff815f8d7b] sk_prot_alloc } hitcount: 235 bytes_req: 236880 bytes_alloc: 240640
+ { call_site: [ffffffff8137e559] sg_kmalloc } hitcount: 557 bytes_req: 169024 bytes_alloc: 221760
+ { call_site: [ffffffffa00b7c06] hid_report_raw_event [hid] } hitcount: 9378 bytes_req: 187548 bytes_alloc: 206312
+ { call_site: [ffffffffa04a580c] intel_crtc_page_flip [i915] } hitcount: 1519 bytes_req: 157976 bytes_alloc: 194432
+ .
+ .
+ .
+ { call_site: [ffffffff8109bd3b] sched_autogroup_create_attach } hitcount: 2 bytes_req: 144 bytes_alloc: 192
+ { call_site: [ffffffff81097ee8] alloc_rt_sched_group } hitcount: 2 bytes_req: 128 bytes_alloc: 128
+ { call_site: [ffffffff8109524a] alloc_fair_sched_group } hitcount: 2 bytes_req: 128 bytes_alloc: 128
+ { call_site: [ffffffff81095225] alloc_fair_sched_group } hitcount: 2 bytes_req: 128 bytes_alloc: 128
+ { call_site: [ffffffff81097ec2] alloc_rt_sched_group } hitcount: 2 bytes_req: 128 bytes_alloc: 128
+ { call_site: [ffffffff81213e80] load_elf_binary } hitcount: 3 bytes_req: 84 bytes_alloc: 96
+ { call_site: [ffffffff81079a2e] kthread_create_on_node } hitcount: 1 bytes_req: 56 bytes_alloc: 64
+ { call_site: [ffffffffa00bf6fe] hidraw_send_report [hid] } hitcount: 1 bytes_req: 7 bytes_alloc: 8
+ { call_site: [ffffffff8154bc62] usb_control_msg } hitcount: 1 bytes_req: 8 bytes_alloc: 8
+ { call_site: [ffffffffa00bf1ca] hidraw_report_event [hid] } hitcount: 1 bytes_req: 7 bytes_alloc: 8
+
+ Totals:
+ Hits: 66598
+ Entries: 65
+ Dropped: 0
+
+ Finally, to finish off our kmalloc example, instead of simply having
+ the hist trigger display symbolic call_sites, we can have the hist
+ trigger additionally display the complete set of kernel stack traces
+ that led to each call_site. To do that, we simply use the special
+ value 'stacktrace' for the key param:
+
+ # echo 'hist:keys=stacktrace:values=bytes_req,bytes_alloc:sort=bytes_alloc' > \
+ /sys/kernel/debug/tracing/events/kmem/kmalloc/trigger
+
+ The above trigger will use the kernel stack trace in effect when an
+ event is triggered as the key for the hash table. This allows the
+ enumeration of every kernel callpath that led up to a particular
+ event, along with a running total of any of the event fields for
+ that event. Here we tally bytes requested and bytes allocated for
+ every callpath in the system that led up to a kmalloc (in this case
+ every callpath to a kmalloc for a kernel compile):
+
+ # cat /sys/kernel/debug/tracing/events/kmem/kmalloc/hist
+ # trigger info: hist:keys=stacktrace:vals=bytes_req,bytes_alloc:sort=bytes_alloc:size=2048 [active]
+
+ { stacktrace:
+ __kmalloc_track_caller+0x10b/0x1a0
+ kmemdup+0x20/0x50
+ hidraw_report_event+0x8a/0x120 [hid]
+ hid_report_raw_event+0x3ea/0x440 [hid]
+ hid_input_report+0x112/0x190 [hid]
+ hid_irq_in+0xc2/0x260 [usbhid]
+ __usb_hcd_giveback_urb+0x72/0x120
+ usb_giveback_urb_bh+0x9e/0xe0
+ tasklet_hi_action+0xf8/0x100
+ __do_softirq+0x114/0x2c0
+ irq_exit+0xa5/0xb0
+ do_IRQ+0x5a/0xf0
+ ret_from_intr+0x0/0x30
+ cpuidle_enter+0x17/0x20
+ cpu_startup_entry+0x315/0x3e0
+ rest_init+0x7c/0x80
+ } hitcount: 3 bytes_req: 21 bytes_alloc: 24
+ { stacktrace:
+ __kmalloc_track_caller+0x10b/0x1a0
+ kmemdup+0x20/0x50
+ hidraw_report_event+0x8a/0x120 [hid]
+ hid_report_raw_event+0x3ea/0x440 [hid]
+ hid_input_report+0x112/0x190 [hid]
+ hid_irq_in+0xc2/0x260 [usbhid]
+ __usb_hcd_giveback_urb+0x72/0x120
+ usb_giveback_urb_bh+0x9e/0xe0
+ tasklet_hi_action+0xf8/0x100
+ __do_softirq+0x114/0x2c0
+ irq_exit+0xa5/0xb0
+ do_IRQ+0x5a/0xf0
+ ret_from_intr+0x0/0x30
+ } hitcount: 3 bytes_req: 21 bytes_alloc: 24
+ { stacktrace:
+ kmem_cache_alloc_trace+0xeb/0x150
+ aa_alloc_task_context+0x27/0x40
+ apparmor_cred_prepare+0x1f/0x50
+ security_prepare_creds+0x16/0x20
+ prepare_creds+0xdf/0x1a0
+ SyS_capset+0xb5/0x200
+ system_call_fastpath+0x12/0x6a
+ } hitcount: 1 bytes_req: 32 bytes_alloc: 32
+ .
+ .
+ .
+ { stacktrace:
+ __kmalloc+0x11b/0x1b0
+ i915_gem_execbuffer2+0x6c/0x2c0 [i915]
+ drm_ioctl+0x349/0x670 [drm]
+ do_vfs_ioctl+0x2f0/0x4f0
+ SyS_ioctl+0x81/0xa0
+ system_call_fastpath+0x12/0x6a
+ } hitcount: 17726 bytes_req: 13944120 bytes_alloc: 19593808
+ { stacktrace:
+ __kmalloc+0x11b/0x1b0
+ load_elf_phdrs+0x76/0xa0
+ load_elf_binary+0x102/0x1650
+ search_binary_handler+0x97/0x1d0
+ do_execveat_common.isra.34+0x551/0x6e0
+ SyS_execve+0x3a/0x50
+ return_from_execve+0x0/0x23
+ } hitcount: 33348 bytes_req: 17152128 bytes_alloc: 20226048
+ { stacktrace:
+ kmem_cache_alloc_trace+0xeb/0x150
+ apparmor_file_alloc_security+0x27/0x40
+ security_file_alloc+0x16/0x20
+ get_empty_filp+0x93/0x1c0
+ path_openat+0x31/0x5f0
+ do_filp_open+0x3a/0x90
+ do_sys_open+0x128/0x220
+ SyS_open+0x1e/0x20
+ system_call_fastpath+0x12/0x6a
+ } hitcount: 4766422 bytes_req: 9532844 bytes_alloc: 38131376
+ { stacktrace:
+ __kmalloc+0x11b/0x1b0
+ seq_buf_alloc+0x1b/0x50
+ seq_read+0x2cc/0x370
+ proc_reg_read+0x3d/0x80
+ __vfs_read+0x28/0xe0
+ vfs_read+0x86/0x140
+ SyS_read+0x46/0xb0
+ system_call_fastpath+0x12/0x6a
+ } hitcount: 19133 bytes_req: 78368768 bytes_alloc: 78368768
+
+ Totals:
+ Hits: 6085872
+ Entries: 253
+ Dropped: 0
+
+ If you key a hist trigger on common_pid, for example in order to
+ gather and display sorted totals for each process, you can use the
+ special .execname modifier to display the executable names for the
+ processes in the table rather than raw pids. The example below
+ keeps a per-process sum of total bytes read:
+
+ # echo 'hist:key=common_pid.execname:val=count:sort=count.descending' > \
+ /sys/kernel/debug/tracing/events/syscalls/sys_enter_read/trigger
+
+ # cat /sys/kernel/debug/tracing/events/syscalls/sys_enter_read/hist
+ # trigger info: hist:keys=common_pid.execname:vals=count:sort=count.descending:size=2048 [active]
+
+ { common_pid: gnome-terminal [ 3196] } hitcount: 280 count: 1093512
+ { common_pid: Xorg [ 1309] } hitcount: 525 count: 256640
+ { common_pid: compiz [ 2889] } hitcount: 59 count: 254400
+ { common_pid: bash [ 8710] } hitcount: 3 count: 66369
+ { common_pid: dbus-daemon-lau [ 8703] } hitcount: 49 count: 47739
+ { common_pid: irqbalance [ 1252] } hitcount: 27 count: 27648
+ { common_pid: 01ifupdown [ 8705] } hitcount: 3 count: 17216
+ { common_pid: dbus-daemon [ 772] } hitcount: 10 count: 12396
+ { common_pid: Socket Thread [ 8342] } hitcount: 11 count: 11264
+ { common_pid: nm-dhcp-client. [ 8701] } hitcount: 6 count: 7424
+ { common_pid: gmain [ 1315] } hitcount: 18 count: 6336
+ .
+ .
+ .
+ { common_pid: postgres [ 1892] } hitcount: 2 count: 32
+ { common_pid: postgres [ 1891] } hitcount: 2 count: 32
+ { common_pid: gmain [ 8704] } hitcount: 2 count: 32
+ { common_pid: upstart-dbus-br [ 2740] } hitcount: 21 count: 21
+ { common_pid: nm-dispatcher.a [ 8696] } hitcount: 1 count: 16
+ { common_pid: indicator-datet [ 2904] } hitcount: 1 count: 16
+ { common_pid: gdbus [ 2998] } hitcount: 1 count: 16
+ { common_pid: rtkit-daemon [ 2052] } hitcount: 1 count: 8
+ { common_pid: init [ 1] } hitcount: 2 count: 2
+
+ Totals:
+ Hits: 2116
+ Entries: 51
+ Dropped: 0
+
+ Similarly, if you key a hist trigger on syscall id, for example to
+ gather and display a list of systemwide syscall hits, you can use
+ the special .syscall modifier to display the syscall names rather
+ than raw ids. The example below keeps a running total of syscall
+ counts for the system during the run:
+
+ # echo 'hist:key=id.syscall:val=hitcount' > \
+ /sys/kernel/debug/tracing/events/raw_syscalls/sys_enter/trigger
+
+ # cat /sys/kernel/debug/tracing/events/raw_syscalls/sys_enter/hist
+ # trigger info: hist:keys=id.syscall:vals=hitcount:sort=hitcount:size=2048 [active]
+
+ { id: sys_fsync [ 74] } hitcount: 1
+ { id: sys_newuname [ 63] } hitcount: 1
+ { id: sys_prctl [157] } hitcount: 1
+ { id: sys_statfs [137] } hitcount: 1
+ { id: sys_symlink [ 88] } hitcount: 1
+ { id: sys_sendmmsg [307] } hitcount: 1
+ { id: sys_semctl [ 66] } hitcount: 1
+ { id: sys_readlink [ 89] } hitcount: 3
+ { id: sys_bind [ 49] } hitcount: 3
+ { id: sys_getsockname [ 51] } hitcount: 3
+ { id: sys_unlink [ 87] } hitcount: 3
+ { id: sys_rename [ 82] } hitcount: 4
+ { id: unknown_syscall [ 58] } hitcount: 4
+ { id: sys_connect [ 42] } hitcount: 4
+ { id: sys_getpid [ 39] } hitcount: 4
+ .
+ .
+ .
+ { id: sys_rt_sigprocmask [ 14] } hitcount: 952
+ { id: sys_futex [202] } hitcount: 1534
+ { id: sys_write [ 1] } hitcount: 2689
+ { id: sys_setitimer [ 38] } hitcount: 2797
+ { id: sys_read [ 0] } hitcount: 3202
+ { id: sys_select [ 23] } hitcount: 3773
+ { id: sys_writev [ 20] } hitcount: 4531
+ { id: sys_poll [ 7] } hitcount: 8314
+ { id: sys_recvmsg [ 47] } hitcount: 13738
+ { id: sys_ioctl [ 16] } hitcount: 21843
+
+ Totals:
+ Hits: 67612
+ Entries: 72
+ Dropped: 0
+
+ The syscall counts above provide a rough overall picture of system
+ call activity on the system; we can see for example that the most
+ popular system call on this system was the 'sys_ioctl' system call.
+
+ We can use 'compound' keys to refine that number and provide some
+ further insight as to which processes exactly contribute to the
+ overall ioctl count.
+
+ The command below keeps a hitcount for every unique combination of
+ system call id and pid - the end result is essentially a table
+ that keeps a per-pid sum of system call hits. The results are
+ sorted using the system call id as the primary key, and the
+ hitcount sum as the secondary key:
+
+ # echo 'hist:key=id.syscall,common_pid.execname:val=hitcount:sort=id,hitcount' > \
+ /sys/kernel/debug/tracing/events/raw_syscalls/sys_enter/trigger
+
+ # cat /sys/kernel/debug/tracing/events/raw_syscalls/sys_enter/hist
+ # trigger info: hist:keys=id.syscall,common_pid.execname:vals=hitcount:sort=id.syscall,hitcount:size=2048 [active]
+
+ { id: sys_read [ 0], common_pid: rtkit-daemon [ 1877] } hitcount: 1
+ { id: sys_read [ 0], common_pid: gdbus [ 2976] } hitcount: 1
+ { id: sys_read [ 0], common_pid: console-kit-dae [ 3400] } hitcount: 1
+ { id: sys_read [ 0], common_pid: postgres [ 1865] } hitcount: 1
+ { id: sys_read [ 0], common_pid: deja-dup-monito [ 3543] } hitcount: 2
+ { id: sys_read [ 0], common_pid: NetworkManager [ 890] } hitcount: 2
+ { id: sys_read [ 0], common_pid: evolution-calen [ 3048] } hitcount: 2
+ { id: sys_read [ 0], common_pid: postgres [ 1864] } hitcount: 2
+ { id: sys_read [ 0], common_pid: nm-applet [ 3022] } hitcount: 2
+ { id: sys_read [ 0], common_pid: whoopsie [ 1212] } hitcount: 2
+ .
+ .
+ .
+ { id: sys_ioctl [ 16], common_pid: bash [ 8479] } hitcount: 1
+ { id: sys_ioctl [ 16], common_pid: bash [ 3472] } hitcount: 12
+ { id: sys_ioctl [ 16], common_pid: gnome-terminal [ 3199] } hitcount: 16
+ { id: sys_ioctl [ 16], common_pid: Xorg [ 1267] } hitcount: 1808
+ { id: sys_ioctl [ 16], common_pid: compiz [ 2994] } hitcount: 5580
+ .
+ .
+ .
+ { id: sys_waitid [247], common_pid: upstart-dbus-br [ 2690] } hitcount: 3
+ { id: sys_waitid [247], common_pid: upstart-dbus-br [ 2688] } hitcount: 16
+ { id: sys_inotify_add_watch [254], common_pid: gmain [ 975] } hitcount: 2
+ { id: sys_inotify_add_watch [254], common_pid: gmain [ 3204] } hitcount: 4
+ { id: sys_inotify_add_watch [254], common_pid: gmain [ 2888] } hitcount: 4
+ { id: sys_inotify_add_watch [254], common_pid: gmain [ 3003] } hitcount: 4
+ { id: sys_inotify_add_watch [254], common_pid: gmain [ 2873] } hitcount: 4
+ { id: sys_inotify_add_watch [254], common_pid: gmain [ 3196] } hitcount: 6
+ { id: sys_openat [257], common_pid: java [ 2623] } hitcount: 2
+ { id: sys_eventfd2 [290], common_pid: ibus-ui-gtk3 [ 2760] } hitcount: 4
+ { id: sys_eventfd2 [290], common_pid: compiz [ 2994] } hitcount: 6
+
+ Totals:
+ Hits: 31536
+ Entries: 323
+ Dropped: 0
+
+ The above list does give us a breakdown of the ioctl syscall by
+ pid, but it also gives us quite a bit more than that, which we
+ don't really care about at the moment. Since we know the syscall
+ id for sys_ioctl (16, displayed next to the sys_ioctl name), we
+ can use that to filter out all the other syscalls:
+
+ # echo 'hist:key=id.syscall,common_pid.execname:val=hitcount:sort=id,hitcount if id == 16' > \
+ /sys/kernel/debug/tracing/events/raw_syscalls/sys_enter/trigger
+
+ # cat /sys/kernel/debug/tracing/events/raw_syscalls/sys_enter/hist
+ # trigger info: hist:keys=id.syscall,common_pid.execname:vals=hitcount:sort=id.syscall,hitcount:size=2048 if id == 16 [active]
+
+ { id: sys_ioctl [ 16], common_pid: gmain [ 2769] } hitcount: 1
+ { id: sys_ioctl [ 16], common_pid: evolution-addre [ 8571] } hitcount: 1
+ { id: sys_ioctl [ 16], common_pid: gmain [ 3003] } hitcount: 1
+ { id: sys_ioctl [ 16], common_pid: gmain [ 2781] } hitcount: 1
+ { id: sys_ioctl [ 16], common_pid: gmain [ 2829] } hitcount: 1
+ { id: sys_ioctl [ 16], common_pid: bash [ 8726] } hitcount: 1
+ { id: sys_ioctl [ 16], common_pid: bash [ 8508] } hitcount: 1
+ { id: sys_ioctl [ 16], common_pid: gmain [ 2970] } hitcount: 1
+ { id: sys_ioctl [ 16], common_pid: gmain [ 2768] } hitcount: 1
+ .
+ .
+ .
+ { id: sys_ioctl [ 16], common_pid: pool [ 8559] } hitcount: 45
+ { id: sys_ioctl [ 16], common_pid: pool [ 8555] } hitcount: 48
+ { id: sys_ioctl [ 16], common_pid: pool [ 8551] } hitcount: 48
+ { id: sys_ioctl [ 16], common_pid: avahi-daemon [ 896] } hitcount: 66
+ { id: sys_ioctl [ 16], common_pid: Xorg [ 1267] } hitcount: 26674
+ { id: sys_ioctl [ 16], common_pid: compiz [ 2994] } hitcount: 73443
+
+ Totals:
+ Hits: 101162
+ Entries: 103
+ Dropped: 0
+
+ The above output shows that 'compiz' and 'Xorg' are far and away
+ the heaviest ioctl callers (do they really need to do so much of
+ that?).
+
+ The compound key examples used a key and a sum value (hitcount) to
+ sort the output, but we can just as easily use two keys instead.
+ Here's an example where we use a compound key composed of the
+ common_pid and size event fields. Sorting with pid as the primary
+ key and 'size' as the secondary key allows us to display an
+ ordered summary of the recvfrom sizes, with counts, received by
+ each process:
+
+ # echo 'hist:key=common_pid.execname,size:val=hitcount:sort=common_pid,size' > \
+ /sys/kernel/debug/tracing/events/syscalls/sys_enter_recvfrom/trigger
+
+ # cat /sys/kernel/debug/tracing/events/syscalls/sys_enter_recvfrom/hist
+ # trigger info: hist:keys=common_pid.execname,size:vals=hitcount:sort=common_pid.execname,size:size=2048 [active]
+
+ { common_pid: smbd [ 784], size: 4 } hitcount: 1
+ { common_pid: dnsmasq [ 1412], size: 4096 } hitcount: 672
+ { common_pid: postgres [ 1796], size: 1000 } hitcount: 6
+ { common_pid: postgres [ 1867], size: 1000 } hitcount: 10
+ { common_pid: bamfdaemon [ 2787], size: 28 } hitcount: 2
+ { common_pid: bamfdaemon [ 2787], size: 14360 } hitcount: 1
+ { common_pid: compiz [ 2994], size: 8 } hitcount: 1
+ { common_pid: compiz [ 2994], size: 20 } hitcount: 11
+ { common_pid: gnome-terminal [ 3199], size: 4 } hitcount: 2
+ { common_pid: firefox [ 8817], size: 4 } hitcount: 1
+ { common_pid: firefox [ 8817], size: 8 } hitcount: 5
+ { common_pid: firefox [ 8817], size: 588 } hitcount: 2
+ { common_pid: firefox [ 8817], size: 628 } hitcount: 1
+ { common_pid: firefox [ 8817], size: 6944 } hitcount: 1
+ { common_pid: firefox [ 8817], size: 408880 } hitcount: 2
+ { common_pid: firefox [ 8822], size: 8 } hitcount: 2
+ { common_pid: firefox [ 8822], size: 160 } hitcount: 2
+ { common_pid: firefox [ 8822], size: 320 } hitcount: 2
+ { common_pid: firefox [ 8822], size: 352 } hitcount: 1
+ .
+ .
+ .
+ { common_pid: pool [ 8923], size: 1960 } hitcount: 10
+ { common_pid: pool [ 8923], size: 2048 } hitcount: 10
+ { common_pid: pool [ 8924], size: 1960 } hitcount: 10
+ { common_pid: pool [ 8924], size: 2048 } hitcount: 10
+ { common_pid: pool [ 8928], size: 1964 } hitcount: 4
+ { common_pid: pool [ 8928], size: 1965 } hitcount: 2
+ { common_pid: pool [ 8928], size: 2048 } hitcount: 6
+ { common_pid: pool [ 8929], size: 1982 } hitcount: 1
+ { common_pid: pool [ 8929], size: 2048 } hitcount: 1
+
+ Totals:
+ Hits: 2016
+ Entries: 224
+ Dropped: 0
+
+ The above example also illustrates the fact that although a compound
+ key is treated as a single entity for hashing purposes, the sub-keys
+ it's composed of can be accessed independently.
+
+ The next example uses a string field as the hash key and
+ demonstrates how you can manually pause and continue a hist trigger.
+ In this example, we'll aggregate fork counts and don't expect a
+ large number of entries in the hash table, so we'll drop it to a
+ much smaller number, say 256:
+
+ # echo 'hist:key=child_comm:val=hitcount:size=256' > \
+ /sys/kernel/debug/tracing/events/sched/sched_process_fork/trigger
+
+ # cat /sys/kernel/debug/tracing/events/sched/sched_process_fork/hist
+ # trigger info: hist:keys=child_comm:vals=hitcount:sort=hitcount:size=256 [active]
+
+ { child_comm: dconf worker } hitcount: 1
+ { child_comm: ibus-daemon } hitcount: 1
+ { child_comm: whoopsie } hitcount: 1
+ { child_comm: smbd } hitcount: 1
+ { child_comm: gdbus } hitcount: 1
+ { child_comm: kthreadd } hitcount: 1
+ { child_comm: dconf worker } hitcount: 1
+ { child_comm: evolution-alarm } hitcount: 2
+ { child_comm: Socket Thread } hitcount: 2
+ { child_comm: postgres } hitcount: 2
+ { child_comm: bash } hitcount: 3
+ { child_comm: compiz } hitcount: 3
+ { child_comm: evolution-sourc } hitcount: 4
+ { child_comm: dhclient } hitcount: 4
+ { child_comm: pool } hitcount: 5
+ { child_comm: nm-dispatcher.a } hitcount: 8
+ { child_comm: firefox } hitcount: 8
+ { child_comm: dbus-daemon } hitcount: 8
+ { child_comm: glib-pacrunner } hitcount: 10
+ { child_comm: evolution } hitcount: 23
+
+ Totals:
+ Hits: 89
+ Entries: 20
+ Dropped: 0
+
+ If we want to pause the hist trigger, we can simply append :pause to
+ the command that started the trigger. Notice that the trigger info
+ displays as [paused]:
+
+ # echo 'hist:key=child_comm:val=hitcount:size=256:pause' > \
+ /sys/kernel/debug/tracing/events/sched/sched_process_fork/trigger
+
+ # cat /sys/kernel/debug/tracing/events/sched/sched_process_fork/hist
+ # trigger info: hist:keys=child_comm:vals=hitcount:sort=hitcount:size=256 [paused]
+
+ { child_comm: dconf worker } hitcount: 1
+ { child_comm: kthreadd } hitcount: 1
+ { child_comm: dconf worker } hitcount: 1
+ { child_comm: gdbus } hitcount: 1
+ { child_comm: ibus-daemon } hitcount: 1
+ { child_comm: Socket Thread } hitcount: 2
+ { child_comm: evolution-alarm } hitcount: 2
+ { child_comm: smbd } hitcount: 2
+ { child_comm: bash } hitcount: 3
+ { child_comm: whoopsie } hitcount: 3
+ { child_comm: compiz } hitcount: 3
+ { child_comm: evolution-sourc } hitcount: 4
+ { child_comm: pool } hitcount: 5
+ { child_comm: postgres } hitcount: 6
+ { child_comm: firefox } hitcount: 8
+ { child_comm: dhclient } hitcount: 10
+ { child_comm: emacs } hitcount: 12
+ { child_comm: dbus-daemon } hitcount: 20
+ { child_comm: nm-dispatcher.a } hitcount: 20
+ { child_comm: evolution } hitcount: 35
+ { child_comm: glib-pacrunner } hitcount: 59
+
+ Totals:
+ Hits: 199
+ Entries: 21
+ Dropped: 0
+
+ To manually continue having the trigger aggregate events, append
+ :cont instead. Notice that the trigger info displays as [active]
+ again, and the data has changed:
+
+ # echo 'hist:key=child_comm:val=hitcount:size=256:cont' > \
+ /sys/kernel/debug/tracing/events/sched/sched_process_fork/trigger
+
+ # cat /sys/kernel/debug/tracing/events/sched/sched_process_fork/hist
+ # trigger info: hist:keys=child_comm:vals=hitcount:sort=hitcount:size=256 [active]
+
+ { child_comm: dconf worker } hitcount: 1
+ { child_comm: dconf worker } hitcount: 1
+ { child_comm: kthreadd } hitcount: 1
+ { child_comm: gdbus } hitcount: 1
+ { child_comm: ibus-daemon } hitcount: 1
+ { child_comm: Socket Thread } hitcount: 2
+ { child_comm: evolution-alarm } hitcount: 2
+ { child_comm: smbd } hitcount: 2
+ { child_comm: whoopsie } hitcount: 3
+ { child_comm: compiz } hitcount: 3
+ { child_comm: evolution-sourc } hitcount: 4
+ { child_comm: bash } hitcount: 5
+ { child_comm: pool } hitcount: 5
+ { child_comm: postgres } hitcount: 6
+ { child_comm: firefox } hitcount: 8
+ { child_comm: dhclient } hitcount: 11
+ { child_comm: emacs } hitcount: 12
+ { child_comm: dbus-daemon } hitcount: 22
+ { child_comm: nm-dispatcher.a } hitcount: 22
+ { child_comm: evolution } hitcount: 35
+ { child_comm: glib-pacrunner } hitcount: 59
+
+ Totals:
+ Hits: 206
+ Entries: 21
+ Dropped: 0
+
+ The previous example showed how to start and stop a hist trigger by
+ appending 'pause' and 'continue' to the hist trigger command. A
+ hist trigger can also be started in a paused state by initially
+ starting the trigger with ':pause' appended. This allows you to
+ start the trigger only when you're ready to start collecting data
+ and not before. For example, start the trigger in a paused state,
+ then unpause it and do something you want to measure, then pause the
+ trigger when done.
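+
+ As an illustrative sequence, using the fork trigger from above:
+
+ # echo 'hist:key=child_comm:val=hitcount:size=256:pause' > \
+ /sys/kernel/debug/tracing/events/sched/sched_process_fork/trigger
+
+ (get ready to measure)
+
+ # echo 'hist:key=child_comm:val=hitcount:size=256:cont' > \
+ /sys/kernel/debug/tracing/events/sched/sched_process_fork/trigger
+
+ (do the thing you want to measure)
+
+ # echo 'hist:key=child_comm:val=hitcount:size=256:pause' > \
+ /sys/kernel/debug/tracing/events/sched/sched_process_fork/trigger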
+
+ Of course, doing this manually can be difficult and error-prone, but
+ it is possible to automatically start and stop a hist trigger based
+ on some condition, via the enable_hist and disable_hist triggers.
+
+ For example, suppose we wanted to take a look at the relative
+ weights in terms of skb length for each callpath that leads to a
+ netif_receive_skb event when downloading a decent-sized file using
+ wget.
+
+ First we set up an initially paused stacktrace trigger on the
+ netif_receive_skb event:
+
+ # echo 'hist:key=stacktrace:vals=len:pause' > \
+ /sys/kernel/debug/tracing/events/net/netif_receive_skb/trigger
+
+ Next, we set up an 'enable_hist' trigger on the sched_process_exec
+ event, with an 'if filename==/usr/bin/wget' filter. The effect of
+ this new trigger is that it will 'unpause' the hist trigger we just
+ set up on netif_receive_skb if and only if it sees a
+ sched_process_exec event with a filename of '/usr/bin/wget'. When
+ that happens, all netif_receive_skb events are aggregated into a
+ hash table keyed on stacktrace:
+
+ # echo 'enable_hist:net:netif_receive_skb if filename==/usr/bin/wget' > \
+ /sys/kernel/debug/tracing/events/sched/sched_process_exec/trigger
+
+ The aggregation continues until the netif_receive_skb hist trigger
+ is paused again, which is what the following disable_hist trigger does by
+ creating a similar setup on the sched_process_exit event, using the
+ filter 'comm==wget':
+
+ # echo 'disable_hist:net:netif_receive_skb if comm==wget' > \
+ /sys/kernel/debug/tracing/events/sched/sched_process_exit/trigger
+
+ Whenever a process exits and its comm field matches the 'wget' of
+ the disable_hist trigger's filter, the netif_receive_skb hist
+ trigger is disabled.
+
+ The overall effect is that netif_receive_skb events are aggregated
+ into the hash table for only the duration of the wget. Executing a
+ wget command and then listing the 'hist' file will display the
+ output generated by the wget command:
+
+ $ wget https://www.kernel.org/pub/linux/kernel/v3.x/patch-3.19.xz
+
+ # cat /sys/kernel/debug/tracing/events/net/netif_receive_skb/hist
+ # trigger info: hist:keys=stacktrace:vals=len:sort=hitcount:size=2048 [paused]
+
+ { stacktrace:
+ __netif_receive_skb_core+0x46d/0x990
+ __netif_receive_skb+0x18/0x60
+ netif_receive_skb_internal+0x23/0x90
+ napi_gro_receive+0xc8/0x100
+ ieee80211_deliver_skb+0xd6/0x270 [mac80211]
+ ieee80211_rx_handlers+0xccf/0x22f0 [mac80211]
+ ieee80211_prepare_and_rx_handle+0x4e7/0xc40 [mac80211]
+ ieee80211_rx+0x31d/0x900 [mac80211]
+ iwlagn_rx_reply_rx+0x3db/0x6f0 [iwldvm]
+ iwl_rx_dispatch+0x8e/0xf0 [iwldvm]
+ iwl_pcie_irq_handler+0xe3c/0x12f0 [iwlwifi]
+ irq_thread_fn+0x20/0x50
+ irq_thread+0x11f/0x150
+ kthread+0xd2/0xf0
+ ret_from_fork+0x42/0x70
+ } hitcount: 85 len: 28884
+ { stacktrace:
+ __netif_receive_skb_core+0x46d/0x990
+ __netif_receive_skb+0x18/0x60
+ netif_receive_skb_internal+0x23/0x90
+ napi_gro_complete+0xa4/0xe0
+ dev_gro_receive+0x23a/0x360
+ napi_gro_receive+0x30/0x100
+ ieee80211_deliver_skb+0xd6/0x270 [mac80211]
+ ieee80211_rx_handlers+0xccf/0x22f0 [mac80211]
+ ieee80211_prepare_and_rx_handle+0x4e7/0xc40 [mac80211]
+ ieee80211_rx+0x31d/0x900 [mac80211]
+ iwlagn_rx_reply_rx+0x3db/0x6f0 [iwldvm]
+ iwl_rx_dispatch+0x8e/0xf0 [iwldvm]
+ iwl_pcie_irq_handler+0xe3c/0x12f0 [iwlwifi]
+ irq_thread_fn+0x20/0x50
+ irq_thread+0x11f/0x150
+ kthread+0xd2/0xf0
+ } hitcount: 98 len: 664329
+ { stacktrace:
+ __netif_receive_skb_core+0x46d/0x990
+ __netif_receive_skb+0x18/0x60
+ process_backlog+0xa8/0x150
+ net_rx_action+0x15d/0x340
+ __do_softirq+0x114/0x2c0
+ do_softirq_own_stack+0x1c/0x30
+ do_softirq+0x65/0x70
+ __local_bh_enable_ip+0xb5/0xc0
+ ip_finish_output+0x1f4/0x840
+ ip_output+0x6b/0xc0
+ ip_local_out_sk+0x31/0x40
+ ip_send_skb+0x1a/0x50
+ udp_send_skb+0x173/0x2a0
+ udp_sendmsg+0x2bf/0x9f0
+ inet_sendmsg+0x64/0xa0
+ sock_sendmsg+0x3d/0x50
+ } hitcount: 115 len: 13030
+ { stacktrace:
+ __netif_receive_skb_core+0x46d/0x990
+ __netif_receive_skb+0x18/0x60
+ netif_receive_skb_internal+0x23/0x90
+ napi_gro_complete+0xa4/0xe0
+ napi_gro_flush+0x6d/0x90
+ iwl_pcie_irq_handler+0x92a/0x12f0 [iwlwifi]
+ irq_thread_fn+0x20/0x50
+ irq_thread+0x11f/0x150
+ kthread+0xd2/0xf0
+ ret_from_fork+0x42/0x70
+ } hitcount: 934 len: 5512212
+
+ Totals:
+ Hits: 1232
+ Entries: 4
+ Dropped: 0
+
+ The above shows all the netif_receive_skb callpaths and their total
+ lengths for the duration of the wget command.
+
+ The 'clear' hist trigger param can be used to clear the hash table.
+ Suppose we wanted to try another run of the previous example but
+ this time also wanted to see the complete list of events that went
+ into the histogram. In order to avoid having to set everything up
+ again, we can just clear the histogram first:
+
+ # echo 'hist:key=stacktrace:vals=len:clear' > \
+ /sys/kernel/debug/tracing/events/net/netif_receive_skb/trigger
+
+ Just to verify that it is in fact cleared, here's what we now see in
+ the hist file:
+
+ # cat /sys/kernel/debug/tracing/events/net/netif_receive_skb/hist
+ # trigger info: hist:keys=stacktrace:vals=len:sort=hitcount:size=2048 [paused]
+
+ Totals:
+ Hits: 0
+ Entries: 0
+ Dropped: 0
+
+ Since we want to see the detailed list of every netif_receive_skb
+ event occurring during the new run, which are in fact the same
+ events being aggregated into the hash table, we add
+ 'enable_event'/'disable_event' triggers to the triggering
+ sched_process_exec and sched_process_exit events, as such:
+
+ # echo 'enable_event:net:netif_receive_skb if filename==/usr/bin/wget' > \
+ /sys/kernel/debug/tracing/events/sched/sched_process_exec/trigger
+
+ # echo 'disable_event:net:netif_receive_skb if comm==wget' > \
+ /sys/kernel/debug/tracing/events/sched/sched_process_exit/trigger
+
+ If you read the trigger files for the sched_process_exec and
+ sched_process_exit triggers, you should see two triggers for each:
+ one enabling/disabling the hist aggregation and the other
+ enabling/disabling the logging of events:
+
+ # cat /sys/kernel/debug/tracing/events/sched/sched_process_exec/trigger
+ enable_event:net:netif_receive_skb:unlimited if filename==/usr/bin/wget
+ enable_hist:net:netif_receive_skb:unlimited if filename==/usr/bin/wget
+
+ # cat /sys/kernel/debug/tracing/events/sched/sched_process_exit/trigger
+ disable_event:net:netif_receive_skb:unlimited if comm==wget
+ disable_hist:net:netif_receive_skb:unlimited if comm==wget
+
+ In other words, whenever either of the sched_process_exec or
+ sched_process_exit events is hit and matches 'wget', it enables or
+ disables both the histogram and the event log, and what you end up
+ with is a hash table and set of events just covering the specified
+ duration. Run the wget command again:
+
+ $ wget https://www.kernel.org/pub/linux/kernel/v3.x/patch-3.19.xz
+
+ Displaying the 'hist' file should show something similar to what you
+ saw in the last run, but this time you should also see the
+ individual events in the trace file:
+
+ # cat /sys/kernel/debug/tracing/trace
+
+ # tracer: nop
+ #
+ # entries-in-buffer/entries-written: 183/1426 #P:4
+ #
+ # _-----=> irqs-off
+ # / _----=> need-resched
+ # | / _---=> hardirq/softirq
+ # || / _--=> preempt-depth
+ # ||| / delay
+ # TASK-PID CPU# |||| TIMESTAMP FUNCTION
+ # | | | |||| | |
+ wget-15108 [000] ..s1 31769.606929: netif_receive_skb: dev=lo skbaddr=ffff88009c353100 len=60
+ wget-15108 [000] ..s1 31769.606999: netif_receive_skb: dev=lo skbaddr=ffff88009c353200 len=60
+ dnsmasq-1382 [000] ..s1 31769.677652: netif_receive_skb: dev=lo skbaddr=ffff88009c352b00 len=130
+ dnsmasq-1382 [000] ..s1 31769.685917: netif_receive_skb: dev=lo skbaddr=ffff88009c352200 len=138
+ ##### CPU 2 buffer started ####
+ irq/29-iwlwifi-559 [002] ..s. 31772.031529: netif_receive_skb: dev=wlan0 skbaddr=ffff88009d433d00 len=2948
+ irq/29-iwlwifi-559 [002] ..s. 31772.031572: netif_receive_skb: dev=wlan0 skbaddr=ffff88009d432200 len=1500
+ irq/29-iwlwifi-559 [002] ..s. 31772.032196: netif_receive_skb: dev=wlan0 skbaddr=ffff88009d433100 len=2948
+ irq/29-iwlwifi-559 [002] ..s. 31772.032761: netif_receive_skb: dev=wlan0 skbaddr=ffff88009d433000 len=2948
+ irq/29-iwlwifi-559 [002] ..s. 31772.033220: netif_receive_skb: dev=wlan0 skbaddr=ffff88009d432e00 len=1500
+ .
+ .
+ .
--
1.9.3

2015-07-16 17:50:13

by Peter Zijlstra

[permalink] [raw]
Subject: Re: [PATCH v9 07/22] tracing: Add lock-free tracing_map

On Thu, Jul 16, 2015 at 12:22:40PM -0500, Tom Zanussi wrote:
> + for (i = 0; i < elt->map->n_fields; i++) {
> + atomic64_set(&dup_elt->fields[i].sum,
> + atomic64_read(&elt->fields[i].sum));
> + dup_elt->fields[i].cmp_fn = elt->fields[i].cmp_fn;
> + }
> +
> + return dup_elt;
> +}

So there is a lot of atomic64_{set,read}() in this patch set, what kind
of magic properties do you assume they have?

Note that atomic*_{set,read}() are weaker than {WRITE,READ}_ONCE(), so
if you're assuming they do that, you're mistaken -- although it is on a
TODO list someplace to go fix that.

2015-07-16 18:03:59

by Peter Zijlstra

[permalink] [raw]
Subject: Re: [PATCH v9 07/22] tracing: Add lock-free tracing_map

On Thu, Jul 16, 2015 at 12:22:40PM -0500, Tom Zanussi wrote:
> + map->map = kcalloc(map->map_size, sizeof(struct tracing_map_entry),
> + GFP_KERNEL);

In a later email you state the max map size to be 128k, with a 16 byte
struct, that is 2m of memory for this allocation.

Isn't that a tad big for a kmalloc() ?

2015-07-16 21:33:50

by Tom Zanussi

[permalink] [raw]
Subject: Re: [PATCH v9 07/22] tracing: Add lock-free tracing_map

On Thu, 2015-07-16 at 20:03 +0200, Peter Zijlstra wrote:
> On Thu, Jul 16, 2015 at 12:22:40PM -0500, Tom Zanussi wrote:
> > + map->map = kcalloc(map->map_size, sizeof(struct tracing_map_entry),
> > + GFP_KERNEL);
>
> In a later email you state the max map size to be 128k, with a 16 byte
> struct, that is 2m of memory for this allocation.
>
> Isn't that a tad big for a kmalloc() ?

Yeah, that is a bit big for kmalloc (actually it's double that), though
I never ran into problems in my testing (of course that would depend on
the state of the system, and I mainly tested on a newly booted system).

It would probably make sense to make it page-based, which means a bit
more complicated mapping for the array (can't use vmalloc here) but that
shouldn't be too big a deal.

Tom

2015-07-16 21:41:49

by Tom Zanussi

[permalink] [raw]
Subject: Re: [PATCH v9 07/22] tracing: Add lock-free tracing_map

On Thu, 2015-07-16 at 19:49 +0200, Peter Zijlstra wrote:
> On Thu, Jul 16, 2015 at 12:22:40PM -0500, Tom Zanussi wrote:
> > + for (i = 0; i < elt->map->n_fields; i++) {
> > + atomic64_set(&dup_elt->fields[i].sum,
> > + atomic64_read(&elt->fields[i].sum));
> > + dup_elt->fields[i].cmp_fn = elt->fields[i].cmp_fn;
> > + }
> > +
> > + return dup_elt;
> > +}
>
> So there is a lot of atomic64_{set,read}() in this patch set, what kind
> of magic properties do you assume they have?
>
> Note that atomic*_{set,read}() are weaker than {WRITE,READ}_ONCE(), so
> if you're assuming they do that, you're mistaken -- although it is on a
> TODO list someplace to go fix that.

Not assuming any magic properties - I just need an atomic 64-bit counter
for the sums and that's the API for setting/reading those. When reading
a live trace the exact sum you get is kind of arbitrary..

Tom

2015-07-16 22:33:17

by Peter Zijlstra

[permalink] [raw]
Subject: Re: [PATCH v9 07/22] tracing: Add lock-free tracing_map

On Thu, Jul 16, 2015 at 04:41:45PM -0500, Tom Zanussi wrote:
> On Thu, 2015-07-16 at 19:49 +0200, Peter Zijlstra wrote:
> > On Thu, Jul 16, 2015 at 12:22:40PM -0500, Tom Zanussi wrote:
> > > + for (i = 0; i < elt->map->n_fields; i++) {
> > > + atomic64_set(&dup_elt->fields[i].sum,
> > > + atomic64_read(&elt->fields[i].sum));
> > > + dup_elt->fields[i].cmp_fn = elt->fields[i].cmp_fn;
> > > + }
> > > +
> > > + return dup_elt;
> > > +}
> >
> > So there is a lot of atomic64_{set,read}() in this patch set, what kind
> > of magic properties do you assume they have?
> >
> > Note that atomic*_{set,read}() are weaker than {WRITE,READ}_ONCE(), so
> > if you're assuming they do that, you're mistaken -- although it is on a
> > TODO list someplace to go fix that.
>
> Not assuming any magic properties - I just need an atomic 64-bit counter
> for the sums and that's the API for setting/reading those. When reading
> a live trace the exact sum you get is kind of arbitrary..

OK, so atomic64_read() really should provide load consistency (there are
a few archs that lack the READ_ONCE() there).

But the atomic64_set() does not provide store consistency, and in the
above case it looks like the value you're writing is not exposed yet to
concurrency so it doesn't matter how it issues the store.

So as long as you never atomic64_set() a value that is subject to
concurrent modification you should be good.
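
To illustrate (a minimal sketch with made-up names, not code from the
patch): confine atomic64_set() to initialization before the element
is published, and use only RMW updates and reads on the live value:

	#include <linux/atomic.h>
	#include <linux/types.h>

	struct sum_field {
		atomic64_t sum;
	};

	/* Runs before the element is visible to other CPUs, so a
	 * plain atomic64_set() is fine here. */
	static void sum_field_init(struct sum_field *f)
	{
		atomic64_set(&f->sum, 0);
	}

	/* Safe once the element is published - atomic RMW. */
	static void sum_field_add(struct sum_field *f, u64 n)
	{
		atomic64_add(n, &f->sum);
	}

	/* Readers may see a slightly stale sum, which is fine
	 * for live trace output. */
	static u64 sum_field_read(struct sum_field *f)
	{
		return (u64)atomic64_read(&f->sum);
	}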

2015-07-16 23:26:05

by Mathieu Desnoyers

[permalink] [raw]
Subject: Re: [PATCH v9 07/22] tracing: Add lock-free tracing_map

* Tom Zanussi wrote:
>> Add tracing_map, a special-purpose lock-free map for tracing.
>>
>> tracing_map is designed to aggregate or 'sum' one or more values
>> associated with a specific object of type tracing_map_elt, which
>> is associated by the map to a given key.
>>
>> It provides various hooks allowing per-tracer customization and is
>> separated out into a separate file in order to allow it to be shared
>> between multiple tracers, but isn't meant to be generally used outside
>> of that context.
>>
>> The tracing_map implementation was inspired by lock-free map
>> algorithms originated by Dr. Cliff Click:
>>
>> http://www.azulsystems.com/blog/cliff/2007-03-26-non-blocking-hashtable
>> http://www.azulsystems.com/events/javaone_2007/2007_LockFreeHash.pdf

Hi Tom,

First question: what is the rationale for implementing another
hash table from scratch here ? What is missing in the pre-existing
hash table implementations ?

Moreover, you might want to handle the case where jhash() returns
0. AFAIU, there is a race on "insert" in this scenario.

Thanks,

Mathieu

>>
>> Signed-off-by: Tom Zanussi <[email protected]>
>> ---
>> kernel/trace/Makefile | 1 +
>> kernel/trace/tracing_map.c | 935 +++++++++++++++++++++++++++++++++++++++++++++
>> kernel/trace/tracing_map.h | 258 +++++++++++++
>> 3 files changed, 1194 insertions(+)
>> create mode 100644 kernel/trace/tracing_map.c
>> create mode 100644 kernel/trace/tracing_map.h
>>
>> diff --git a/kernel/trace/Makefile b/kernel/trace/Makefile
>> index 9b1044e..3b26cfb 100644
>> --- a/kernel/trace/Makefile
>> +++ b/kernel/trace/Makefile
>> @@ -31,6 +31,7 @@ obj-$(CONFIG_TRACING) += trace_output.o
>> obj-$(CONFIG_TRACING) += trace_seq.o
>> obj-$(CONFIG_TRACING) += trace_stat.o
>> obj-$(CONFIG_TRACING) += trace_printk.o
>> +obj-$(CONFIG_TRACING) += tracing_map.o
>> obj-$(CONFIG_CONTEXT_SWITCH_TRACER) += trace_sched_switch.o
>> obj-$(CONFIG_FUNCTION_TRACER) += trace_functions.o
>> obj-$(CONFIG_IRQSOFF_TRACER) += trace_irqsoff.o
>> diff --git a/kernel/trace/tracing_map.c b/kernel/trace/tracing_map.c
>> new file mode 100644
>> index 0000000..a505025
>> --- /dev/null
>> +++ b/kernel/trace/tracing_map.c
>> @@ -0,0 +1,935 @@
>> +/*
>> + * tracing_map - lock-free map for tracing
>> + *
>> + * This program is free software; you can redistribute it and/or modify
>> + * it under the terms of the GNU General Public License as published by
>> + * the Free Software Foundation; either version 2 of the License, or
>> + * (at your option) any later version.
>> + *
>> + * This program is distributed in the hope that it will be useful,
>> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
>> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
>> + * GNU General Public License for more details.
>> + *
>> + * Copyright (C) 2015 Tom Zanussi <[email protected]>
>> + *
>> + * tracing_map implementation inspired by lock-free map algorithms
>> + * originated by Dr. Cliff Click:
>> + *
>> + * http://www.azulsystems.com/blog/cliff/2007-03-26-non-blocking-hashtable
>> + * http://www.azulsystems.com/events/javaone_2007/2007_LockFreeHash.pdf
>> + */
>> +
>> +#include <linux/slab.h>
>> +#include <linux/jhash.h>
>> +#include <linux/sort.h>
>> +
>> +#include "tracing_map.h"
>> +#include "trace.h"
>> +
>> +/*
>> + * NOTE: For a detailed description of the data structures used by
>> + * these functions (such as tracing_map_elt) please see the overview
>> + * of tracing_map data structures at the beginning of tracing_map.h.
>> + */
>> +
>> +/**
>> + * tracing_map_update_sum - Add a value to a tracing_map_elt's sum field
>> + * @elt: The tracing_map_elt
>> + * @i: The index of the given sum associated with the tracing_map_elt
>> + * @n: The value to add to the sum
>> + *
>> + * Add n to sum i associated with the specified tracing_map_elt
>> + * instance. The index i is the index returned by the call to
>> + * tracing_map_add_sum_field() when the tracing map was set up.
>> + */
>> +void tracing_map_update_sum(struct tracing_map_elt *elt, unsigned int i, u64 n)
>> +{
>> + atomic64_add(n, &elt->fields[i].sum);
>> +}
>> +
>> +/**
>> + * tracing_map_read_sum - Return the value of a tracing_map_elt's sum field
>> + * @elt: The tracing_map_elt
>> + * @i: The index of the given sum associated with the tracing_map_elt
>> + *
>> + * Retrieve the value of the sum i associated with the specified
>> + * tracing_map_elt instance. The index i is the index returned by the
>> + * call to tracing_map_add_sum_field() when the tracing map was set
>> + * up.
>> + *
>> + * Return: The sum associated with field i for elt.
>> + */
>> +u64 tracing_map_read_sum(struct tracing_map_elt *elt, unsigned int i)
>> +{
>> + return (u64)atomic64_read(&elt->fields[i].sum);
>> +}
>> +
>> +int tracing_map_cmp_string(void *val_a, void *val_b)
>> +{
>> + char *a = val_a;
>> + char *b = val_b;
>> +
>> + return strcmp(a, b);
>> +}
>> +
>> +int tracing_map_cmp_none(void *val_a, void *val_b)
>> +{
>> + return 0;
>> +}
>> +
>> +static int tracing_map_cmp_atomic64(void *val_a, void *val_b)
>> +{
>> + u64 a = atomic64_read((atomic64_t *)val_a);
>> + u64 b = atomic64_read((atomic64_t *)val_b);
>> +
>> + return (a > b) ? 1 : ((a < b) ? -1 : 0);
>> +}
>> +
>> +#define DEFINE_TRACING_MAP_CMP_FN(type) \
>> +static int tracing_map_cmp_##type(void *val_a, void *val_b) \
>> +{ \
>> + type a = *(type *)val_a; \
>> + type b = *(type *)val_b; \
>> + \
>> + return (a > b) ? 1 : ((a < b) ? -1 : 0); \
>> +}
>> +
>> +DEFINE_TRACING_MAP_CMP_FN(s64);
>> +DEFINE_TRACING_MAP_CMP_FN(u64);
>> +DEFINE_TRACING_MAP_CMP_FN(s32);
>> +DEFINE_TRACING_MAP_CMP_FN(u32);
>> +DEFINE_TRACING_MAP_CMP_FN(s16);
>> +DEFINE_TRACING_MAP_CMP_FN(u16);
>> +DEFINE_TRACING_MAP_CMP_FN(s8);
>> +DEFINE_TRACING_MAP_CMP_FN(u8);
>> +
>> +tracing_map_cmp_fn_t tracing_map_cmp_num(int field_size,
>> + int field_is_signed)
>> +{
>> + tracing_map_cmp_fn_t fn = tracing_map_cmp_none;
>> +
>> + switch (field_size) {
>> + case 8:
>> + if (field_is_signed)
>> + fn = tracing_map_cmp_s64;
>> + else
>> + fn = tracing_map_cmp_u64;
>> + break;
>> + case 4:
>> + if (field_is_signed)
>> + fn = tracing_map_cmp_s32;
>> + else
>> + fn = tracing_map_cmp_u32;
>> + break;
>> + case 2:
>> + if (field_is_signed)
>> + fn = tracing_map_cmp_s16;
>> + else
>> + fn = tracing_map_cmp_u16;
>> + break;
>> + case 1:
>> + if (field_is_signed)
>> + fn = tracing_map_cmp_s8;
>> + else
>> + fn = tracing_map_cmp_u8;
>> + break;
>> + }
>> +
>> + return fn;
>> +}
>> +
>> +static int tracing_map_add_field(struct tracing_map *map,
>> + tracing_map_cmp_fn_t cmp_fn)
>> +{
>> + int ret = -EINVAL;
>> +
>> + if (map->n_fields < TRACING_MAP_FIELDS_MAX) {
>> + ret = map->n_fields;
>> + map->fields[map->n_fields++].cmp_fn = cmp_fn;
>> + }
>> +
>> + return ret;
>> +}
>> +
>> +/**
>> + * tracing_map_add_sum_field - Add a field describing a tracing_map sum
>> + * @map: The tracing_map
>> + *
>> + * Add a sum field to the map and return the index identifying it in
>> + * the map and associated tracing_map_elts. This is the index used
>> + * for instance to update a sum for a particular tracing_map_elt using
>> + * tracing_map_update_sum() or reading it via tracing_map_read_sum().
>> + *
>> + * Return: The index identifying the field in the map and associated
>> + * tracing_map_elts.
>> + */
>> +int tracing_map_add_sum_field(struct tracing_map *map)
>> +{
>> + return tracing_map_add_field(map, tracing_map_cmp_atomic64);
>> +}
>> +
>> +/**
>> + * tracing_map_add_key_field - Add a field describing a tracing_map key
>> + * @map: The tracing_map
>> + * @offset: The offset within the key
>> + * @cmp_fn: The comparison function that will be used to sort on the key
>> + *
>> + * Let the map know there is a key and that, if it's used as a sort
>> + * key, cmp_fn should be used to compare it.
>> + *
>> + * A key can be a subset of a compound key; for that purpose, the
>> + * offset param is used to describe where within the compound key
>> + * the key referenced by this key field resides.
>> + *
>> + * Return: The index identifying the field in the map and associated
>> + * tracing_map_elts.
>> + */
>> +int tracing_map_add_key_field(struct tracing_map *map,
>> + unsigned int offset,
>> + tracing_map_cmp_fn_t cmp_fn)
>> +
>> +{
>> + int idx = tracing_map_add_field(map, cmp_fn);
>> +
>> + if (idx < 0)
>> + return idx;
>> +
>> + map->fields[idx].offset = offset;
>> +
>> + map->key_idx[map->n_keys++] = idx;
>> +
>> + return idx;
>> +}
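
To make the offset param concrete, here's a sketch of describing a
two-part compound key (the 'example_key' layout is hypothetical, not
taken from the patch):

        struct example_key {
                int     pid;
                char    comm[16];
        };

        static int example_add_key_fields(struct tracing_map *map)
        {
                int idx;

                idx = tracing_map_add_key_field(map,
                                offsetof(struct example_key, pid),
                                tracing_map_cmp_num(sizeof(int), 1));
                if (idx < 0)
                        return idx;

                return tracing_map_add_key_field(map,
                                offsetof(struct example_key, comm),
                                tracing_map_cmp_string);
        }

Each part of the compound key can then be used as its own sort key and
will be compared with the cmp_fn registered for it.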
>> +
>> +static void tracing_map_elt_clear(struct tracing_map_elt *elt)
>> +{
>> + unsigned i;
>> +
>> + for (i = 0; i < elt->map->n_fields; i++)
>> + if (elt->fields[i].cmp_fn == tracing_map_cmp_atomic64)
>> + atomic64_set(&elt->fields[i].sum, 0);
>> +
>> + if (elt->map->ops && elt->map->ops->elt_clear)
>> + elt->map->ops->elt_clear(elt);
>> +}
>> +
>> +static void tracing_map_elt_init_fields(struct tracing_map_elt *elt)
>> +{
>> + unsigned int i;
>> +
>> + tracing_map_elt_clear(elt);
>> +
>> + for (i = 0; i < elt->map->n_fields; i++) {
>> + elt->fields[i].cmp_fn = elt->map->fields[i].cmp_fn;
>> +
>> + if (elt->fields[i].cmp_fn != tracing_map_cmp_atomic64)
>> + elt->fields[i].offset = elt->map->fields[i].offset;
>> + }
>> +}
>> +
>> +static void tracing_map_elt_free(struct tracing_map_elt *elt)
>> +{
>> + if (!elt)
>> + return;
>> +
>> + if (elt->map->ops && elt->map->ops->elt_free)
>> + elt->map->ops->elt_free(elt);
>> + kfree(elt->fields);
>> + kfree(elt->key);
>> + kfree(elt);
>> +}
>> +
>> +static struct tracing_map_elt *tracing_map_elt_alloc(struct tracing_map *map)
>> +{
>> + struct tracing_map_elt *elt;
>> + int err = 0;
>> +
>> + elt = kzalloc(sizeof(*elt), GFP_KERNEL);
>> + if (!elt)
>> + return ERR_PTR(-ENOMEM);
>> +
>> + elt->map = map;
>> +
>> + elt->key = kzalloc(map->key_size, GFP_KERNEL);
>> + if (!elt->key) {
>> + err = -ENOMEM;
>> + goto free;
>> + }
>> +
>> + elt->fields = kcalloc(map->n_fields, sizeof(*elt->fields), GFP_KERNEL);
>> + if (!elt->fields) {
>> + err = -ENOMEM;
>> + goto free;
>> + }
>> +
>> + tracing_map_elt_init_fields(elt);
>> +
>> + if (map->ops && map->ops->elt_alloc) {
>> + err = map->ops->elt_alloc(elt);
>> + if (err)
>> + goto free;
>> + }
>> + return elt;
>> + free:
>> + tracing_map_elt_free(elt);
>> +
>> + return ERR_PTR(err);
>> +}
>> +
>> +static struct tracing_map_elt *get_free_elt(struct tracing_map *map)
>> +{
>> + struct tracing_map_elt *elt = NULL;
>> + int idx;
>> +
>> + idx = atomic_inc_return(&map->next_elt);
>> + if (idx < map->max_elts) {
>> + elt = map->elts[idx];
>> + if (map->ops && map->ops->elt_init)
>> + map->ops->elt_init(elt);
>> + }
>> +
>> + return elt;
>> +}
>> +
>> +static void tracing_map_free_elts(struct tracing_map *map)
>> +{
>> + unsigned int i;
>> +
>> + if (!map->elts)
>> + return;
>> +
>> + for (i = 0; i < map->max_elts; i++)
>> + tracing_map_elt_free(map->elts[i]);
>> +
>> + kfree(map->elts);
>> +}
>> +
>> +static int tracing_map_alloc_elts(struct tracing_map *map)
>> +{
>> + unsigned int i;
>> +
>> + map->elts = kcalloc(map->max_elts, sizeof(struct tracing_map_elt *),
>> + GFP_KERNEL);
>> + if (!map->elts)
>> + return -ENOMEM;
>> +
>> + for (i = 0; i < map->max_elts; i++) {
>> + map->elts[i] = tracing_map_elt_alloc(map);
>> + if (IS_ERR(map->elts[i])) {
>> + map->elts[i] = NULL;
>> + tracing_map_free_elts(map);
>> +
>> + return -ENOMEM;
>> + }
>> + }
>> +
>> + return 0;
>> +}
>> +
>> +static inline bool keys_match(void *key, void *test_key, unsigned key_size)
>> +{
>> + bool match = true;
>> +
>> + if (memcmp(key, test_key, key_size))
>> + match = false;
>> +
>> + return match;
>> +}
>> +
>> +/**
>> + * tracing_map_insert - Insert key and/or retrieve val from a tracing_map
>> + * @map: The tracing_map to insert into
>> + * @key: The key to insert
>> + *
>> + * Inserts a key into a tracing_map and creates and returns a new
>> + * tracing_map_elt for it, or if the key has already been inserted by
>> + * a previous call, returns the tracing_map_elt already associated
>> + * with it. When the map was created, the number of elements to be
>> + * allocated for the map was specified (internally maintained as
>> + * 'max_elts' in struct tracing_map), and that number of
>> + * tracing_map_elts was created by tracing_map_init(). This is the
>> + * pre-allocated pool of tracing_map_elts that tracing_map_insert()
>> + * will allocate from when adding new keys. Once that pool is
>> + * exhausted, tracing_map_insert() is useless and will return NULL to
>> + * signal that state.
>> + *
>> + * This is a lock-free tracing map insertion function implementing a
>> + * modified form of Cliff Click's basic insertion algorithm. It
>> + * requires the table size be a power of two. To prevent any
>> + * possibility of an infinite loop, we always make the internal table
>> + * size double the requested table size (max_elts * 2).
>> + * Likewise, we never reuse a slot or resize or delete elements - when
>> + * we've reached max_elts entries, we simply return NULL once we've
>> + * run out of entries. Readers can at any point in time traverse the
>> + * tracing map and safely access the key/val pairs.
>> + *
>> + * Return: the tracing_map_elt pointer val associated with the key.
>> + * If this was a newly inserted key, the val will be a newly allocated
>> + * and associated tracing_map_elt pointer val. If the key wasn't
>> + * found and the pool of tracing_map_elts has been exhausted, NULL is
>> + * returned and no further insertions will succeed.
>> + */
>> +struct tracing_map_elt *tracing_map_insert(struct tracing_map *map, void *key)
>> +{
>> + u32 idx, key_hash, test_key;
>> +
>> + key_hash = jhash(key, map->key_size, 0);
>> + idx = key_hash >> (32 - (map->map_bits + 1));
>> +
>> + while (1) {
>> + idx &= (map->map_size - 1);
>> + test_key = map->map[idx].key;
>> +
>> + if (test_key && test_key == key_hash && map->map[idx].val &&
>> + keys_match(key, map->map[idx].val->key, map->key_size))
>> + return map->map[idx].val;
>> +
>> + if (!test_key && !cmpxchg(&map->map[idx].key, 0, key_hash)) {
>> + struct tracing_map_elt *elt;
>> +
>> + elt = get_free_elt(map);
>> + if (!elt)
>> + break;
>> + memcpy(elt->key, key, map->key_size);
>> + map->map[idx].val = elt;
>> +
>> + return map->map[idx].val;
>> + }
>> + idx++;
>> + }
>> +
>> + return NULL;
>> +}
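
For illustration, a sketch of a hot-path caller (names hypothetical),
including the handling of the NULL return once the elt pool is
exhausted:

        static void example_record_hit(struct tracing_map *map, void *key,
                                       unsigned int hitcount_idx)
        {
                struct tracing_map_elt *elt;

                elt = tracing_map_insert(map, key);
                if (!elt)
                        return; /* elt pool exhausted: drop this event */

                tracing_map_update_sum(elt, hitcount_idx, 1);
        }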
>> +
>> +/**
>> + * tracing_map_destroy - Destroy a tracing_map
>> + * @map: The tracing_map to destroy
>> + *
>> + * Frees a tracing_map along with its associated array of
>> + * tracing_map_elts.
>> + *
>> + * Callers should make sure there are no readers or writers actively
>> + * reading or inserting into the map before calling this.
>> + */
>> +void tracing_map_destroy(struct tracing_map *map)
>> +{
>> + if (!map)
>> + return;
>> +
>> + tracing_map_free_elts(map);
>> +
>> + kfree(map->map);
>> + kfree(map);
>> +}
>> +
>> +/**
>> + * tracing_map_clear - Clear a tracing_map
>> + * @map: The tracing_map to clear
>> + *
>> + * Resets the tracing map to a cleared or initial state. The
>> + * tracing_map_elts are all cleared, and the array of struct
>> + * tracing_map_entry is reset to an initialized state.
>> + *
>> + * Callers should make sure there are no writers actively inserting
>> + * into the map before calling this.
>> + */
>> +void tracing_map_clear(struct tracing_map *map)
>> +{
>> + unsigned int i, size;
>> +
>> + atomic_set(&map->next_elt, -1);
>> +
>> + size = map->map_size * sizeof(struct tracing_map_entry);
>> + memset(map->map, 0, size);
>> +
>> + for (i = 0; i < map->max_elts; i++)
>> + tracing_map_elt_clear(map->elts[i]);
>> +}
>> +
>> +static void set_sort_key(struct tracing_map *map,
>> + struct tracing_map_sort_key *sort_key)
>> +{
>> + map->sort_key = *sort_key;
>> +}
>> +
>> +/**
>> + * tracing_map_create - Create a lock-free map and element pool
>> + * @map_bits: The size of the map (2 ** map_bits)
>> + * @key_size: The size of the key for the map in bytes
>> + * @ops: Optional client-defined tracing_map_ops instance
>> + * @private_data: Client data associated with the map
>> + *
>> + * Creates and sets up a map to contain 2 ** map_bits number of
>> + * elements (internally maintained as 'max_elts' in struct
>> + * tracing_map). Before using, map fields should be added to the map
>> + * with tracing_map_add_sum_field() and tracing_map_add_key_field().
>> + * tracing_map_init() should then be called to allocate the array of
>> + * tracing_map_elts, in order to avoid allocating anything in the map
>> + * insertion path. The user-specified map size reflects the maximum
>> + * number of elements that can be contained in the table requested by
>> + * the user - internally we double that in order to keep the table
>> + * sparse and keep collisions manageable.
>> + *
>> + * A tracing_map is a special-purpose map designed to aggregate or
>> + * 'sum' one or more values associated with a specific object of type
>> + * tracing_map_elt, which is attached by the map to a given key.
>> + *
>> + * tracing_map_create() sets up the map itself, and provides
>> + * operations for inserting tracing_map_elts, but doesn't allocate the
>> + * tracing_map_elts themselves, or provide a means for describing the
>> + * keys or sums associated with the tracing_map_elts. All
>> + * tracing_map_elts for a given map have the same set of sums and
>> + * keys, which are defined by the client using the functions
>> + * tracing_map_add_key_field() and tracing_map_add_sum_field(). Once
>> + * the fields are defined, the pool of elements allocated for the map
>> + * can be created, which occurs when the client code calls
>> + * tracing_map_init().
>> + *
>> + * When tracing_map_init() returns, tracing_map_elt elements can be
>> + * inserted into the map using tracing_map_insert(). When called,
>> + * tracing_map_insert() grabs a free tracing_map_elt from the pool, or
>> + * finds an existing match in the map and in either case returns it.
>> + * The client can then use tracing_map_update_sum() and
>> + * tracing_map_read_sum() to update or read a given sum field for the
>> + * tracing_map_elt.
>> + *
>> + * The client can at any point retrieve and traverse the current set
>> + * of inserted tracing_map_elts in a tracing_map, via
>> + * tracing_map_sort_entries(). Sorting can be done on any field,
>> + * including keys.
>> + *
>> + * See tracing_map.h for a description of tracing_map_ops.
>> + *
>> + * Return: the tracing_map pointer if successful, ERR_PTR if not.
>> + */
>> +struct tracing_map *tracing_map_create(unsigned int map_bits,
>> + unsigned int key_size,
>> + struct tracing_map_ops *ops,
>> + void *private_data)
>> +{
>> + struct tracing_map *map;
>> + unsigned int i;
>> +
>> + if (map_bits < TRACING_MAP_BITS_MIN ||
>> + map_bits > TRACING_MAP_BITS_MAX)
>> + return ERR_PTR(-EINVAL);
>> +
>> + map = kzalloc(sizeof(*map), GFP_KERNEL);
>> + if (!map)
>> + return ERR_PTR(-ENOMEM);
>> +
>> + map->map_bits = map_bits;
>> + map->max_elts = (1 << map_bits);
>> + atomic_set(&map->next_elt, -1);
>> +
>> + map->map_size = (1 << (map_bits + 1));
>> + map->ops = ops;
>> +
>> + map->private_data = private_data;
>> +
>> + map->map = kcalloc(map->map_size, sizeof(struct tracing_map_entry),
>> + GFP_KERNEL);
>> + if (!map->map)
>> + goto free;
>> +
>> + map->key_size = key_size;
>> + for (i = 0; i < TRACING_MAP_KEYS_MAX; i++)
>> + map->key_idx[i] = -1;
>> + out:
>> + return map;
>> + free:
>> + tracing_map_destroy(map);
>> + map = ERR_PTR(-ENOMEM);
>> +
>> + goto out;
>> +}
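
Putting the sequence described above together, a minimal end-to-end
setup sketch (a single u32 key plus a hitcount sum; all 'example'
names are hypothetical):

        static struct tracing_map *example_map_setup(int *hitcount_idx)
        {
                struct tracing_map *map;
                int key_idx, err;

                map = tracing_map_create(TRACING_MAP_BITS_DEFAULT,
                                         sizeof(u32), NULL, NULL);
                if (IS_ERR(map))
                        return map;

                *hitcount_idx = tracing_map_add_sum_field(map);
                key_idx = tracing_map_add_key_field(map, 0,
                                tracing_map_cmp_num(sizeof(u32), 0));
                if (*hitcount_idx < 0 || key_idx < 0) {
                        tracing_map_destroy(map);
                        return ERR_PTR(-EINVAL);
                }

                err = tracing_map_init(map); /* allocates the elt pool */
                if (err) {
                        tracing_map_destroy(map);
                        return ERR_PTR(err);
                }

                return map;
        }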
>> +
>> +/**
>> + * tracing_map_init - Allocate and clear a map's tracing_map_elts
>> + * @map: The tracing_map to initialize
>> + *
>> + * Allocates and clears a pool of tracing_map_elts equal to the
>> + * user-specified size of 2 ** map_bits (internally maintained as
>> + * 'max_elts' in struct tracing_map). Before using, the map fields
>> + * should be added to the map with tracing_map_add_sum_field() and
>> + * tracing_map_add_key_field(). tracing_map_init() should then be
>> + * called to allocate the array of tracing_map_elts, in order to avoid
>> + * allocating anything in the map insertion path. The user-specified
>> + * map size reflects the max number of elements requested by the user
>> + * - internally we double that in order to keep the table sparse and
>> + * keep collisions manageable.
>> + *
>> + * See tracing_map.h for a description of tracing_map_ops.
>> + *
>> + * Return: 0 if successful, a negative error code if not.
>> + */
>> +int tracing_map_init(struct tracing_map *map)
>> +{
>> + int err;
>> +
>> + if (map->n_fields < 2)
>> + return -EINVAL; /* need at least 1 key and 1 val */
>> +
>> + err = tracing_map_alloc_elts(map);
>> + if (err)
>> + return err;
>> +
>> + tracing_map_clear(map);
>> +
>> + return err;
>> +}
>> +
>> +static int cmp_entries_dup(const struct tracing_map_sort_entry **a,
>> + const struct tracing_map_sort_entry **b)
>> +{
>> + int ret = 0;
>> +
>> + if (memcmp((*a)->key, (*b)->key, (*a)->elt->map->key_size))
>> + ret = 1;
>> +
>> + return ret;
>> +}
>> +
>> +static int cmp_entries_sum(const struct tracing_map_sort_entry **a,
>> + const struct tracing_map_sort_entry **b)
>> +{
>> + const struct tracing_map_elt *elt_a, *elt_b;
>> + struct tracing_map_sort_key *sort_key;
>> + struct tracing_map_field *field;
>> + tracing_map_cmp_fn_t cmp_fn;
>> + void *val_a, *val_b;
>> + int ret = 0;
>> +
>> + elt_a = (*a)->elt;
>> + elt_b = (*b)->elt;
>> +
>> + sort_key = &elt_a->map->sort_key;
>> +
>> + field = &elt_a->fields[sort_key->field_idx];
>> + cmp_fn = field->cmp_fn;
>> +
>> + val_a = &elt_a->fields[sort_key->field_idx].sum;
>> + val_b = &elt_b->fields[sort_key->field_idx].sum;
>> +
>> + ret = cmp_fn(val_a, val_b);
>> + if (sort_key->descending)
>> + ret = -ret;
>> +
>> + return ret;
>> +}
>> +
>> +static int cmp_entries_key(const struct tracing_map_sort_entry **a,
>> + const struct tracing_map_sort_entry **b)
>> +{
>> + const struct tracing_map_elt *elt_a, *elt_b;
>> + struct tracing_map_sort_key *sort_key;
>> + struct tracing_map_field *field;
>> + tracing_map_cmp_fn_t cmp_fn;
>> + void *val_a, *val_b;
>> + int ret = 0;
>> +
>> + elt_a = (*a)->elt;
>> + elt_b = (*b)->elt;
>> +
>> + sort_key = &elt_a->map->sort_key;
>> +
>> + field = &elt_a->fields[sort_key->field_idx];
>> +
>> + cmp_fn = field->cmp_fn;
>> +
>> + val_a = elt_a->key + field->offset;
>> + val_b = elt_b->key + field->offset;
>> +
>> + ret = cmp_fn(val_a, val_b);
>> + if (sort_key->descending)
>> + ret = -ret;
>> +
>> + return ret;
>> +}
>> +
>> +static void destroy_sort_entry(struct tracing_map_sort_entry *entry)
>> +{
>> + if (!entry)
>> + return;
>> +
>> + if (entry->elt_copied)
>> + tracing_map_elt_free(entry->elt);
>> +
>> + kfree(entry);
>> +}
>> +
>> +/**
>> + * tracing_map_destroy_sort_entries - Destroy a tracing_map_sort_entries() array
>> + * @entries: The entries to destroy
>> + * @n_entries: The number of entries in the array
>> + *
>> + * Destroy the elements returned by a tracing_map_sort_entries() call,
>> + * along with the array that held them.
>> + */
>> +void tracing_map_destroy_sort_entries(struct tracing_map_sort_entry **entries,
>> + unsigned int n_entries)
>> +{
>> + unsigned int i;
>> +
>> + for (i = 0; i < n_entries; i++)
>> + destroy_sort_entry(entries[i]);
>> +
>> + kfree(entries);
>> +}
>> +
>> +static struct tracing_map_sort_entry *
>> +create_sort_entry(void *key, struct tracing_map_elt *elt)
>> +{
>> + struct tracing_map_sort_entry *sort_entry;
>> +
>> + sort_entry = kzalloc(sizeof(*sort_entry), GFP_KERNEL);
>> + if (!sort_entry)
>> + return NULL;
>> +
>> + sort_entry->key = key;
>> + sort_entry->elt = elt;
>> +
>> + return sort_entry;
>> +}
>> +
>> +static struct tracing_map_elt *copy_elt(struct tracing_map_elt *elt)
>> +{
>> + struct tracing_map_elt *dup_elt;
>> + unsigned int i;
>> +
>> + dup_elt = tracing_map_elt_alloc(elt->map);
>> + if (!dup_elt)
>> + return NULL;
>> +
>> + if (elt->map->ops && elt->map->ops->elt_copy)
>> + elt->map->ops->elt_copy(dup_elt, elt);
>> +
>> + dup_elt->private_data = elt->private_data;
>> + memcpy(dup_elt->key, elt->key, elt->map->key_size);
>> +
>> + for (i = 0; i < elt->map->n_fields; i++) {
>> + atomic64_set(&dup_elt->fields[i].sum,
>> + atomic64_read(&elt->fields[i].sum));
>> + dup_elt->fields[i].cmp_fn = elt->fields[i].cmp_fn;
>> + }
>> +
>> + return dup_elt;
>> +}
>> +
>> +static int merge_dup(struct tracing_map_sort_entry **sort_entries,
>> + unsigned int target, unsigned int dup)
>> +{
>> + struct tracing_map_elt *target_elt, *elt;
>> + bool first_dup = (target - dup) == 1;
>> + int i;
>> +
>> + if (first_dup) {
>> + elt = sort_entries[target]->elt;
>> + target_elt = copy_elt(elt);
>> + if (!target_elt)
>> + return -ENOMEM;
>> + sort_entries[target]->elt = target_elt;
>> + sort_entries[target]->elt_copied = true;
>> + } else
>> + target_elt = sort_entries[target]->elt;
>> +
>> + elt = sort_entries[dup]->elt;
>> +
>> + for (i = 0; i < elt->map->n_fields; i++)
>> + atomic64_add(atomic64_read(&elt->fields[i].sum),
>> + &target_elt->fields[i].sum);
>> +
>> + sort_entries[dup]->dup = true;
>> +
>> + return 0;
>> +}
>> +
>> +static int merge_dups(struct tracing_map_sort_entry **sort_entries,
>> + int n_entries, unsigned int key_size)
>> +{
>> + unsigned int dups = 0, total_dups = 0;
>> + int err, i, j;
>> + void *key;
>> +
>> + if (n_entries < 2)
>> + return total_dups;
>> +
>> + sort(sort_entries, n_entries, sizeof(struct tracing_map_sort_entry *),
>> + (int (*)(const void *, const void *))cmp_entries_dup, NULL);
>> +
>> + key = sort_entries[0]->key;
>> + for (i = 1; i < n_entries; i++) {
>> + if (!memcmp(sort_entries[i]->key, key, key_size)) {
>> + dups++; total_dups++;
>> + err = merge_dup(sort_entries, i - dups, i);
>> + if (err)
>> + return err;
>> + continue;
>> + }
>> + key = sort_entries[i]->key;
>> + dups = 0;
>> + }
>> +
>> + if (!total_dups)
>> + return total_dups;
>> +
>> + for (i = 0, j = 0; i < n_entries; i++) {
>> + if (!sort_entries[i]->dup) {
>> + sort_entries[j] = sort_entries[i];
>> + if (j++ != i)
>> + sort_entries[i] = NULL;
>> + } else {
>> + destroy_sort_entry(sort_entries[i]);
>> + sort_entries[i] = NULL;
>> + }
>> + }
>> +
>> + return total_dups;
>> +}
>> +
>> +static bool is_key(struct tracing_map *map, unsigned int field_idx)
>> +{
>> + unsigned int i;
>> +
>> + for (i = 0; i < map->n_keys; i++)
>> + if (map->key_idx[i] == field_idx)
>> + return true;
>> + return false;
>> +}
>> +
>> +static void sort_secondary(struct tracing_map *map,
>> + const struct tracing_map_sort_entry **entries,
>> + unsigned int n_entries,
>> + struct tracing_map_sort_key *primary_key,
>> + struct tracing_map_sort_key *secondary_key)
>> +{
>> + int (*primary_fn)(const struct tracing_map_sort_entry **,
>> + const struct tracing_map_sort_entry **);
>> + int (*secondary_fn)(const struct tracing_map_sort_entry **,
>> + const struct tracing_map_sort_entry **);
>> + unsigned i, start = 0, n_sub = 1;
>> +
>> + if (is_key(map, primary_key->field_idx))
>> + primary_fn = cmp_entries_key;
>> + else
>> + primary_fn = cmp_entries_sum;
>> +
>> + if (is_key(map, secondary_key->field_idx))
>> + secondary_fn = cmp_entries_key;
>> + else
>> + secondary_fn = cmp_entries_sum;
>> +
>> + for (i = 0; i < n_entries - 1; i++) {
>> + const struct tracing_map_sort_entry **a = &entries[i];
>> + const struct tracing_map_sort_entry **b = &entries[i + 1];
>> +
>> + if (primary_fn(a, b) == 0) {
>> + n_sub++;
>> + if (i < n_entries - 2)
>> + continue;
>> + }
>> +
>> + if (n_sub < 2) {
>> + start = i + 1;
>> + n_sub = 1;
>> + continue;
>> + }
>> +
>> + set_sort_key(map, secondary_key);
>> + sort(&entries[start], n_sub,
>> + sizeof(struct tracing_map_sort_entry *),
>> + (int (*)(const void *, const void *))secondary_fn, NULL);
>> + set_sort_key(map, primary_key);
>> +
>> + start = i + 1;
>> + n_sub = 1;
>> + }
>> +}
>> +
>> +/**
>> + * tracing_map_sort_entries - Sort the current set of tracing_map_elts in a map
>> + * @map: The tracing_map
>> + * @sort_keys: The sort key(s) to use for sorting, primary key first
>> + * @n_sort_keys: The number of sort keys
>> + * @sort_entries: outval: pointer to allocated and sorted array of entries
>> + *
>> + * tracing_map_sort_entries() sorts the current set of entries in the
>> + * map and returns the list of tracing_map_sort_entries containing
>> + * them to the client in the sort_entries param. The client can
>> + * access the struct tracing_map_elt element of interest directly as
>> + * the 'elt' field of a returned struct tracing_map_sort_entry object.
>> + *
>> + * Each sort key has only two fields: 'field_idx' and 'descending'.
>> + * 'field_idx' refers to the index of the field added via
>> + * tracing_map_add_sum_field() or tracing_map_add_key_field() when
>> + * the tracing_map was initialized. 'descending' is a flag that, if
>> + * set, reverses the sort order, which by default is ascending.
>> + *
>> + * The client should not hold on to the returned array but should use
>> + * it and call tracing_map_destroy_sort_entries() when done.
>> + *
>> + * Return: the number of sort_entries in the struct tracing_map_sort_entry
>> + * array, negative on error
>> + */
>> +int tracing_map_sort_entries(struct tracing_map *map,
>> + struct tracing_map_sort_key *sort_keys,
>> + unsigned int n_sort_keys,
>> + struct tracing_map_sort_entry ***sort_entries)
>> +{
>> + int (*cmp_entries_fn)(const struct tracing_map_sort_entry **,
>> + const struct tracing_map_sort_entry **);
>> + struct tracing_map_sort_entry *sort_entry, **entries;
>> + int i, n_entries, ret;
>> +
>> + entries = kcalloc(map->max_elts, sizeof(sort_entry), GFP_KERNEL);
>> + if (!entries)
>> + return -ENOMEM;
>> +
>> + for (i = 0, n_entries = 0; i < map->map_size; i++) {
>> + if (!map->map[i].key || !map->map[i].val)
>> + continue;
>> +
>> + entries[n_entries] = create_sort_entry(map->map[i].val->key,
>> + map->map[i].val);
>> + if (!entries[n_entries++]) {
>> + ret = -ENOMEM;
>> + goto free;
>> + }
>> + }
>> +
>> + if (n_entries == 0) {
>> + ret = 0;
>> + goto free;
>> + }
>> +
>> + if (n_entries == 1) {
>> + *sort_entries = entries;
>> + return 1;
>> + }
>> +
>> + ret = merge_dups(entries, n_entries, map->key_size);
>> + if (ret < 0)
>> + goto free;
>> + n_entries -= ret;
>> +
>> + if (is_key(map, sort_keys[0].field_idx))
>> + cmp_entries_fn = cmp_entries_key;
>> + else
>> + cmp_entries_fn = cmp_entries_sum;
>> +
>> + set_sort_key(map, &sort_keys[0]);
>> +
>> + sort(entries, n_entries, sizeof(struct tracing_map_sort_entry *),
>> + (int (*)(const void *, const void *))cmp_entries_fn, NULL);
>> +
>> + if (n_sort_keys > 1)
>> + sort_secondary(map,
>> + (const struct tracing_map_sort_entry **)entries,
>> + n_entries,
>> + &sort_keys[0],
>> + &sort_keys[1]);
>> +
>> + *sort_entries = entries;
>> +
>> + return n_entries;
>> + free:
>> + tracing_map_destroy_sort_entries(entries, n_entries);
>> +
>> + return ret;
>> +}
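
A sketch of a consumer of this function (the 'hitcount_idx' sum field
index is hypothetical), sorting descending on a sum and releasing the
result as required above:

        static void example_report(struct tracing_map *map,
                                   unsigned int hitcount_idx)
        {
                struct tracing_map_sort_entry **entries;
                struct tracing_map_sort_key sort_key = {
                        .field_idx  = hitcount_idx,
                        .descending = true,
                };
                int i, n_entries;

                n_entries = tracing_map_sort_entries(map, &sort_key, 1,
                                                     &entries);
                if (n_entries <= 0)
                        return;

                for (i = 0; i < n_entries; i++)
                        pr_info("hitcount: %llu\n",
                                tracing_map_read_sum(entries[i]->elt,
                                                     hitcount_idx));

                tracing_map_destroy_sort_entries(entries, n_entries);
        }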
>> diff --git a/kernel/trace/tracing_map.h b/kernel/trace/tracing_map.h
>> new file mode 100644
>> index 0000000..2e63c5c
>> --- /dev/null
>> +++ b/kernel/trace/tracing_map.h
>> @@ -0,0 +1,258 @@
>> +#ifndef __TRACING_MAP_H
>> +#define __TRACING_MAP_H
>> +
>> +#define TRACING_MAP_BITS_DEFAULT 11
>> +#define TRACING_MAP_BITS_MAX 17
>> +#define TRACING_MAP_BITS_MIN 7
>> +
>> +#define TRACING_MAP_FIELDS_MAX 4
>> +#define TRACING_MAP_KEYS_MAX 2
>> +
>> +#define TRACING_MAP_SORT_KEYS_MAX 2
>> +
>> +typedef int (*tracing_map_cmp_fn_t) (void *val_a, void *val_b);
>> +
>> +/*
>> + * This is an overview of the tracing_map data structures and how they
>> + * relate to the tracing_map API. The details of the algorithms
>> + * aren't discussed here - this is just a general overview of the data
>> + * structures and how they interact with the API.
>> + *
>> + * The central data structure of the tracing_map is an initially
>> + * zeroed array of struct tracing_map_entry (stored in the map field
>> + * of struct tracing_map). tracing_map_entry is a very simple data
>> + * structure containing only two fields: a 32-bit unsigned 'key'
>> + * variable and a pointer named 'val'. This array of struct
>> + * tracing_map_entry is essentially a hash table which will be
>> + * modified by a single function, tracing_map_insert(), but which can
>> + * be traversed and read by a user at any time (though the user does
>> + * this indirectly via an array of tracing_map_sort_entry - see the
>> + * explanation of that data structure in the discussion of the
>> + * sorting-related data structures below).
>> + *
>> + * The central function of the tracing_map API is
>> + * tracing_map_insert(). tracing_map_insert() hashes the
>> + * arbitrarily-sized key passed into it into a 32-bit unsigned key.
>> + * It then uses this key, truncated to the array size, as an index
>> + * into the array of tracing_map_entries. If the value of the 'key'
>> + * field of the tracing_map_entry found at that location is 0, then
>> + * that entry is considered to be free and can be claimed, by
>> + * replacing the 0 in the 'key' field of the tracing_map_entry with
>> + * the new 32-bit hashed key. Once claimed, that tracing_map_entry's
>> + * 'val' field is then used to store a unique element which will be
>> + * forever associated with that 32-bit hashed key in the
>> + * tracing_map_entry.
>> + *
>> + * That unique element now in the tracing_map_entry's 'val' field is
>> + * an instance of tracing_map_elt, where 'elt' in the latter part of
>> + * that variable name is short for 'element'. The purpose of a
>> + * tracing_map_elt is to hold values specific to the particular
>> + * 32-bit hashed key it's associated with: things such as the unique
>> + * set of aggregated sums associated with the 32-bit hashed key, along
>> + * with a copy of the full key associated with the entry, which was
>> + * used to produce the 32-bit hashed key.
>> + *
>> + * When tracing_map_create() is called to create the tracing map, the
>> + * user specifies (indirectly via the map_bits param, the details are
>> + * unimportant for this discussion) the maximum number of elements
>> + * that the map can hold (stored in the max_elts field of struct
>> + * tracing_map). This is the maximum possible number of
>> + * tracing_map_entries in the tracing_map_entry array which can be
>> + * 'claimed' as described in the above discussion, and therefore is
>> + * also the maximum number of tracing_map_elts that can be associated
>> + * with the tracing_map_entry array in the tracing_map. Because of
>> + * the way the insertion algorithm works, the size of the allocated
>> + * tracing_map_entry array is always twice the maximum number of
>> + * elements (2 * max_elts). This value is stored in the map_size
>> + * field of struct tracing_map.
>> + *
>> + * Because tracing_map_insert() needs to work from any context,
>> + * including from within the memory allocation functions themselves,
>> + * both the tracing_map_entry array and a pool of max_elts
>> + * tracing_map_elts are pre-allocated before any call is made to
>> + * tracing_map_insert().
>> + *
>> + * The tracing_map_entry array is allocated as a single block by
>> + * tracing_map_create().
>> + *
>> + * Because the tracing_map_elts are much larger objects and can't
>> + * generally be allocated together as a single large array without
>> + * failure, they're allocated individually, by tracing_map_init().
>> + *
>> + * The pool of tracing_map_elts are allocated by tracing_map_init()
>> + * rather than by tracing_map_create() because at the time
>> + * tracing_map_create() is called, there isn't enough information to
>> + * create the tracing_map_elts. Specifically, the user first needs to
>> + * tell the tracing_map implementation how many fields the
>> + * tracing_map_elts contain, and which types of fields they are (key
>> + * or sum). The user does this via the tracing_map_add_sum_field()
>> + * and tracing_map_add_key_field() functions, following which the user
>> + * calls tracing_map_init() to finish up the tracing map setup. The
>> + * array holding the pointers which make up the pre-allocated pool of
>> + * tracing_map_elts is allocated as a single block and is stored in
>> + * the elts field of struct tracing_map.
>> + *
>> + * There is also a set of structures used for sorting that might
>> + * benefit from some minimal explanation.
>> + *
>> + * struct tracing_map_sort_key is used to drive the sort at any given
>> + * time. By 'any given time' we mean that a different
>> + * tracing_map_sort_key will be used at different times depending on
>> + * whether the sort currently being performed is a primary or a
>> + * secondary sort.
>> + *
>> + * The sort key is very simple, consisting of the field index of the
>> + * tracing_map_elt field to sort on (which the user saved when adding
>> + * the field), and whether the sort should be done in an ascending or
>> + * descending order.
>> + *
>> + * For the convenience of the sorting code, a tracing_map_sort_entry
>> + * is created for each tracing_map_elt, again individually allocated
>> + * to avoid failures that might be expected if allocated as a single
>> + * large array of struct tracing_map_sort_entry.
>> + * tracing_map_sort_entry instances are the objects expected by the
>> + * various internal sorting functions, and are also what the user
>> + * ultimately receives after calling tracing_map_sort_entries().
>> + * Because it doesn't make sense for users to access an unordered and
>> + * sparsely populated tracing_map directly, the
>> + * tracing_map_sort_entries() function is provided so that users can
>> + * retrieve a sorted list of all existing elements. In addition to
>> + * the associated tracing_map_elt 'elt' field contained within the
>> + * tracing_map_sort_entry, which is the object of interest to the
>> + * user, tracing_map_sort_entry objects contain a number of additional
>> + * fields which are used for caching and internal purposes and can
>> + * safely be ignored.
>> + */
>> +
>> +struct tracing_map_field {
>> + tracing_map_cmp_fn_t cmp_fn;
>> + union {
>> + atomic64_t sum;
>> + unsigned int offset;
>> + };
>> +};
>> +
>> +struct tracing_map_elt {
>> + struct tracing_map *map;
>> + struct tracing_map_field *fields;
>> + void *key;
>> + void *private_data;
>> +};
>> +
>> +struct tracing_map_entry {
>> + u32 key;
>> + struct tracing_map_elt *val;
>> +};
>> +
>> +struct tracing_map_sort_key {
>> + unsigned int field_idx;
>> + bool descending;
>> +};
>> +
>> +struct tracing_map_sort_entry {
>> + void *key;
>> + struct tracing_map_elt *elt;
>> + bool elt_copied;
>> + bool dup;
>> +};
>> +
>> +struct tracing_map {
>> + unsigned int key_size;
>> + unsigned int map_bits;
>> + unsigned int map_size;
>> + unsigned int max_elts;
>> + atomic_t next_elt;
>> + struct tracing_map_elt **elts;
>> + struct tracing_map_entry *map;
>> + struct tracing_map_ops *ops;
>> + void *private_data;
>> + struct tracing_map_field fields[TRACING_MAP_FIELDS_MAX];
>> + unsigned int n_fields;
>> + int key_idx[TRACING_MAP_KEYS_MAX];
>> + unsigned int n_keys;
>> + struct tracing_map_sort_key sort_key;
>> +};
>> +
>> +/**
>> + * struct tracing_map_ops - callbacks for tracing_map
>> + *
>> + * The methods in this structure define callback functions for various
>> + * operations on a tracing_map or objects related to a tracing_map.
>> + *
>> + * For a detailed description of tracing_map_elt objects please see
>> + * the overview of tracing_map data structures at the beginning of
>> + * this file.
>> + *
>> + * All the methods below are optional.
>> + *
>> + * @elt_alloc: When a tracing_map_elt is allocated, this function, if
>> + * defined, will be called and gives clients the opportunity to
>> + * allocate additional data and attach it to the element
>> + * (tracing_map_elt->private_data is meant for that purpose).
>> + * Element allocation occurs before tracing begins, when the
>> + * tracing_map_init() call is made by client code.
>> + *
>> + * @elt_copy: At certain points in the lifetime of an element, it may
>> + * need to be copied. The copy should include a copy of the
>> + * client-allocated data, which can be copied into the 'to'
>> + * element from the 'from' element.
>> + *
>> + * @elt_free: When a tracing_map_elt is freed, this function is called
>> + * and allows client-allocated per-element data to be freed.
>> + *
>> + * @elt_clear: This callback allows per-element client-defined data to
>> + * be cleared, if applicable.
>> + *
>> + * @elt_init: This callback allows per-element client-defined data to
>> + * be initialized when used, i.e. when the element is actually
>> + * claimed by tracing_map_insert() in the context of the map
>> + * insertion.
>> + */
>> +struct tracing_map_ops {
>> + int (*elt_alloc)(struct tracing_map_elt *elt);
>> + void (*elt_copy)(struct tracing_map_elt *to,
>> + struct tracing_map_elt *from);
>> + void (*elt_free)(struct tracing_map_elt *elt);
>> + void (*elt_clear)(struct tracing_map_elt *elt);
>> + void (*elt_init)(struct tracing_map_elt *elt);
>> +};
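
For illustration, a sketch of a minimal client-defined ops instance
that attaches per-element data via elt->private_data, as described in
the elt_alloc comment above ('struct example_data' and its field are
hypothetical):

        struct example_data {
                u64 max_latency;        /* per-element client state */
        };

        static int example_elt_alloc(struct tracing_map_elt *elt)
        {
                elt->private_data = kzalloc(sizeof(struct example_data),
                                            GFP_KERNEL);
                return elt->private_data ? 0 : -ENOMEM;
        }

        static void example_elt_free(struct tracing_map_elt *elt)
        {
                kfree(elt->private_data);
        }

        static struct tracing_map_ops example_ops = {
                .elt_alloc      = example_elt_alloc,
                .elt_free       = example_elt_free,
        };

The instance would then be passed as the ops param of
tracing_map_create(); a client with copyable private state would also
supply elt_copy and elt_clear.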
>> +
>> +extern struct tracing_map *tracing_map_create(unsigned int map_bits,
>> + unsigned int key_size,
>> + struct tracing_map_ops *ops,
>> + void *private_data);
>> +extern int tracing_map_init(struct tracing_map *map);
>> +
>> +extern int tracing_map_add_sum_field(struct tracing_map *map);
>> +extern int tracing_map_add_key_field(struct tracing_map *map,
>> + unsigned int offset,
>> + tracing_map_cmp_fn_t cmp_fn);
>> +
>> +extern void tracing_map_destroy(struct tracing_map *map);
>> +extern void tracing_map_clear(struct tracing_map *map);
>> +
>> +extern struct tracing_map_elt *
>> +tracing_map_insert(struct tracing_map *map, void *key);
>> +
>> +extern tracing_map_cmp_fn_t tracing_map_cmp_num(int field_size,
>> + int field_is_signed);
>> +extern int tracing_map_cmp_string(void *val_a, void *val_b);
>> +extern int tracing_map_cmp_none(void *val_a, void *val_b);
>> +
>> +extern void tracing_map_update_sum(struct tracing_map_elt *elt,
>> + unsigned int i, u64 n);
>> +extern u64 tracing_map_read_sum(struct tracing_map_elt *elt, unsigned int i);
>> +extern void tracing_map_set_field_descr(struct tracing_map *map,
>> + unsigned int i,
>> + unsigned int key_offset,
>> + tracing_map_cmp_fn_t cmp_fn);
>> +extern int
>> +tracing_map_sort_entries(struct tracing_map *map,
>> + struct tracing_map_sort_key *sort_keys,
>> + unsigned int n_sort_keys,
>> + struct tracing_map_sort_entry ***sort_entries);
>> +
>> +extern void
>> +tracing_map_destroy_sort_entries(struct tracing_map_sort_entry **entries,
>> + unsigned int n_entries);
>> +#endif /* __TRACING_MAP_H */
>> --
>> 1.9.3
>>

--
Mathieu Desnoyers
EfficiOS Inc.
http://www.efficios.com

2015-07-17 01:35:31

by Tom Zanussi

[permalink] [raw]
Subject: Re: [PATCH v9 07/22] tracing: Add lock-free tracing_map

Hi Mathieu,

On Thu, 2015-07-16 at 23:25 +0000, Mathieu Desnoyers wrote:
> * Tom Zanussi wrote:
> >> Add tracing_map, a special-purpose lock-free map for tracing.
> >>
> >> tracing_map is designed to aggregate or 'sum' one or more values
> >> associated with a specific object of type tracing_map_elt, which
> >> is associated by the map to a given key.
> >>
> >> It provides various hooks allowing per-tracer customization and is
> >> separated out into a separate file in order to allow it to be shared
> >> between multiple tracers, but isn't meant to be generally used outside
> >> of that context.
> >>
> >> The tracing_map implementation was inspired by lock-free map
> >> algorithms originated by Dr. Cliff Click:
> >>
> >> http://www.azulsystems.com/blog/cliff/2007-03-26-non-blocking-hashtable
> >> http://www.azulsystems.com/events/javaone_2007/2007_LockFreeHash.pdf
>
> Hi Tom,
>
> First question: what is the rationale for implementing another
> hash table from scratch here ? What is missing in the pre-existing
> hash table implementations ?
>

None of the other hash tables allow for lock-free insertion (and I
didn't see an easy way to add it).

> Moreover, you might want to handle the case where jhash() returns
> 0. AFAIU, there is a race on "insert" in this scenario.
>

You're right, in that case you'd accidentally overwrite an already
claimed slot. Thanks for pointing that out.
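
One minimal way to close that hole (just a sketch, not something the
posted patch does) would be to never allow a hashed key of 0, since 0
is what marks a free slot:

        key_hash = jhash(key, map->key_size, 0);
        if (key_hash == 0)
                key_hash = 1;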

Tom

> Thanks,
>
> Mathieu
>
> >>
> >> Signed-off-by: Tom Zanussi <[email protected]>
> >> ---
> >> kernel/trace/Makefile | 1 +
> >> kernel/trace/tracing_map.c | 935 +++++++++++++++++++++++++++++++++++++++++++++
> >> kernel/trace/tracing_map.h | 258 +++++++++++++
> >> 3 files changed, 1194 insertions(+)
> >> create mode 100644 kernel/trace/tracing_map.c
> >> create mode 100644 kernel/trace/tracing_map.h
> >>
> >> diff --git a/kernel/trace/Makefile b/kernel/trace/Makefile
> >> index 9b1044e..3b26cfb 100644
> >> --- a/kernel/trace/Makefile
> >> +++ b/kernel/trace/Makefile
> >> @@ -31,6 +31,7 @@ obj-$(CONFIG_TRACING) += trace_output.o
> >> obj-$(CONFIG_TRACING) += trace_seq.o
> >> obj-$(CONFIG_TRACING) += trace_stat.o
> >> obj-$(CONFIG_TRACING) += trace_printk.o
> >> +obj-$(CONFIG_TRACING) += tracing_map.o
> >> obj-$(CONFIG_CONTEXT_SWITCH_TRACER) += trace_sched_switch.o
> >> obj-$(CONFIG_FUNCTION_TRACER) += trace_functions.o
> >> obj-$(CONFIG_IRQSOFF_TRACER) += trace_irqsoff.o
> >> diff --git a/kernel/trace/tracing_map.c b/kernel/trace/tracing_map.c
> >> new file mode 100644
> >> index 0000000..a505025
> >> --- /dev/null
> >> +++ b/kernel/trace/tracing_map.c
> >> @@ -0,0 +1,935 @@
> >> +/*
> >> + * tracing_map - lock-free map for tracing
> >> + *
> >> + * This program is free software; you can redistribute it and/or modify
> >> + * it under the terms of the GNU General Public License as published by
> >> + * the Free Software Foundation; either version 2 of the License, or
> >> + * (at your option) any later version.
> >> + *
> >> + * This program is distributed in the hope that it will be useful,
> >> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> >> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
> >> + * GNU General Public License for more details.
> >> + *
> >> + * Copyright (C) 2015 Tom Zanussi <[email protected]>
> >> + *
> >> + * tracing_map implementation inspired by lock-free map algorithms
> >> + * originated by Dr. Cliff Click:
> >> + *
> >> + * http://www.azulsystems.com/blog/cliff/2007-03-26-non-blocking-hashtable
> >> + * http://www.azulsystems.com/events/javaone_2007/2007_LockFreeHash.pdf
> >> + */
> >> +
> >> +#include <linux/slab.h>
> >> +#include <linux/jhash.h>
> >> +#include <linux/sort.h>
> >> +
> >> +#include "tracing_map.h"
> >> +#include "trace.h"
> >> +
> >> +/*
> >> + * NOTE: For a detailed description of the data structures used by
> >> + * these functions (such as tracing_map_elt) please see the overview
> >> + * of tracing_map data structures at the beginning of tracing_map.h.
> >> + */
> >> +
> >> +/**
> >> + * tracing_map_update_sum - Add a value to a tracing_map_elt's sum field
> >> + * @elt: The tracing_map_elt
> >> + * @i: The index of the given sum associated with the tracing_map_elt
> >> + * @n: The value to add to the sum
> >> + *
> >> + * Add n to sum i associated with the specified tracing_map_elt
> >> + * instance. The index i is the index returned by the call to
> >> + * tracing_map_add_sum_field() when the tracing map was set up.
> >> + */
> >> +void tracing_map_update_sum(struct tracing_map_elt *elt, unsigned int i, u64 n)
> >> +{
> >> + atomic64_add(n, &elt->fields[i].sum);
> >> +}
> >> +
> >> +/**
> >> + * tracing_map_read_sum - Return the value of a tracing_map_elt's sum field
> >> + * @elt: The tracing_map_elt
> >> + * @i: The index of the given sum associated with the tracing_map_elt
> >> + *
> >> + * Retrieve the value of the sum i associated with the specified
> >> + * tracing_map_elt instance. The index i is the index returned by the
> >> + * call to tracing_map_add_sum_field() when the tracing map was set
> >> + * up.
> >> + *
> >> + * Return: The sum associated with field i for elt.
> >> + */
> >> +u64 tracing_map_read_sum(struct tracing_map_elt *elt, unsigned int i)
> >> +{
> >> + return (u64)atomic64_read(&elt->fields[i].sum);
> >> +}
> >> +
> >> +int tracing_map_cmp_string(void *val_a, void *val_b)
> >> +{
> >> + char *a = val_a;
> >> + char *b = val_b;
> >> +
> >> + return strcmp(a, b);
> >> +}
> >> +
> >> +int tracing_map_cmp_none(void *val_a, void *val_b)
> >> +{
> >> + return 0;
> >> +}
> >> +
> >> +static int tracing_map_cmp_atomic64(void *val_a, void *val_b)
> >> +{
> >> + u64 a = atomic64_read((atomic64_t *)val_a);
> >> + u64 b = atomic64_read((atomic64_t *)val_b);
> >> +
> >> + return (a > b) ? 1 : ((a < b) ? -1 : 0);
> >> +}
> >> +
> >> +#define DEFINE_TRACING_MAP_CMP_FN(type) \
> >> +static int tracing_map_cmp_##type(void *val_a, void *val_b) \
> >> +{ \
> >> + type a = *(type *)val_a; \
> >> + type b = *(type *)val_b; \
> >> + \
> >> + return (a > b) ? 1 : ((a < b) ? -1 : 0); \
> >> +}
> >> +
> >> +DEFINE_TRACING_MAP_CMP_FN(s64);
> >> +DEFINE_TRACING_MAP_CMP_FN(u64);
> >> +DEFINE_TRACING_MAP_CMP_FN(s32);
> >> +DEFINE_TRACING_MAP_CMP_FN(u32);
> >> +DEFINE_TRACING_MAP_CMP_FN(s16);
> >> +DEFINE_TRACING_MAP_CMP_FN(u16);
> >> +DEFINE_TRACING_MAP_CMP_FN(s8);
> >> +DEFINE_TRACING_MAP_CMP_FN(u8);
> >> +
> >> +tracing_map_cmp_fn_t tracing_map_cmp_num(int field_size,
> >> + int field_is_signed)
> >> +{
> >> + tracing_map_cmp_fn_t fn = tracing_map_cmp_none;
> >> +
> >> + switch (field_size) {
> >> + case 8:
> >> + if (field_is_signed)
> >> + fn = tracing_map_cmp_s64;
> >> + else
> >> + fn = tracing_map_cmp_u64;
> >> + break;
> >> + case 4:
> >> + if (field_is_signed)
> >> + fn = tracing_map_cmp_s32;
> >> + else
> >> + fn = tracing_map_cmp_u32;
> >> + break;
> >> + case 2:
> >> + if (field_is_signed)
> >> + fn = tracing_map_cmp_s16;
> >> + else
> >> + fn = tracing_map_cmp_u16;
> >> + break;
> >> + case 1:
> >> + if (field_is_signed)
> >> + fn = tracing_map_cmp_s8;
> >> + else
> >> + fn = tracing_map_cmp_u8;
> >> + break;
> >> + }
> >> +
> >> + return fn;
> >> +}
> >> +
> >> +static int tracing_map_add_field(struct tracing_map *map,
> >> + tracing_map_cmp_fn_t cmp_fn)
> >> +{
> >> + int ret = -EINVAL;
> >> +
> >> + if (map->n_fields < TRACING_MAP_FIELDS_MAX) {
> >> + ret = map->n_fields;
> >> + map->fields[map->n_fields++].cmp_fn = cmp_fn;
> >> + }
> >> +
> >> + return ret;
> >> +}
> >> +
> >> +/**
> >> + * tracing_map_add_sum_field - Add a field describing a tracing_map sum
> >> + * @map: The tracing_map
> >> + *
> >> + * Add a sum field to the key and return the index identifying it in
> >> + * the map and associated tracing_map_elts. This is the index used
> >> + * for instance to update a sum for a particular tracing_map_elt using
> >> + * tracing_map_update_sum() or reading it via tracing_map_read_sum().
> >> + *
> >> + * Return: The index identifying the field in the map and associated
> >> + * tracing_map_elts.
> >> + */
> >> +int tracing_map_add_sum_field(struct tracing_map *map)
> >> +{
> >> + return tracing_map_add_field(map, tracing_map_cmp_atomic64);
> >> +}
> >> +
> >> +/**
> >> + * tracing_map_add_key_field - Add a field describing a tracing_map key
> >> + * @map: The tracing_map
> >> + * @offset: The offset within the key
> >> + * @cmp_fn: The comparison function that will be used to sort on the key
> >> + *
> >> + * Let the map know there is a key and that if it's used as a sort key
> >> + * to use cmp_fn.
> >> + *
> >> + * A key can be a subset of a compound key; for that purpose, the
> >> + * offset param is used to describe where within the the compound key
> >> + * the key referenced by this key field resides.
> >> + *
> >> + * Return: The index identifying the field in the map and associated
> >> + * tracing_map_elts.
> >> + */
> >> +int tracing_map_add_key_field(struct tracing_map *map,
> >> + unsigned int offset,
> >> + tracing_map_cmp_fn_t cmp_fn)
> >> +
> >> +{
> >> + int idx = tracing_map_add_field(map, cmp_fn);
> >> +
> >> + if (idx < 0)
> >> + return idx;
> >> +
> >> + map->fields[idx].offset = offset;
> >> +
> >> + map->key_idx[map->n_keys++] = idx;
> >> +
> >> + return idx;
> >> +}
> >> +
> >> +static void tracing_map_elt_clear(struct tracing_map_elt *elt)
> >> +{
> >> + unsigned i;
> >> +
> >> + for (i = 0; i < elt->map->n_fields; i++)
> >> + if (elt->fields[i].cmp_fn == tracing_map_cmp_atomic64)
> >> + atomic64_set(&elt->fields[i].sum, 0);
> >> +
> >> + if (elt->map->ops && elt->map->ops->elt_clear)
> >> + elt->map->ops->elt_clear(elt);
> >> +}
> >> +
> >> +static void tracing_map_elt_init_fields(struct tracing_map_elt *elt)
> >> +{
> >> + unsigned int i;
> >> +
> >> + tracing_map_elt_clear(elt);
> >> +
> >> + for (i = 0; i < elt->map->n_fields; i++) {
> >> + elt->fields[i].cmp_fn = elt->map->fields[i].cmp_fn;
> >> +
> >> + if (elt->fields[i].cmp_fn != tracing_map_cmp_atomic64)
> >> + elt->fields[i].offset = elt->map->fields[i].offset;
> >> + }
> >> +}
> >> +
> >> +static void tracing_map_elt_free(struct tracing_map_elt *elt)
> >> +{
> >> + if (!elt)
> >> + return;
> >> +
> >> + if (elt->map->ops && elt->map->ops->elt_free)
> >> + elt->map->ops->elt_free(elt);
> >> + kfree(elt->fields);
> >> + kfree(elt->key);
> >> + kfree(elt);
> >> +}
> >> +
> >> +static struct tracing_map_elt *tracing_map_elt_alloc(struct tracing_map *map)
> >> +{
> >> + struct tracing_map_elt *elt;
> >> + int err = 0;
> >> +
> >> + elt = kzalloc(sizeof(*elt), GFP_KERNEL);
> >> + if (!elt)
> >> + return ERR_PTR(-ENOMEM);
> >> +
> >> + elt->map = map;
> >> +
> >> + elt->key = kzalloc(map->key_size, GFP_KERNEL);
> >> + if (!elt->key) {
> >> + err = -ENOMEM;
> >> + goto free;
> >> + }
> >> +
> >> + elt->fields = kcalloc(map->n_fields, sizeof(*elt->fields), GFP_KERNEL);
> >> + if (!elt->fields) {
> >> + err = -ENOMEM;
> >> + goto free;
> >> + }
> >> +
> >> + tracing_map_elt_init_fields(elt);
> >> +
> >> + if (map->ops && map->ops->elt_alloc) {
> >> + err = map->ops->elt_alloc(elt);
> >> + if (err)
> >> + goto free;
> >> + }
> >> + return elt;
> >> + free:
> >> + tracing_map_elt_free(elt);
> >> +
> >> + return ERR_PTR(err);
> >> +}
> >> +
> >> +static struct tracing_map_elt *get_free_elt(struct tracing_map *map)
> >> +{
> >> + struct tracing_map_elt *elt = NULL;
> >> + int idx;
> >> +
> >> + idx = atomic_inc_return(&map->next_elt);
> >> + if (idx < map->max_elts) {
> >> + elt = map->elts[idx];
> >> + if (map->ops && map->ops->elt_init)
> >> + map->ops->elt_init(elt);
> >> + }
> >> +
> >> + return elt;
> >> +}
> >> +
> >> +static void tracing_map_free_elts(struct tracing_map *map)
> >> +{
> >> + unsigned int i;
> >> +
> >> + if (!map->elts)
> >> + return;
> >> +
> >> + for (i = 0; i < map->max_elts; i++)
> >> + tracing_map_elt_free(map->elts[i]);
> >> +
> >> + kfree(map->elts);
> >> +}
> >> +
> >> +static int tracing_map_alloc_elts(struct tracing_map *map)
> >> +{
> >> + unsigned int i;
> >> +
> >> + map->elts = kcalloc(map->max_elts, sizeof(struct tracing_map_elt *),
> >> + GFP_KERNEL);
> >> + if (!map->elts)
> >> + return -ENOMEM;
> >> +
> >> + for (i = 0; i < map->max_elts; i++) {
> >> + map->elts[i] = tracing_map_elt_alloc(map);
> >> + if (!map->elts[i]) {
> >> + tracing_map_free_elts(map);
> >> +
> >> + return -ENOMEM;
> >> + }
> >> + }
> >> +
> >> + return 0;
> >> +}
> >> +
> >> +static inline bool keys_match(void *key, void *test_key, unsigned key_size)
> >> +{
> >> + bool match = true;
> >> +
> >> + if (memcmp(key, test_key, key_size))
> >> + match = false;
> >> +
> >> + return match;
> >> +}
> >> +
> >> +/**
> >> + * tracing_map_insert - Insert key and/or retrieve val from a tracing_map
> >> + * @map: The tracing_map to insert into
> >> + * @key: The key to insert
> >> + *
> >> + * Inserts a key into a tracing_map and creates and returns a new
> >> + * tracing_map_elt for it, or if the key has already been inserted by
> >> + * a previous call, returns the tracing_map_elt already associated
> >> + * with it. When the map was created, the number of elements to be
> >> + * allocated for the map was specified (internally maintained as
> >> + * 'max_elts' in struct tracing_map), and that number of
> >> + * tracing_map_elts was created by tracing_map_init(). This is the
> >> + * pre-allocated pool of tracing_map_elts that tracing_map_insert()
> >> + * will allocate from when adding new keys. Once that pool is
> >> + * exhausted, tracing_map_insert() is useless and will return NULL to
> >> + * signal that state.
> >> + *
> >> + * This is a lock-free tracing map insertion function implementing a
> >> + * modified form of Cliff Click's basic insertion algorithm. It
> >> + * requires the table size be a power of two. To prevent any
> >> + * possibility of an infinite loop we always make the internal table
> >> + * size double the size of the requested table size (max_elts * 2).
> >> + * Likewise, we never reuse a slot or resize or delete elements - when
> >> + * we've reached max_elts entries, we simply return NULL once we've
> >> + * run out of entries. Readers can at any point in time traverse the
> >> + * tracing map and safely access the key/val pairs.
> >> + *
> >> + * Return: the tracing_map_elt pointer val associated with the key.
> >> + * If this was a newly inserted key, the val will be a newly allocated
> >> + * and associated tracing_map_elt pointer val. If the key wasn't
> >> + * found and the pool of tracing_map_elts has been exhausted, NULL is
> >> + * returned and no further insertions will succeed.
> >> + */
> >> +struct tracing_map_elt *tracing_map_insert(struct tracing_map *map, void *key)
> >> +{
> >> + u32 idx, key_hash, test_key;
> >> +
> >> + key_hash = jhash(key, map->key_size, 0);
> >> + idx = key_hash >> (32 - (map->map_bits + 1));
> >> +
> >> + while (1) {
> >> + idx &= (map->map_size - 1);
> >> + test_key = map->map[idx].key;
> >> +
> >> + if (test_key && test_key == key_hash && map->map[idx].val &&
> >> + keys_match(key, map->map[idx].val->key, map->key_size))
> >> + return map->map[idx].val;
> >> +
> >> + if (!test_key && !cmpxchg(&map->map[idx].key, 0, key_hash)) {
> >> + struct tracing_map_elt *elt;
> >> +
> >> + elt = get_free_elt(map);
> >> + if (!elt)
> >> + break;
> >> + memcpy(elt->key, key, map->key_size);
> >> + map->map[idx].val = elt;
> >> +
> >> + return map->map[idx].val;
> >> + }
> >> + idx++;
> >> + }
> >> +
> >> + return NULL;
> >> +}
> >> +
> >> +/**
> >> + * tracing_map_destroy - Destroy a tracing_map
> >> + * @map: The tracing_map to destroy
> >> + *
> >> + * Frees a tracing_map along with its associated array of
> >> + * tracing_map_elts.
> >> + *
> >> + * Callers should make sure there are no readers or writers actively
> >> + * reading or inserting into the map before calling this.
> >> + */
> >> +void tracing_map_destroy(struct tracing_map *map)
> >> +{
> >> + if (!map)
> >> + return;
> >> +
> >> + tracing_map_free_elts(map);
> >> +
> >> + kfree(map->map);
> >> + kfree(map);
> >> +}
> >> +
> >> +/**
> >> + * tracing_map_clear - Clear a tracing_map
> >> + * @map: The tracing_map to clear
> >> + *
> >> + * Resets the tracing map to a cleared or initial state. The
> >> + * tracing_map_elts are all cleared, and the array of struct
> >> + * tracing_map_entry is reset to an initialized state.
> >> + *
> >> + * Callers should make sure there are no writers actively inserting
> >> + * into the map before calling this.
> >> + */
> >> +void tracing_map_clear(struct tracing_map *map)
> >> +{
> >> + unsigned int i, size;
> >> +
> >> + atomic_set(&map->next_elt, -1);
> >> +
> >> + size = map->map_size * sizeof(struct tracing_map_entry);
> >> + memset(map->map, 0, size);
> >> +
> >> + for (i = 0; i < map->max_elts; i++)
> >> + tracing_map_elt_clear(map->elts[i]);
> >> +}
> >> +
> >> +static void set_sort_key(struct tracing_map *map,
> >> + struct tracing_map_sort_key *sort_key)
> >> +{
> >> + map->sort_key = *sort_key;
> >> +}
> >> +
> >> +/**
> >> + * tracing_map_create - Create a lock-free map and element pool
> >> + * @map_bits: The size of the map (2 ** map_bits)
> >> + * @key_size: The size of the key for the map in bytes
> >> + * @ops: Optional client-defined tracing_map_ops instance
> >> + * @private_data: Client data associated with the map
> >> + *
> >> + * Creates and sets up a map to contain 2 ** map_bits number of
> >> + * elements (internally maintained as 'max_elts' in struct
> >> + * tracing_map). Before using, map fields should be added to the map
> >> + * with tracing_map_add_sum_field() and tracing_map_add_key_field().
> >> + * tracing_map_init() should then be called to allocate the array of
> >> + * tracing_map_elts, in order to avoid allocating anything in the map
> >> + * insertion path. The user-specified map size reflects the maximum
> >> + * number of elements that can be contained in the table requested by
> >> + * the user - internally we double that in order to keep the table
> >> + * sparse and keep collisions manageable.
> >> + *
> >> + * A tracing_map is a special-purpose map designed to aggregate or
> >> + * 'sum' one or more values associated with a specific object of type
> >> + * tracing_map_elt, which is attached by the map to a given key.
> >> + *
> >> + * tracing_map_create() sets up the map itself, and provides
> >> + * operations for inserting tracing_map_elts, but doesn't allocate the
> >> + * tracing_map_elts themselves, or provide a means for describing the
> >> + * keys or sums associated with the tracing_map_elts. All
> >> + * tracing_map_elts for a given map have the same set of sums and
> >> + * keys, which are defined by the client using the functions
> >> + * tracing_map_add_key_field() and tracing_map_add_sum_field(). Once
> >> + * the fields are defined, the pool of elements allocated for the map
> >> + * can be created, which occurs when the client code calls
> >> + * tracing_map_init().
> >> + *
> >> + * When tracing_map_init() returns, tracing_map_elt elements can be
> >> + * inserted into the map using tracing_map_insert(). When called,
> >> + * tracing_map_insert() grabs a free tracing_map_elt from the pool, or
> >> + * finds an existing match in the map and in either case returns it.
> >> + * The client can then use tracing_map_update_sum() and
> >> + * tracing_map_read_sum() to update or read a given sum field for the
> >> + * tracing_map_elt.
> >> + *
> >> + * The client can at any point retrieve and traverse the current set
> >> + * of inserted tracing_map_elts in a tracing_map, via
> >> + * tracing_map_sort_entries(). Sorting can be done on any field,
> >> + * including keys.
> >> + *
> >> + * See tracing_map.h for a description of tracing_map_ops.
> >> + *
> >> + * Return: the tracing_map pointer if successful, ERR_PTR if not.
> >> + */
> >> +struct tracing_map *tracing_map_create(unsigned int map_bits,
> >> + unsigned int key_size,
> >> + struct tracing_map_ops *ops,
> >> + void *private_data)
> >> +{
> >> + struct tracing_map *map;
> >> + unsigned int i;
> >> +
> >> + if (map_bits < TRACING_MAP_BITS_MIN ||
> >> + map_bits > TRACING_MAP_BITS_MAX)
> >> + return ERR_PTR(-EINVAL);
> >> +
> >> + map = kzalloc(sizeof(*map), GFP_KERNEL);
> >> + if (!map)
> >> + return ERR_PTR(-ENOMEM);
> >> +
> >> + map->map_bits = map_bits;
> >> + map->max_elts = (1 << map_bits);
> >> + atomic_set(&map->next_elt, -1);
> >> +
> >> + map->map_size = (1 << (map_bits + 1));
> >> + map->ops = ops;
> >> +
> >> + map->private_data = private_data;
> >> +
> >> + map->map = kcalloc(map->map_size, sizeof(struct tracing_map_entry),
> >> + GFP_KERNEL);
> >> + if (!map->map)
> >> + goto free;
> >> +
> >> + map->key_size = key_size;
> >> + for (i = 0; i < TRACING_MAP_KEYS_MAX; i++)
> >> + map->key_idx[i] = -1;
> >> + out:
> >> + return map;
> >> + free:
> >> + tracing_map_destroy(map);
> >> + map = ERR_PTR(-ENOMEM);
> >> +
> >> + goto out;
> >> +}
> >> +
> >> +/**
> >> + * tracing_map_init - Allocate and clear a map's tracing_map_elts
> >> + * @map: The tracing_map to initialize
> >> + *
> >> + * Allocates and clears a pool of tracing_map_elts equal to the
> >> + * user-specified size of 2 ** map_bits (internally maintained as
> >> + * 'max_elts' in struct tracing_map). Before using, the map fields
> >> + * should be added to the map with tracing_map_add_sum_field() and
> >> + * tracing_map_add_key_field(). tracing_map_init() should then be
> >> + * called to allocate the array of tracing_map_elts, in order to avoid
> >> + * allocating anything in the map insertion path. The user-specified
> >> + * map size reflects the max number of elements requested by the user
> >> + * - internally we double that in order to keep the table sparse and
> >> + * keep collisions manageable.
> >> + *
> >> + * See tracing_map.h for a description of tracing_map_ops.
> >> + *
> >> + * Return: 0 if successful, a negative error code if not.
> >> + */
> >> +int tracing_map_init(struct tracing_map *map)
> >> +{
> >> + int err;
> >> +
> >> + if (map->n_fields < 2)
> >> + return -EINVAL; /* need at least 1 key and 1 val */
> >> +
> >> + err = tracing_map_alloc_elts(map);
> >> + if (err)
> >> + return err;
> >> +
> >> + tracing_map_clear(map);
> >> +
> >> + return err;
> >> +}
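
As a usage illustration for anyone following along, here's a minimal
sketch of the setup sequence the comments above describe (client code,
not part of the patch; the names and the u32 pid key are hypothetical):

	struct tracing_map *map;
	int key_idx, hits_idx, err;

	/* up to 2^7 = 128 elements, keyed by a single u32 (e.g. a pid) */
	map = tracing_map_create(7, sizeof(u32), NULL, NULL);
	if (IS_ERR(map))
		return PTR_ERR(map);

	/* one key field at offset 0, compared as an unsigned 32-bit num */
	key_idx = tracing_map_add_key_field(map, 0,
					    tracing_map_cmp_num(sizeof(u32), 0));
	/* one aggregated sum */
	hits_idx = tracing_map_add_sum_field(map);

	/* only now can the element pool be allocated */
	err = tracing_map_init(map);
	if (err) {
		tracing_map_destroy(map);
		return err;
	}
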
> >> +
> >> +static int cmp_entries_dup(const struct tracing_map_sort_entry **a,
> >> + const struct tracing_map_sort_entry **b)
> >> +{
> >> + /*
> >> + * sort() expects an anti-symmetric comparator; return the raw
> >> + * memcmp() result rather than 0/1 so that equal keys compare
> >> + * equal and unequal keys order consistently.
> >> + */
> >> + return memcmp((*a)->key, (*b)->key, (*a)->elt->map->key_size);
> >> +}
> >> +
> >> +static int cmp_entries_sum(const struct tracing_map_sort_entry **a,
> >> + const struct tracing_map_sort_entry **b)
> >> +{
> >> + const struct tracing_map_elt *elt_a, *elt_b;
> >> + struct tracing_map_sort_key *sort_key;
> >> + struct tracing_map_field *field;
> >> + tracing_map_cmp_fn_t cmp_fn;
> >> + void *val_a, *val_b;
> >> + int ret = 0;
> >> +
> >> + elt_a = (*a)->elt;
> >> + elt_b = (*b)->elt;
> >> +
> >> + sort_key = &elt_a->map->sort_key;
> >> +
> >> + field = &elt_a->fields[sort_key->field_idx];
> >> + cmp_fn = field->cmp_fn;
> >> +
> >> + val_a = &elt_a->fields[sort_key->field_idx].sum;
> >> + val_b = &elt_b->fields[sort_key->field_idx].sum;
> >> +
> >> + ret = cmp_fn(val_a, val_b);
> >> + if (sort_key->descending)
> >> + ret = -ret;
> >> +
> >> + return ret;
> >> +}
> >> +
> >> +static int cmp_entries_key(const struct tracing_map_sort_entry **a,
> >> + const struct tracing_map_sort_entry **b)
> >> +{
> >> + const struct tracing_map_elt *elt_a, *elt_b;
> >> + struct tracing_map_sort_key *sort_key;
> >> + struct tracing_map_field *field;
> >> + tracing_map_cmp_fn_t cmp_fn;
> >> + void *val_a, *val_b;
> >> + int ret = 0;
> >> +
> >> + elt_a = (*a)->elt;
> >> + elt_b = (*b)->elt;
> >> +
> >> + sort_key = &elt_a->map->sort_key;
> >> +
> >> + field = &elt_a->fields[sort_key->field_idx];
> >> +
> >> + cmp_fn = field->cmp_fn;
> >> +
> >> + val_a = elt_a->key + field->offset;
> >> + val_b = elt_b->key + field->offset;
> >> +
> >> + ret = cmp_fn(val_a, val_b);
> >> + if (sort_key->descending)
> >> + ret = -ret;
> >> +
> >> + return ret;
> >> +}
> >> +
> >> +static void destroy_sort_entry(struct tracing_map_sort_entry *entry)
> >> +{
> >> + if (!entry)
> >> + return;
> >> +
> >> + if (entry->elt_copied)
> >> + tracing_map_elt_free(entry->elt);
> >> +
> >> + kfree(entry);
> >> +}
> >> +
> >> +/**
> >> + * tracing_map_destroy_sort_entries - Destroy a tracing_map_sort_entries() array
> >> + * @entries: The entries to destroy
> >> + * @n_entries: The number of entries in the array
> >> + *
> >> + * Destroy the elements returned by a tracing_map_sort_entries() call.
> >> + */
> >> +void tracing_map_destroy_sort_entries(struct tracing_map_sort_entry **entries,
> >> + unsigned int n_entries)
> >> +{
> >> + unsigned int i;
> >> +
> >> + for (i = 0; i < n_entries; i++)
> >> + destroy_sort_entry(entries[i]);
> >> +}
> >> +
> >> +static struct tracing_map_sort_entry *
> >> +create_sort_entry(void *key, struct tracing_map_elt *elt)
> >> +{
> >> + struct tracing_map_sort_entry *sort_entry;
> >> +
> >> + sort_entry = kzalloc(sizeof(*sort_entry), GFP_KERNEL);
> >> + if (!sort_entry)
> >> + return NULL;
> >> +
> >> + sort_entry->key = key;
> >> + sort_entry->elt = elt;
> >> +
> >> + return sort_entry;
> >> +}
> >> +
> >> +static struct tracing_map_elt *copy_elt(struct tracing_map_elt *elt)
> >> +{
> >> + struct tracing_map_elt *dup_elt;
> >> + unsigned int i;
> >> +
> >> + dup_elt = tracing_map_elt_alloc(elt->map);
> >> + if (!dup_elt)
> >> + return NULL;
> >> +
> >> + if (elt->map->ops && elt->map->ops->elt_copy)
> >> + elt->map->ops->elt_copy(dup_elt, elt);
> >> +
> >> + dup_elt->private_data = elt->private_data;
> >> + memcpy(dup_elt->key, elt->key, elt->map->key_size);
> >> +
> >> + for (i = 0; i < elt->map->n_fields; i++) {
> >> + atomic64_set(&dup_elt->fields[i].sum,
> >> + atomic64_read(&elt->fields[i].sum));
> >> + dup_elt->fields[i].cmp_fn = elt->fields[i].cmp_fn;
> >> + }
> >> +
> >> + return dup_elt;
> >> +}
> >> +
> >> +static int merge_dup(struct tracing_map_sort_entry **sort_entries,
> >> + unsigned int target, unsigned int dup)
> >> +{
> >> + struct tracing_map_elt *target_elt, *elt;
> >> + bool first_dup = (target - dup) == 1;
> >> + int i;
> >> +
> >> + if (first_dup) {
> >> + elt = sort_entries[target]->elt;
> >> + target_elt = copy_elt(elt);
> >> + if (!target_elt)
> >> + return -ENOMEM;
> >> + sort_entries[target]->elt = target_elt;
> >> + sort_entries[target]->elt_copied = true;
> >> + } else
> >> + target_elt = sort_entries[target]->elt;
> >> +
> >> + elt = sort_entries[dup]->elt;
> >> +
> >> + for (i = 0; i < elt->map->n_fields; i++)
> >> + atomic64_add(atomic64_read(&elt->fields[i].sum),
> >> + &target_elt->fields[i].sum);
> >> +
> >> + sort_entries[dup]->dup = true;
> >> +
> >> + return 0;
> >> +}
> >> +
> >> +static int merge_dups(struct tracing_map_sort_entry **sort_entries,
> >> + int n_entries, unsigned int key_size)
> >> +{
> >> + unsigned int dups = 0, total_dups = 0;
> >> + int err, i, j;
> >> + void *key;
> >> +
> >> + if (n_entries < 2)
> >> + return total_dups;
> >> +
> >> + sort(sort_entries, n_entries, sizeof(struct tracing_map_sort_entry *),
> >> + (int (*)(const void *, const void *))cmp_entries_dup, NULL);
> >> +
> >> + key = sort_entries[0]->key;
> >> + for (i = 1; i < n_entries; i++) {
> >> + if (!memcmp(sort_entries[i]->key, key, key_size)) {
> >> + dups++; total_dups++;
> >> + err = merge_dup(sort_entries, i - dups, i);
> >> + if (err)
> >> + return err;
> >> + continue;
> >> + }
> >> + key = sort_entries[i]->key;
> >> + dups = 0;
> >> + }
> >> +
> >> + if (!total_dups)
> >> + return total_dups;
> >> +
> >> + for (i = 0, j = 0; i < n_entries; i++) {
> >> + if (!sort_entries[i]->dup) {
> >> + sort_entries[j] = sort_entries[i];
> >> + if (j++ != i)
> >> + sort_entries[i] = NULL;
> >> + } else {
> >> + destroy_sort_entry(sort_entries[i]);
> >> + sort_entries[i] = NULL;
> >> + }
> >> + }
> >> +
> >> + return total_dups;
> >> +}
> >> +
> >> +static bool is_key(struct tracing_map *map, unsigned int field_idx)
> >> +{
> >> + unsigned int i;
> >> +
> >> + for (i = 0; i < map->n_keys; i++)
> >> + if (map->key_idx[i] == field_idx)
> >> + return true;
> >> + return false;
> >> +}
> >> +
> >> +static void sort_secondary(struct tracing_map *map,
> >> + const struct tracing_map_sort_entry **entries,
> >> + unsigned int n_entries,
> >> + struct tracing_map_sort_key *primary_key,
> >> + struct tracing_map_sort_key *secondary_key)
> >> +{
> >> + int (*primary_fn)(const struct tracing_map_sort_entry **,
> >> + const struct tracing_map_sort_entry **);
> >> + int (*secondary_fn)(const struct tracing_map_sort_entry **,
> >> + const struct tracing_map_sort_entry **);
> >> + unsigned i, start = 0, n_sub = 1;
> >> +
> >> + if (is_key(map, primary_key->field_idx))
> >> + primary_fn = cmp_entries_key;
> >> + else
> >> + primary_fn = cmp_entries_sum;
> >> +
> >> + if (is_key(map, secondary_key->field_idx))
> >> + secondary_fn = cmp_entries_key;
> >> + else
> >> + secondary_fn = cmp_entries_sum;
> >> +
> >> + for (i = 0; i < n_entries - 1; i++) {
> >> + const struct tracing_map_sort_entry **a = &entries[i];
> >> + const struct tracing_map_sort_entry **b = &entries[i + 1];
> >> +
> >> + if (primary_fn(a, b) == 0) {
> >> + n_sub++;
> >> + if (i < n_entries - 2)
> >> + continue;
> >> + }
> >> +
> >> + if (n_sub < 2) {
> >> + start = i + 1;
> >> + n_sub = 1;
> >> + continue;
> >> + }
> >> +
> >> + set_sort_key(map, secondary_key);
> >> + sort(&entries[start], n_sub,
> >> + sizeof(struct tracing_map_sort_entry *),
> >> + (int (*)(const void *, const void *))secondary_fn, NULL);
> >> + set_sort_key(map, primary_key);
> >> +
> >> + start = i + 1;
> >> + n_sub = 1;
> >> + }
> >> +}
> >> +
> >> +/**
> >> + * tracing_map_sort_entries - Sort the current set of tracing_map_elts in a map
> >> + * @map: The tracing_map
> >> + * @sort_keys: The sort key(s) to use for sorting (primary key first)
> >> + * @n_sort_keys: The number of sort keys
> >> + * @sort_entries: outval: pointer to allocated and sorted array of entries
> >> + *
> >> + * tracing_map_sort_entries() sorts the current set of entries in the
> >> + * map and returns the list of tracing_map_sort_entries containing
> >> + * them to the client in the sort_entries param. The client can
> >> + * access the struct tracing_map_elt element of interest directly as
> >> + * the 'elt' field of a returned struct tracing_map_sort_entry object.
> >> + *
> >> + * Each sort_key has only two fields: field_idx and descending.
> >> + * 'field_idx' refers to the index of the field added via
> >> + * tracing_map_add_sum_field() or tracing_map_add_key_field() when
> >> + * the tracing_map was initialized.
> >> + * 'descending' is a flag that if set reverses the sort order, which
> >> + * by default is ascending.
> >> + *
> >> + * The client should not hold on to the returned array but should use
> >> + * it and call tracing_map_destroy_sort_entries() when done.
> >> + *
> >> + * Return: the number of sort_entries in the struct tracing_map_sort_entry
> >> + * array, negative on error
> >> + */
> >> +int tracing_map_sort_entries(struct tracing_map *map,
> >> + struct tracing_map_sort_key *sort_keys,
> >> + unsigned int n_sort_keys,
> >> + struct tracing_map_sort_entry ***sort_entries)
> >> +{
> >> + int (*cmp_entries_fn)(const struct tracing_map_sort_entry **,
> >> + const struct tracing_map_sort_entry **);
> >> + struct tracing_map_sort_entry *sort_entry, **entries;
> >> + int i, n_entries, ret;
> >> +
> >> + entries = kcalloc(map->max_elts, sizeof(sort_entry), GFP_KERNEL);
> >> + if (!entries)
> >> + return -ENOMEM;
> >> +
> >> + for (i = 0, n_entries = 0; i < map->map_size; i++) {
> >> + if (!map->map[i].key || !map->map[i].val)
> >> + continue;
> >> +
> >> + entries[n_entries] = create_sort_entry(map->map[i].val->key,
> >> + map->map[i].val);
> >> + if (!entries[n_entries++]) {
> >> + ret = -ENOMEM;
> >> + goto free;
> >> + }
> >> + }
> >> +
> >> + if (n_entries == 0) {
> >> + ret = 0;
> >> + goto free;
> >> + }
> >> +
> >> + if (n_entries == 1) {
> >> + *sort_entries = entries;
> >> + return 1;
> >> + }
> >> +
> >> + ret = merge_dups(entries, n_entries, map->key_size);
> >> + if (ret < 0)
> >> + goto free;
> >> + n_entries -= ret;
> >> +
> >> + if (is_key(map, sort_keys[0].field_idx))
> >> + cmp_entries_fn = cmp_entries_key;
> >> + else
> >> + cmp_entries_fn = cmp_entries_sum;
> >> +
> >> + set_sort_key(map, &sort_keys[0]);
> >> +
> >> + sort(entries, n_entries, sizeof(struct tracing_map_sort_entry *),
> >> + (int (*)(const void *, const void *))cmp_entries_fn, NULL);
> >> +
> >> + if (n_sort_keys > 1)
> >> + sort_secondary(map,
> >> + (const struct tracing_map_sort_entry **)entries,
> >> + n_entries,
> >> + &sort_keys[0],
> >> + &sort_keys[1]);
> >> +
> >> + *sort_entries = entries;
> >> +
> >> + return n_entries;
> >> + free:
> >> + tracing_map_destroy_sort_entries(entries, n_entries);
> >> +
> >> + return ret;
> >> +}
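
And the read-out side, as a sketch (again hypothetical client code,
reusing key_idx/hits_idx from the setup sketch above; note that in this
version the entries array itself is still the caller's to free):

	struct tracing_map_sort_entry **entries;
	struct tracing_map_sort_key sort_keys[2];
	int i, n;

	sort_keys[0].field_idx = hits_idx;	/* primary: the sum */
	sort_keys[0].descending = true;
	sort_keys[1].field_idx = key_idx;	/* secondary: the key */
	sort_keys[1].descending = false;

	n = tracing_map_sort_entries(map, sort_keys, 2, &entries);
	if (n <= 0)
		return n;	/* no entries, or an error */

	for (i = 0; i < n; i++)
		pr_info("pid %u: hits %llu\n", *(u32 *)entries[i]->key,
			tracing_map_read_sum(entries[i]->elt, hits_idx));

	tracing_map_destroy_sort_entries(entries, n);
	kfree(entries);
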
> >> diff --git a/kernel/trace/tracing_map.h b/kernel/trace/tracing_map.h
> >> new file mode 100644
> >> index 0000000..2e63c5c
> >> --- /dev/null
> >> +++ b/kernel/trace/tracing_map.h
> >> @@ -0,0 +1,258 @@
> >> +#ifndef __TRACING_MAP_H
> >> +#define __TRACING_MAP_H
> >> +
> >> +#define TRACING_MAP_BITS_DEFAULT 11
> >> +#define TRACING_MAP_BITS_MAX 17
> >> +#define TRACING_MAP_BITS_MIN 7
> >> +
> >> +#define TRACING_MAP_FIELDS_MAX 4
> >> +#define TRACING_MAP_KEYS_MAX 2
> >> +
> >> +#define TRACING_MAP_SORT_KEYS_MAX 2
> >> +
> >> +typedef int (*tracing_map_cmp_fn_t) (void *val_a, void *val_b);
> >> +
> >> +/*
> >> + * This is an overview of the tracing_map data structures and how they
> >> + * relate to the tracing_map API. The details of the algorithms
> >> + * aren't discussed here - this is just a general overview of the data
> >> + * structures and how they interact with the API.
> >> + *
> >> + * The central data structure of the tracing_map is an initially
> >> + * zeroed array of struct tracing_map_entry (stored in the map field
> >> + * of struct tracing_map). tracing_map_entry is a very simple data
> >> + * structure containing only two fields: a 32-bit unsigned 'key'
> >> + * variable and a pointer named 'val'. This array of struct
> >> + * tracing_map_entry is essentially a hash table which will be
> >> + * modified by a single function, tracing_map_insert(), but which can
> >> + * be traversed and read by a user at any time (though the user does
> >> + * this indirectly via an array of tracing_map_sort_entry - see the
> >> + * explanation of that data structure in the discussion of the
> >> + * sorting-related data structures below).
> >> + *
> >> + * The central function of the tracing_map API is
> >> + * tracing_map_insert(). tracing_map_insert() hashes the
> >> + * arbitrarily-sized key passed to it into a 32-bit unsigned key.
> >> + * It then uses this key, truncated to the array size, as an index
> >> + * into the array of tracing_map_entries. If the value of the 'key'
> >> + * field of the tracing_map_entry found at that location is 0, then
> >> + * that entry is considered to be free and can be claimed, by
> >> + * replacing the 0 in the 'key' field of the tracing_map_entry with
> >> + * the new 32-bit hashed key. Once claimed, that tracing_map_entry's
> >> + * 'val' field is then used to store a unique element which will be
> >> + * forever associated with that 32-bit hashed key in the
> >> + * tracing_map_entry.
> >> + *
> >> + * That unique element now in the tracing_map_entry's 'val' field is
> >> + * an instance of tracing_map_elt, where 'elt' in the latter part of
> >> + * that variable name is short for 'element'. The purpose of a
> >> + * tracing_map_elt is to hold values specific to the particular
> >> + * 32-bit hashed key it's associated with: things such as the unique
> >> + * set of aggregated sums associated with the 32-bit hashed key, along
> >> + * with a copy of the full key associated with the entry, which
> >> + * was used to produce the 32-bit hashed key.
> >> + *
> >> + * When tracing_map_create() is called to create the tracing map, the
> >> + * user specifies (indirectly via the map_bits param, the details are
> >> + * unimportant for this discussion) the maximum number of elements
> >> + * that the map can hold (stored in the max_elts field of struct
> >> + * tracing_map). This is the maximum possible number of
> >> + * tracing_map_entries in the tracing_map_entry array which can be
> >> + * 'claimed' as described in the above discussion, and therefore is
> >> + * also the maximum number of tracing_map_elts that can be associated
> >> + * with the tracing_map_entry array in the tracing_map. Because of
> >> + * the way the insertion algorithm works, the size of the allocated
> >> + * tracing_map_entry array is always twice the maximum number of
> >> + * elements (2 * max_elts). This value is stored in the map_size
> >> + * field of struct tracing_map.
> >> + *
> >> + * Because tracing_map_insert() needs to work from any context,
> >> + * including from within the memory allocation functions themselves,
> >> + * both the tracing_map_entry array and a pool of max_elts
> >> + * tracing_map_elts are pre-allocated before any call is made to
> >> + * tracing_map_insert().
> >> + *
> >> + * The tracing_map_entry array is allocated as a single block by
> >> + * tracing_map_create().
> >> + *
> >> + * Because the tracing_map_elts are much larger objects and can't
> >> + * generally be allocated together as a single large array without
> >> + * failure, they're allocated individually, by tracing_map_init().
> >> + *
> >> + * The pool of tracing_map_elts are allocated by tracing_map_init()
> >> + * rather than by tracing_map_create() because at the time
> >> + * tracing_map_create() is called, there isn't enough information to
> >> + * create the tracing_map_elts. Specifically, the user first needs to
> >> + * tell the tracing_map implementation how many fields the
> >> + * tracing_map_elts contain, and which types of fields they are (key
> >> + * or sum). The user does this via the tracing_map_add_sum_field()
> >> + * and tracing_map_add_key_field() functions, following which the user
> >> + * calls tracing_map_init() to finish up the tracing map setup. The
> >> + * array holding the pointers which make up the pre-allocated pool of
> >> + * tracing_map_elts is allocated as a single block and is stored in
> >> + * the elts field of struct tracing_map.
> >> + *
> >> + * There is also a set of structures used for sorting that might
> >> + * benefit from some minimal explanation.
> >> + *
> >> + * struct tracing_map_sort_key is used to drive the sort at any given
> >> + * time. By 'any given time' we mean that a different
> >> + * tracing_map_sort_key will be used at different times depending on
> >> + * whether the sort currently being performed is a primary or a
> >> + * secondary sort.
> >> + *
> >> + * The sort key is very simple, consisting of the field index of the
> >> + * tracing_map_elt field to sort on (which the user saved when adding
> >> + * the field), and whether the sort should be done in an ascending or
> >> + * descending order.
> >> + *
> >> + * For the convenience of the sorting code, a tracing_map_sort_entry
> >> + * is created for each tracing_map_elt, again individually allocated
> >> + * to avoid failures that might be expected if allocated as a single
> >> + * large array of struct tracing_map_sort_entry.
> >> + * tracing_map_sort_entry instances are the objects expected by the
> >> + * various internal sorting functions, and are also what the user
> >> + * ultimately receives after calling tracing_map_sort_entries().
> >> + * Because it doesn't make sense for users to access an unordered and
> >> + * sparsely populated tracing_map directly, the
> >> + * tracing_map_sort_entries() function is provided so that users can
> >> + * retrieve a sorted list of all existing elements. In addition to
> >> + * the associated tracing_map_elt 'elt' field contained within the
> >> + * tracing_map_sort_entry, which is the object of interest to the
> >> + * user, tracing_map_sort_entry objects contain a number of additional
> >> + * fields which are used for caching and internal purposes and can
> >> + * safely be ignored.
> >> + */
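
In miniature, the two-level structure described above (illustrative
only; a claimed entry's hashed key points at one of the pre-allocated
elements):

	map->map[] (map_size = 2 * max_elts)	map->elts[] (max_elts)
	[ key = 0,     val = NULL   ]		[ elt 0 ]
	[ key = h(k1), val = elt 2 -]---------->[ elt 2: full key, sums ]
	[ key = 0,     val = NULL   ]		[ elt 1 ]
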
> >> +
> >> +struct tracing_map_field {
> >> + tracing_map_cmp_fn_t cmp_fn;
> >> + union {
> >> + atomic64_t sum;
> >> + unsigned int offset;
> >> + };
> >> +};
> >> +
> >> +struct tracing_map_elt {
> >> + struct tracing_map *map;
> >> + struct tracing_map_field *fields;
> >> + void *key;
> >> + void *private_data;
> >> +};
> >> +
> >> +struct tracing_map_entry {
> >> + u32 key;
> >> + struct tracing_map_elt *val;
> >> +};
> >> +
> >> +struct tracing_map_sort_key {
> >> + unsigned int field_idx;
> >> + bool descending;
> >> +};
> >> +
> >> +struct tracing_map_sort_entry {
> >> + void *key;
> >> + struct tracing_map_elt *elt;
> >> + bool elt_copied;
> >> + bool dup;
> >> +};
> >> +
> >> +struct tracing_map {
> >> + unsigned int key_size;
> >> + unsigned int map_bits;
> >> + unsigned int map_size;
> >> + unsigned int max_elts;
> >> + atomic_t next_elt;
> >> + struct tracing_map_elt **elts;
> >> + struct tracing_map_entry *map;
> >> + struct tracing_map_ops *ops;
> >> + void *private_data;
> >> + struct tracing_map_field fields[TRACING_MAP_FIELDS_MAX];
> >> + unsigned int n_fields;
> >> + int key_idx[TRACING_MAP_KEYS_MAX];
> >> + unsigned int n_keys;
> >> + struct tracing_map_sort_key sort_key;
> >> +};
> >> +
> >> +/**
> >> + * struct tracing_map_ops - callbacks for tracing_map
> >> + *
> >> + * The methods in this structure define callback functions for various
> >> + * operations on a tracing_map or objects related to a tracing_map.
> >> + *
> >> + * For a detailed description of tracing_map_elt objects please see
> >> + * the overview of tracing_map data structures at the beginning of
> >> + * this file.
> >> + *
> >> + * All the methods below are optional.
> >> + *
> >> + * @elt_alloc: When a tracing_map_elt is allocated, this function, if
> >> + * defined, will be called and gives clients the opportunity to
> >> + * allocate additional data and attach it to the element
> >> + * (tracing_map_elt->private_data is meant for that purpose).
> >> + * Element allocation occurs before tracing begins, when the
> >> + * tracing_map_init() call is made by client code.
> >> + *
> >> + * @elt_copy: At certain points in the lifetime of an element, it may
> >> + * need to be copied. The copy should include a copy of the
> >> + * client-allocated data, which can be copied into the 'to'
> >> + * element from the 'from' element.
> >> + *
> >> + * @elt_free: When a tracing_map_elt is freed, this function is called
> >> + * and allows client-allocated per-element data to be freed.
> >> + *
> >> + * @elt_clear: This callback allows per-element client-defined data to
> >> + * be cleared, if applicable.
> >> + *
> >> + * @elt_init: This callback allows per-element client-defined data to
> >> + * be initialized when used, i.e. when the element is actually
> >> + * claimed by tracing_map_insert() in the context of the map
> >> + * insertion.
> >> + */
> >> +struct tracing_map_ops {
> >> + int (*elt_alloc)(struct tracing_map_elt *elt);
> >> + void (*elt_copy)(struct tracing_map_elt *to,
> >> + struct tracing_map_elt *from);
> >> + void (*elt_free)(struct tracing_map_elt *elt);
> >> + void (*elt_clear)(struct tracing_map_elt *elt);
> >> + void (*elt_init)(struct tracing_map_elt *elt);
> >> +};
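
As an illustration of the ops hooks, a hedged sketch of a client that
hangs private per-element data off ->private_data (all names here are
hypothetical, not part of the patch):

	struct my_elt_data {
		char comm[16];
	};

	static int my_elt_alloc(struct tracing_map_elt *elt)
	{
		/* called from tracing_map_init(), so GFP_KERNEL is fine */
		elt->private_data = kzalloc(sizeof(struct my_elt_data),
					    GFP_KERNEL);
		return elt->private_data ? 0 : -ENOMEM;
	}

	static void my_elt_free(struct tracing_map_elt *elt)
	{
		kfree(elt->private_data);
	}

	static void my_elt_copy(struct tracing_map_elt *to,
				struct tracing_map_elt *from)
	{
		memcpy(to->private_data, from->private_data,
		       sizeof(struct my_elt_data));
	}

	static struct tracing_map_ops my_ops = {
		.elt_alloc	= my_elt_alloc,
		.elt_copy	= my_elt_copy,
		.elt_free	= my_elt_free,
	};

my_ops would then be passed as the ops param of tracing_map_create().
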
> >> +
> >> +extern struct tracing_map *tracing_map_create(unsigned int map_bits,
> >> + unsigned int key_size,
> >> + struct tracing_map_ops *ops,
> >> + void *private_data);
> >> +extern int tracing_map_init(struct tracing_map *map);
> >> +
> >> +extern int tracing_map_add_sum_field(struct tracing_map *map);
> >> +extern int tracing_map_add_key_field(struct tracing_map *map,
> >> + unsigned int offset,
> >> + tracing_map_cmp_fn_t cmp_fn);
> >> +
> >> +extern void tracing_map_destroy(struct tracing_map *map);
> >> +extern void tracing_map_clear(struct tracing_map *map);
> >> +
> >> +extern struct tracing_map_elt *
> >> +tracing_map_insert(struct tracing_map *map, void *key);
> >> +
> >> +extern tracing_map_cmp_fn_t tracing_map_cmp_num(int field_size,
> >> + int field_is_signed);
> >> +extern int tracing_map_cmp_string(void *val_a, void *val_b);
> >> +extern int tracing_map_cmp_none(void *val_a, void *val_b);
> >> +
> >> +extern void tracing_map_update_sum(struct tracing_map_elt *elt,
> >> + unsigned int i, u64 n);
> >> +extern u64 tracing_map_read_sum(struct tracing_map_elt *elt, unsigned int i);
> >> +extern void tracing_map_set_field_descr(struct tracing_map *map,
> >> + unsigned int i,
> >> + unsigned int key_offset,
> >> + tracing_map_cmp_fn_t cmp_fn);
> >> +extern int
> >> +tracing_map_sort_entries(struct tracing_map *map,
> >> + struct tracing_map_sort_key *sort_keys,
> >> + unsigned int n_sort_keys,
> >> + struct tracing_map_sort_entry ***sort_entries);
> >> +
> >> +extern void
> >> +tracing_map_destroy_sort_entries(struct tracing_map_sort_entry **entries,
> >> + unsigned int n_entries);
> >> +#endif /* __TRACING_MAP_H */
> >> --
> >> 1.9.3
> >>
> >
> > --
> > Mathieu Desnoyers
> > EfficiOS Inc.
> > http://www.efficios.com
>

2015-07-17 01:51:51

by Tom Zanussi

[permalink] [raw]
Subject: Re: [PATCH v9 07/22] tracing: Add lock-free tracing_map

On Fri, 2015-07-17 at 00:32 +0200, Peter Zijlstra wrote:
> On Thu, Jul 16, 2015 at 04:41:45PM -0500, Tom Zanussi wrote:
> > On Thu, 2015-07-16 at 19:49 +0200, Peter Zijlstra wrote:
> > > On Thu, Jul 16, 2015 at 12:22:40PM -0500, Tom Zanussi wrote:
> > > > + for (i = 0; i < elt->map->n_fields; i++) {
> > > > + atomic64_set(&dup_elt->fields[i].sum,
> > > > + atomic64_read(&elt->fields[i].sum));
> > > > + dup_elt->fields[i].cmp_fn = elt->fields[i].cmp_fn;
> > > > + }
> > > > +
> > > > + return dup_elt;
> > > > +}
> > >
> > > So there is a lot of atomic64_{set,read}() in this patch set, what kind
> > > of magic properties do you assume they have?
> > >
> > > Note that atomic*_{set,read}() are weaker than {WRITE,READ}_ONCE(), so
> > > if you're assuming they do that, you're mistaken -- although it is on a
> > > TODO list someplace to go fix that.
> >
> > Not assuming any magic properties - I just need an atomic 64-bit counter
> > for the sums and that's the API for setting/reading those. When reading
> > a live trace the exact sum you get is kind of arbitrary..
>
> OK, so atomic64_read() really should provide load consistency (there are
> a few archs that lack the READ_ONCE() there).
>
> But the atomic64_set() does not provide store consistency, and in the
> above case it looks like the value you're writing is not exposed yet to
> concurrency so it doesn't matter how it issues the store.
>

Right, that's correct.

> So as long as you never atomic64_set() a value that is subject to
> concurrent modification you should be good.

Yeah, and that's the case elsewhere as well.

Thanks for clarifying,

Tom
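
To restate the constraint in code form, a minimal sketch (not from the
patch): the concurrent paths only ever use atomic read-modify-write on
the sums, while plain stores are confined to elements no other CPU can
reach:

	/* hot path, concurrent: RMW only */
	tracing_map_update_sum(elt, i, n);	/* atomic64_add() inside */

	/* reader, concurrent: an atomic snapshot is all that's needed */
	sum = tracing_map_read_sum(elt, i);	/* atomic64_read() inside */

	/* a private element (e.g. a fresh copy_elt() dup) is not yet
	 * visible to any updater, so a plain atomic64_set() store is fine
	 */
	atomic64_set(&dup_elt->fields[i].sum, 0);
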

2015-07-17 15:48:14

by Mathieu Desnoyers

[permalink] [raw]
Subject: Re: [PATCH v9 07/22] tracing: Add lock-free tracing_map

----- On Jul 16, 2015, at 9:35 PM, Tom Zanussi [email protected] wrote:

> Hi Mathieu,
>
> On Thu, 2015-07-16 at 23:25 +0000, Mathieu Desnoyers wrote:
>> * Tom Zanussi wrote:
>> >> Add tracing_map, a special-purpose lock-free map for tracing.
>> >>
>> >> tracing_map is designed to aggregate or 'sum' one or more values
>> >> associated with a specific object of type tracing_map_elt, which
>> >> is associated by the map to a given key.
>> >>
>> >> It provides various hooks allowing per-tracer customization and is
>> >> separated out into a separate file in order to allow it to be shared
>> >> between multiple tracers, but isn't meant to be generally used outside
>> >> of that context.
>> >>
>> >> The tracing_map implementation was inspired by lock-free map
>> >> algorithms originated by Dr. Cliff Click:
>> >>
>> >> http://www.azulsystems.com/blog/cliff/2007-03-26-non-blocking-hashtable
>> >> http://www.azulsystems.com/events/javaone_2007/2007_LockFreeHash.pdf
>>
>> Hi Tom,
>>
>> First question: what is the rationale for implementing another
>> hash table from scratch here? What is missing in the pre-existing
>> hash table implementations?
>>
>
> None of the other hash tables allow for lock-free insertion (and I
> didn't see an easy way to add it).

This is one of the nice things about the Userspace RCU lock-free hash
table we did a few years ago: it provides lock-free add, add_unique,
removal, and replace, as well as RCU wait-free lookups and traversals.
Resize can be done concurrently by a worker thread. I ported it to the
Linux kernel for Julien's work on latency tracker. You can find the
implementation here: https://github.com/jdesfossez/latency_tracker
(see rculfhash*)
It is a simplified version that has the "resize" feature removed for
simplicity's sake. The "insert and lookup" feature you need is called
"add_unique" in our API: it behaves both as a lookup and as an atomic
insert if the key is not found.

Thanks,

Mathieu
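
To illustrate the add_unique semantics (a simplified, hypothetical
signature for the sake of discussion, not the actual rculfhash API):

	/*
	 * Atomic lookup-or-insert: if a node matching 'key' already
	 * exists, return it; otherwise insert 'node' and return 'node'.
	 */
	struct node *add_unique(struct table *t, u32 hash,
				bool (*match)(struct node *, const void *),
				const void *key, struct node *node);

	ret = add_unique(t, hash, match_fn, key, new_node);
	if (ret != new_node)
		free_node(new_node);	/* lost the race; ret is the winner */
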

>
>> Moreover, you might want to handle the case where jhash() returns
>> 0. AFAIU, there is a race on "insert" in this scenario.
>>
>
> You're right, in that case you'd accidentally overwrite an already
> claimed slot. Thanks for pointing that out.
>
> Tom
>
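
To spell out the race: a jhash() result of 0 is indistinguishable from
an unclaimed slot, so cmpxchg(&map->map[idx].key, 0, 0) "succeeds" and
a later inserter can claim the same slot again, overwriting its val.
One straightforward fix (a sketch, not yet in the patch) is to reserve
0 as the 'unclaimed' marker:

	key_hash = jhash(key, map->key_size, 0);
	if (!key_hash)
		key_hash = 1;	/* 0 means 'unclaimed', never a valid hash */
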
>> Thanks,
>>
>> Mathieu
>>
>> >>
>> >> Signed-off-by: Tom Zanussi <[email protected]>
>> >> ---
>> >> kernel/trace/Makefile | 1 +
>> >> kernel/trace/tracing_map.c | 935 +++++++++++++++++++++++++++++++++++++++++++++
>> >> kernel/trace/tracing_map.h | 258 +++++++++++++
>> >> 3 files changed, 1194 insertions(+)
>> >> create mode 100644 kernel/trace/tracing_map.c
>> >> create mode 100644 kernel/trace/tracing_map.h
>> >>
>> >> diff --git a/kernel/trace/Makefile b/kernel/trace/Makefile
>> >> index 9b1044e..3b26cfb 100644
>> >> --- a/kernel/trace/Makefile
>> >> +++ b/kernel/trace/Makefile
>> >> @@ -31,6 +31,7 @@ obj-$(CONFIG_TRACING) += trace_output.o
>> >> obj-$(CONFIG_TRACING) += trace_seq.o
>> >> obj-$(CONFIG_TRACING) += trace_stat.o
>> >> obj-$(CONFIG_TRACING) += trace_printk.o
>> >> +obj-$(CONFIG_TRACING) += tracing_map.o
>> >> obj-$(CONFIG_CONTEXT_SWITCH_TRACER) += trace_sched_switch.o
>> >> obj-$(CONFIG_FUNCTION_TRACER) += trace_functions.o
>> >> obj-$(CONFIG_IRQSOFF_TRACER) += trace_irqsoff.o
>> >> diff --git a/kernel/trace/tracing_map.c b/kernel/trace/tracing_map.c
>> >> new file mode 100644
>> >> index 0000000..a505025
>> >> --- /dev/null
>> >> +++ b/kernel/trace/tracing_map.c
>> >> @@ -0,0 +1,935 @@
>> >> +/*
>> >> + * tracing_map - lock-free map for tracing
>> >> + *
>> >> + * This program is free software; you can redistribute it and/or modify
>> >> + * it under the terms of the GNU General Public License as published by
>> >> + * the Free Software Foundation; either version 2 of the License, or
>> >> + * (at your option) any later version.
>> >> + *
>> >> + * This program is distributed in the hope that it will be useful,
>> >> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
>> >> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
>> >> + * GNU General Public License for more details.
>> >> + *
>> >> + * Copyright (C) 2015 Tom Zanussi <[email protected]>
>> >> + *
>> >> + * tracing_map implementation inspired by lock-free map algorithms
>> >> + * originated by Dr. Cliff Click:
>> >> + *
>> >> + * http://www.azulsystems.com/blog/cliff/2007-03-26-non-blocking-hashtable
>> >> + * http://www.azulsystems.com/events/javaone_2007/2007_LockFreeHash.pdf
>> >> + */
>> >> +
>> >> +#include <linux/slab.h>
>> >> +#include <linux/jhash.h>
>> >> +#include <linux/sort.h>
>> >> +
>> >> +#include "tracing_map.h"
>> >> +#include "trace.h"
>> >> +
>> >> +/*
>> >> + * NOTE: For a detailed description of the data structures used by
>> >> + * these functions (such as tracing_map_elt) please see the overview
>> >> + * of tracing_map data structures at the beginning of tracing_map.h.
>> >> + */
>> >> +
>> >> +/**
>> >> + * tracing_map_update_sum - Add a value to a tracing_map_elt's sum field
>> >> + * @elt: The tracing_map_elt
>> >> + * @i: The index of the given sum associated with the tracing_map_elt
>> >> + * @n: The value to add to the sum
>> >> + *
>> >> + * Add n to sum i associated with the specified tracing_map_elt
>> >> + * instance. The index i is the index returned by the call to
>> >> + * tracing_map_add_sum_field() when the tracing map was set up.
>> >> + */
>> >> +void tracing_map_update_sum(struct tracing_map_elt *elt, unsigned int i, u64 n)
>> >> +{
>> >> + atomic64_add(n, &elt->fields[i].sum);
>> >> +}
>> >> +
>> >> +/**
>> >> + * tracing_map_read_sum - Return the value of a tracing_map_elt's sum field
>> >> + * @elt: The tracing_map_elt
>> >> + * @i: The index of the given sum associated with the tracing_map_elt
>> >> + *
>> >> + * Retrieve the value of the sum i associated with the specified
>> >> + * tracing_map_elt instance. The index i is the index returned by the
>> >> + * call to tracing_map_add_sum_field() when the tracing map was set
>> >> + * up.
>> >> + *
>> >> + * Return: The sum associated with field i for elt.
>> >> + */
>> >> +u64 tracing_map_read_sum(struct tracing_map_elt *elt, unsigned int i)
>> >> +{
>> >> + return (u64)atomic64_read(&elt->fields[i].sum);
>> >> +}
>> >> +
>> >> +int tracing_map_cmp_string(void *val_a, void *val_b)
>> >> +{
>> >> + char *a = val_a;
>> >> + char *b = val_b;
>> >> +
>> >> + return strcmp(a, b);
>> >> +}
>> >> +
>> >> +int tracing_map_cmp_none(void *val_a, void *val_b)
>> >> +{
>> >> + return 0;
>> >> +}
>> >> +
>> >> +static int tracing_map_cmp_atomic64(void *val_a, void *val_b)
>> >> +{
>> >> + u64 a = atomic64_read((atomic64_t *)val_a);
>> >> + u64 b = atomic64_read((atomic64_t *)val_b);
>> >> +
>> >> + return (a > b) ? 1 : ((a < b) ? -1 : 0);
>> >> +}
>> >> +
>> >> +#define DEFINE_TRACING_MAP_CMP_FN(type) \
>> >> +static int tracing_map_cmp_##type(void *val_a, void *val_b) \
>> >> +{ \
>> >> + type a = *(type *)val_a; \
>> >> + type b = *(type *)val_b; \
>> >> + \
>> >> + return (a > b) ? 1 : ((a < b) ? -1 : 0); \
>> >> +}
>> >> +
>> >> +DEFINE_TRACING_MAP_CMP_FN(s64);
>> >> +DEFINE_TRACING_MAP_CMP_FN(u64);
>> >> +DEFINE_TRACING_MAP_CMP_FN(s32);
>> >> +DEFINE_TRACING_MAP_CMP_FN(u32);
>> >> +DEFINE_TRACING_MAP_CMP_FN(s16);
>> >> +DEFINE_TRACING_MAP_CMP_FN(u16);
>> >> +DEFINE_TRACING_MAP_CMP_FN(s8);
>> >> +DEFINE_TRACING_MAP_CMP_FN(u8);
>> >> +
>> >> +tracing_map_cmp_fn_t tracing_map_cmp_num(int field_size,
>> >> + int field_is_signed)
>> >> +{
>> >> + tracing_map_cmp_fn_t fn = tracing_map_cmp_none;
>> >> +
>> >> + switch (field_size) {
>> >> + case 8:
>> >> + if (field_is_signed)
>> >> + fn = tracing_map_cmp_s64;
>> >> + else
>> >> + fn = tracing_map_cmp_u64;
>> >> + break;
>> >> + case 4:
>> >> + if (field_is_signed)
>> >> + fn = tracing_map_cmp_s32;
>> >> + else
>> >> + fn = tracing_map_cmp_u32;
>> >> + break;
>> >> + case 2:
>> >> + if (field_is_signed)
>> >> + fn = tracing_map_cmp_s16;
>> >> + else
>> >> + fn = tracing_map_cmp_u16;
>> >> + break;
>> >> + case 1:
>> >> + if (field_is_signed)
>> >> + fn = tracing_map_cmp_s8;
>> >> + else
>> >> + fn = tracing_map_cmp_u8;
>> >> + break;
>> >> + }
>> >> +
>> >> + return fn;
>> >> +}
>> >> +
>> >> +static int tracing_map_add_field(struct tracing_map *map,
>> >> + tracing_map_cmp_fn_t cmp_fn)
>> >> +{
>> >> + int ret = -EINVAL;
>> >> +
>> >> + if (map->n_fields < TRACING_MAP_FIELDS_MAX) {
>> >> + ret = map->n_fields;
>> >> + map->fields[map->n_fields++].cmp_fn = cmp_fn;
>> >> + }
>> >> +
>> >> + return ret;
>> >> +}
>> >> +
>> >> +/**
>> >> + * tracing_map_add_sum_field - Add a field describing a tracing_map sum
>> >> + * @map: The tracing_map
>> >> + *
>> >> + * Add a sum field to the map and return the index identifying it in
>> >> + * the map and associated tracing_map_elts. This is the index used
>> >> + * for instance to update a sum for a particular tracing_map_elt using
>> >> + * tracing_map_update_sum() or reading it via tracing_map_read_sum().
>> >> + *
>> >> + * Return: The index identifying the field in the map and associated
>> >> + * tracing_map_elts.
>> >> + */
>> >> +int tracing_map_add_sum_field(struct tracing_map *map)
>> >> +{
>> >> + return tracing_map_add_field(map, tracing_map_cmp_atomic64);
>> >> +}
>> >> +
>> >> +/**
>> >> + * tracing_map_add_key_field - Add a field describing a tracing_map key
>> >> + * @map: The tracing_map
>> >> + * @offset: The offset within the key
>> >> + * @cmp_fn: The comparison function that will be used to sort on the key
>> >> + *
>> >> + * Let the map know there is a key and that, if it's used as a sort
>> >> + * key, cmp_fn should be used to compare it.
>> >> + *
>> >> + * A key can be a subset of a compound key; for that purpose, the
>> >> + * offset param is used to describe where within the compound key
>> >> + * the key referenced by this key field resides.
>> >> + *
>> >> + * Return: The index identifying the field in the map and associated
>> >> + * tracing_map_elts.
>> >> + */
>> >> +int tracing_map_add_key_field(struct tracing_map *map,
>> >> + unsigned int offset,
>> >> + tracing_map_cmp_fn_t cmp_fn)
>> >> +
>> >> +{
>> >> + int idx = tracing_map_add_field(map, cmp_fn);
>> >> +
>> >> + if (idx < 0)
>> >> + return idx;
>> >> +
>> >> + map->fields[idx].offset = offset;
>> >> +
>> >> + map->key_idx[map->n_keys++] = idx;
>> >> +
>> >> + return idx;
>> >> +}
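
For example, a compound key built from two fields might look like this
(a sketch; the struct and names are hypothetical, with sizeof(struct
my_key) passed as key_size to tracing_map_create()):

	struct my_key {
		u32	pid;
		char	comm[16];
	};

	tracing_map_add_key_field(map, offsetof(struct my_key, pid),
				  tracing_map_cmp_num(sizeof(u32), 0));
	tracing_map_add_key_field(map, offsetof(struct my_key, comm),
				  tracing_map_cmp_string);
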
>> >> +
>> >> +static void tracing_map_elt_clear(struct tracing_map_elt *elt)
>> >> +{
>> >> + unsigned i;
>> >> +
>> >> + for (i = 0; i < elt->map->n_fields; i++)
>> >> + if (elt->fields[i].cmp_fn == tracing_map_cmp_atomic64)
>> >> + atomic64_set(&elt->fields[i].sum, 0);
>> >> +
>> >> + if (elt->map->ops && elt->map->ops->elt_clear)
>> >> + elt->map->ops->elt_clear(elt);
>> >> +}
>> >> +
>> >> +static void tracing_map_elt_init_fields(struct tracing_map_elt *elt)
>> >> +{
>> >> + unsigned int i;
>> >> +
>> >> + tracing_map_elt_clear(elt);
>> >> +
>> >> + for (i = 0; i < elt->map->n_fields; i++) {
>> >> + elt->fields[i].cmp_fn = elt->map->fields[i].cmp_fn;
>> >> +
>> >> + if (elt->fields[i].cmp_fn != tracing_map_cmp_atomic64)
>> >> + elt->fields[i].offset = elt->map->fields[i].offset;
>> >> + }
>> >> +}
>> >> +
>> >> +static void tracing_map_elt_free(struct tracing_map_elt *elt)
>> >> +{
>> >> + if (!elt)
>> >> + return;
>> >> +
>> >> + if (elt->map->ops && elt->map->ops->elt_free)
>> >> + elt->map->ops->elt_free(elt);
>> >> + kfree(elt->fields);
>> >> + kfree(elt->key);
>> >> + kfree(elt);
>> >> +}
>> >> +
>> >> +static struct tracing_map_elt *tracing_map_elt_alloc(struct tracing_map *map)
>> >> +{
>> >> + struct tracing_map_elt *elt;
>> >> + int err = 0;
>> >> +
>> >> + elt = kzalloc(sizeof(*elt), GFP_KERNEL);
>> >> + if (!elt)
>> >> + return ERR_PTR(-ENOMEM);
>> >> +
>> >> + elt->map = map;
>> >> +
>> >> + elt->key = kzalloc(map->key_size, GFP_KERNEL);
>> >> + if (!elt->key) {
>> >> + err = -ENOMEM;
>> >> + goto free;
>> >> + }
>> >> +
>> >> + elt->fields = kcalloc(map->n_fields, sizeof(*elt->fields), GFP_KERNEL);
>> >> + if (!elt->fields) {
>> >> + err = -ENOMEM;
>> >> + goto free;
>> >> + }
>> >> +
>> >> + tracing_map_elt_init_fields(elt);
>> >> +
>> >> + if (map->ops && map->ops->elt_alloc) {
>> >> + err = map->ops->elt_alloc(elt);
>> >> + if (err)
>> >> + goto free;
>> >> + }
>> >> + return elt;
>> >> + free:
>> >> + tracing_map_elt_free(elt);
>> >> +
>> >> + return ERR_PTR(err);
>> >> +}
>> >> +
>> >> +static struct tracing_map_elt *get_free_elt(struct tracing_map *map)
>> >> +{
>> >> + struct tracing_map_elt *elt = NULL;
>> >> + int idx;
>> >> +
>> >> + idx = atomic_inc_return(&map->next_elt);
>> >> + if (idx < map->max_elts) {
>> >> + elt = map->elts[idx];
>> >> + if (map->ops && map->ops->elt_init)
>> >> + map->ops->elt_init(elt);
>> >> + }
>> >> +
>> >> + return elt;
>> >> +}
>> >> +
>> >> +static void tracing_map_free_elts(struct tracing_map *map)
>> >> +{
>> >> + unsigned int i;
>> >> +
>> >> + if (!map->elts)
>> >> + return;
>> >> +
>> >> + for (i = 0; i < map->max_elts; i++)
>> >> + tracing_map_elt_free(map->elts[i]);
>> >> +
>> >> + kfree(map->elts);
>> >> +}
>> >> +
>> >> +static int tracing_map_alloc_elts(struct tracing_map *map)
>> >> +{
>> >> + unsigned int i;
>> >> +
>> >> + map->elts = kcalloc(map->max_elts, sizeof(struct tracing_map_elt *),
>> >> + GFP_KERNEL);
>> >> + if (!map->elts)
>> >> + return -ENOMEM;
>> >> +
>> >> + for (i = 0; i < map->max_elts; i++) {
>> >> + map->elts[i] = tracing_map_elt_alloc(map);
>> >> + if (IS_ERR(map->elts[i])) {
>> >> + map->elts[i] = NULL;
>> >> + tracing_map_free_elts(map);
>> >> +
>> >> + return -ENOMEM;
>> >> + }
>> >> + }
>> >> +
>> >> + return 0;
>> >> +}
>> >> +
>> >> +static inline bool keys_match(void *key, void *test_key, unsigned key_size)
>> >> +{
>> >> + bool match = true;
>> >> +
>> >> + if (memcmp(key, test_key, key_size))
>> >> + match = false;
>> >> +
>> >> + return match;
>> >> +}
>> >> +
>> >> +/**
>> >> + * tracing_map_insert - Insert key and/or retrieve val from a tracing_map
>> >> + * @map: The tracing_map to insert into
>> >> + * @key: The key to insert
>> >> + *
>> >> + * Inserts a key into a tracing_map and creates and returns a new
>> >> + * tracing_map_elt for it, or if the key has already been inserted by
>> >> + * a previous call, returns the tracing_map_elt already associated
>> >> + * with it. When the map was created, the number of elements to be
>> >> + * allocated for the map was specified (internally maintained as
>> >> + * 'max_elts' in struct tracing_map), and that number of
>> >> + * tracing_map_elts was created by tracing_map_init(). This is the
>> >> + * pre-allocated pool of tracing_map_elts that tracing_map_insert()
>> >> + * will allocate from when adding new keys. Once that pool is
>> >> + * exhausted, tracing_map_insert() is useless and will return NULL to
>> >> + * signal that state.
>> >> + *
>> >> + * This is a lock-free tracing map insertion function implementing a
>> >> + * modified form of Cliff Click's basic insertion algorithm. It
>> >> + * requires the table size be a power of two. To prevent any
>> >> + * possibility of an infinite loop we always make the internal table
>> >> + * size double the size of the requested table size (max_elts * 2).
>> >> + * Likewise, we never reuse a slot or resize or delete elements - when
>> >> + * we've reached max_elts entries, we simply return NULL once we've
>> >> + * run out of entries. Readers can at any point in time traverse the
>> >> + * tracing map and safely access the key/val pairs.
>> >> + *
>> >> + * Return: the tracing_map_elt pointer val associated with the key.
>> >> + * If this was a newly inserted key, the val will be a newly allocated
>> >> + * and associated tracing_map_elt pointer val. If the key wasn't
>> >> + * found and the pool of tracing_map_elts has been exhausted, NULL is
>> >> + * returned and no further insertions will succeed.
>> >> + */
>> >> +struct tracing_map_elt *tracing_map_insert(struct tracing_map *map, void *key)
>> >> +{
>> >> + u32 idx, key_hash, test_key;
>> >> +
>> >> + key_hash = jhash(key, map->key_size, 0);
>> >> + idx = key_hash >> (32 - (map->map_bits + 1));
>> >> +
>> >> + while (1) {
>> >> + idx &= (map->map_size - 1);
>> >> + test_key = map->map[idx].key;
>> >> +
>> >> + if (test_key && test_key == key_hash && map->map[idx].val &&
>> >> + keys_match(key, map->map[idx].val->key, map->key_size))
>> >> + return map->map[idx].val;
>> >> +
>> >> + if (!test_key && !cmpxchg(&map->map[idx].key, 0, key_hash)) {
>> >> + struct tracing_map_elt *elt;
>> >> +
>> >> + elt = get_free_elt(map);
>> >> + if (!elt)
>> >> + break;
>> >> + memcpy(elt->key, key, map->key_size);
>> >> + map->map[idx].val = elt;
>> >> +
>> >> + return map->map[idx].val;
>> >> + }
>> >> + idx++;
>> >> + }
>> >> +
>> >> + return NULL;
>> >> +}
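
A sketch of hot-path usage (hypothetical client code, continuing the
compound-key sketch above): the key is zeroed first, since matching is
a memcmp() over the full key_size bytes and stray padding or post-NUL
string bytes would otherwise defeat it:

	struct my_key key;
	struct tracing_map_elt *elt;

	memset(&key, 0, sizeof(key));
	key.pid = current->pid;
	strncpy(key.comm, current->comm, sizeof(key.comm) - 1);

	elt = tracing_map_insert(map, &key);
	if (!elt)
		return;			/* element pool exhausted */

	tracing_map_update_sum(elt, hits_idx, 1);
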
>> >> +
>> >> +/**
>> >> + * tracing_map_destroy - Destroy a tracing_map
>> >> + * @map: The tracing_map to destroy
>> >> + *
>> >> + * Frees a tracing_map along with its associated array of
>> >> + * tracing_map_elts.
>> >> + *
>> >> + * Callers should make sure there are no readers or writers actively
>> >> + * reading or inserting into the map before calling this.
>> >> + */
>> >> +void tracing_map_destroy(struct tracing_map *map)
>> >> +{
>> >> + if (!map)
>> >> + return;
>> >> +
>> >> + tracing_map_free_elts(map);
>> >> +
>> >> + kfree(map->map);
>> >> + kfree(map);
>> >> +}
>> >> +
>> >> +/**
>> >> + * tracing_map_clear - Clear a tracing_map
>> >> + * @map: The tracing_map to clear
>> >> + *
>> >> + * Resets the tracing map to a cleared or initial state. The
>> >> + * tracing_map_elts are all cleared, and the array of struct
>> >> + * tracing_map_entry is reset to an initialized state.
>> >> + *
>> >> + * Callers should make sure there are no writers actively inserting
>> >> + * into the map before calling this.
>> >> + */
>> >> +void tracing_map_clear(struct tracing_map *map)
>> >> +{
>> >> + unsigned int i, size;
>> >> +
>> >> + atomic_set(&map->next_elt, -1);
>> >> +
>> >> + size = map->map_size * sizeof(struct tracing_map_entry);
>> >> + memset(map->map, 0, size);
>> >> +
>> >> + for (i = 0; i < map->max_elts; i++)
>> >> + tracing_map_elt_clear(map->elts[i]);
>> >> +}
>> >> +
>> >> +static void set_sort_key(struct tracing_map *map,
>> >> + struct tracing_map_sort_key *sort_key)
>> >> +{
>> >> + map->sort_key = *sort_key;
>> >> +}
>> >> +
>> >> +/**
>> >> + * tracing_map_create - Create a lock-free map and element pool
>> >> + * @map_bits: The size of the map (2 ** map_bits)
>> >> + * @key_size: The size of the key for the map in bytes
>> >> + * @ops: Optional client-defined tracing_map_ops instance
>> >> + * @private_data: Client data associated with the map
>> >> + *
>> >> + * Creates and sets up a map to contain 2 ** map_bits number of
>> >> + * elements (internally maintained as 'max_elts' in struct
>> >> + * tracing_map). Before using, map fields should be added to the map
>> >> + * with tracing_map_add_sum_field() and tracing_map_add_key_field().
>> >> + * tracing_map_init() should then be called to allocate the array of
>> >> + * tracing_map_elts, in order to avoid allocating anything in the map
>> >> + * insertion path. The user-specified map size reflects the maximum
>> >> + * number of elements that can be contained in the table requested by
>> >> + * the user - internally we double that in order to keep the table
>> >> + * sparse and keep collisions manageable.
>> >> + *
>> >> + * A tracing_map is a special-purpose map designed to aggregate or
>> >> + * 'sum' one or more values associated with a specific object of type
>> >> + * tracing_map_elt, which is attached by the map to a given key.
>> >> + *
>> >> + * tracing_map_create() sets up the map itself, and provides
>> >> + * operations for inserting tracing_map_elts, but doesn't allocate the
>> >> + * tracing_map_elts themselves, or provide a means for describing the
>> >> + * keys or sums associated with the tracing_map_elts. All
>> >> + * tracing_map_elts for a given map have the same set of sums and
>> >> + * keys, which are defined by the client using the functions
>> >> + * tracing_map_add_key_field() and tracing_map_add_sum_field(). Once
>> >> + * the fields are defined, the pool of elements allocated for the map
>> >> + * can be created, which occurs when the client code calls
>> >> + * tracing_map_init().
>> >> + *
>> >> + * When tracing_map_init() returns, tracing_map_elt elements can be
>> >> + * inserted into the map using tracing_map_insert(). When called,
>> >> + * tracing_map_insert() grabs a free tracing_map_elt from the pool, or
>> >> + * finds an existing match in the map and in either case returns it.
>> >> + * The client can then use tracing_map_update_sum() and
>> >> + * tracing_map_read_sum() to update or read a given sum field for the
>> >> + * tracing_map_elt.
>> >> + *
>> >> + * The client can at any point retrieve and traverse the current set
>> >> + * of inserted tracing_map_elts in a tracing_map, via
>> >> + * tracing_map_sort_entries(). Sorting can be done on any field,
>> >> + * including keys.
>> >> + *
>> >> + * See tracing_map.h for a description of tracing_map_ops.
>> >> + *
>> >> + * Return: the tracing_map pointer if successful, ERR_PTR if not.
>> >> + */
>> >> +struct tracing_map *tracing_map_create(unsigned int map_bits,
>> >> + unsigned int key_size,
>> >> + struct tracing_map_ops *ops,
>> >> + void *private_data)
>> >> +{
>> >> + struct tracing_map *map;
>> >> + unsigned int i;
>> >> +
>> >> + if (map_bits < TRACING_MAP_BITS_MIN ||
>> >> + map_bits > TRACING_MAP_BITS_MAX)
>> >> + return ERR_PTR(-EINVAL);
>> >> +
>> >> + map = kzalloc(sizeof(*map), GFP_KERNEL);
>> >> + if (!map)
>> >> + return ERR_PTR(-ENOMEM);
>> >> +
>> >> + map->map_bits = map_bits;
>> >> + map->max_elts = (1 << map_bits);
>> >> + atomic_set(&map->next_elt, -1);
>> >> +
>> >> + map->map_size = (1 << (map_bits + 1));
>> >> + map->ops = ops;
>> >> +
>> >> + map->private_data = private_data;
>> >> +
>> >> + map->map = kcalloc(map->map_size, sizeof(struct tracing_map_entry),
>> >> + GFP_KERNEL);
>> >> + if (!map->map)
>> >> + goto free;
>> >> +
>> >> + map->key_size = key_size;
>> >> + for (i = 0; i < TRACING_MAP_KEYS_MAX; i++)
>> >> + map->key_idx[i] = -1;
>> >> + out:
>> >> + return map;
>> >> + free:
>> >> + tracing_map_destroy(map);
>> >> + map = ERR_PTR(-ENOMEM);
>> >> +
>> >> + goto out;
>> >> +}
>> >> +
>> >> +/**
>> >> + * tracing_map_init - Allocate and clear a map's tracing_map_elts
>> >> + * @map: The tracing_map to initialize
>> >> + *
>> >> + * Allocates and clears a pool of tracing_map_elts equal to the
>> >> + * user-specified size of 2 ** map_bits (internally maintained as
>> >> + * 'max_elts' in struct tracing_map). Before using, the map fields
>> >> + * should be added to the map with tracing_map_add_sum_field() and
>> >> + * tracing_map_add_key_field(). tracing_map_init() should then be
>> >> + * called to allocate the array of tracing_map_elts, in order to avoid
>> >> + * allocating anything in the map insertion path. The user-specified
>> >> + * map size reflects the max number of elements requested by the user
>> >> + * - internally we double that in order to keep the table sparse and
>> >> + * keep collisions manageable.
>> >> + *
>> >> + * See tracing_map.h for a description of tracing_map_ops.
>> >> + *
>> >> + * Return: 0 if successful, a negative error code if not.
>> >> + */
>> >> +int tracing_map_init(struct tracing_map *map)
>> >> +{
>> >> + int err;
>> >> +
>> >> + if (map->n_fields < 2)
>> >> + return -EINVAL; /* need at least 1 key and 1 val */
>> >> +
>> >> + err = tracing_map_alloc_elts(map);
>> >> + if (err)
>> >> + return err;
>> >> +
>> >> + tracing_map_clear(map);
>> >> +
>> >> + return err;
>> >> +}
>> >> +
>> >> +static int cmp_entries_dup(const struct tracing_map_sort_entry **a,
>> >> + const struct tracing_map_sort_entry **b)
>> >> +{
>> >> + /*
>> >> + * sort() expects an anti-symmetric comparator; return the raw
>> >> + * memcmp() result rather than 0/1 so that equal keys compare
>> >> + * equal and unequal keys order consistently.
>> >> + */
>> >> + return memcmp((*a)->key, (*b)->key, (*a)->elt->map->key_size);
>> >> +}
>> >> +
>> >> +static int cmp_entries_sum(const struct tracing_map_sort_entry **a,
>> >> + const struct tracing_map_sort_entry **b)
>> >> +{
>> >> + const struct tracing_map_elt *elt_a, *elt_b;
>> >> + struct tracing_map_sort_key *sort_key;
>> >> + struct tracing_map_field *field;
>> >> + tracing_map_cmp_fn_t cmp_fn;
>> >> + void *val_a, *val_b;
>> >> + int ret = 0;
>> >> +
>> >> + elt_a = (*a)->elt;
>> >> + elt_b = (*b)->elt;
>> >> +
>> >> + sort_key = &elt_a->map->sort_key;
>> >> +
>> >> + field = &elt_a->fields[sort_key->field_idx];
>> >> + cmp_fn = field->cmp_fn;
>> >> +
>> >> + val_a = &elt_a->fields[sort_key->field_idx].sum;
>> >> + val_b = &elt_b->fields[sort_key->field_idx].sum;
>> >> +
>> >> + ret = cmp_fn(val_a, val_b);
>> >> + if (sort_key->descending)
>> >> + ret = -ret;
>> >> +
>> >> + return ret;
>> >> +}
>> >> +
>> >> +static int cmp_entries_key(const struct tracing_map_sort_entry **a,
>> >> + const struct tracing_map_sort_entry **b)
>> >> +{
>> >> + const struct tracing_map_elt *elt_a, *elt_b;
>> >> + struct tracing_map_sort_key *sort_key;
>> >> + struct tracing_map_field *field;
>> >> + tracing_map_cmp_fn_t cmp_fn;
>> >> + void *val_a, *val_b;
>> >> + int ret = 0;
>> >> +
>> >> + elt_a = (*a)->elt;
>> >> + elt_b = (*b)->elt;
>> >> +
>> >> + sort_key = &elt_a->map->sort_key;
>> >> +
>> >> + field = &elt_a->fields[sort_key->field_idx];
>> >> +
>> >> + cmp_fn = field->cmp_fn;
>> >> +
>> >> + val_a = elt_a->key + field->offset;
>> >> + val_b = elt_b->key + field->offset;
>> >> +
>> >> + ret = cmp_fn(val_a, val_b);
>> >> + if (sort_key->descending)
>> >> + ret = -ret;
>> >> +
>> >> + return ret;
>> >> +}
>> >> +
>> >> +static void destroy_sort_entry(struct tracing_map_sort_entry *entry)
>> >> +{
>> >> + if (!entry)
>> >> + return;
>> >> +
>> >> + if (entry->elt_copied)
>> >> + tracing_map_elt_free(entry->elt);
>> >> +
>> >> + kfree(entry);
>> >> +}
>> >> +
>> >> +/**
>> >> + * tracing_map_destroy_sort_entries - Destroy a tracing_map_sort_entries() array
>> >> + * @entries: The entries to destroy
>> >> + * @n_entries: The number of entries in the array
>> >> + *
>> >> + * Destroy the elements returned by a tracing_map_sort_entries() call.
>> >> + */
>> >> +void tracing_map_destroy_sort_entries(struct tracing_map_sort_entry **entries,
>> >> + unsigned int n_entries)
>> >> +{
>> >> + unsigned int i;
>> >> +
>> >> + for (i = 0; i < n_entries; i++)
>> >> + destroy_sort_entry(entries[i]);
>> >> +}
>> >> +
>> >> +static struct tracing_map_sort_entry *
>> >> +create_sort_entry(void *key, struct tracing_map_elt *elt)
>> >> +{
>> >> + struct tracing_map_sort_entry *sort_entry;
>> >> +
>> >> + sort_entry = kzalloc(sizeof(*sort_entry), GFP_KERNEL);
>> >> + if (!sort_entry)
>> >> + return NULL;
>> >> +
>> >> + sort_entry->key = key;
>> >> + sort_entry->elt = elt;
>> >> +
>> >> + return sort_entry;
>> >> +}
>> >> +
>> >> +static struct tracing_map_elt *copy_elt(struct tracing_map_elt *elt)
>> >> +{
>> >> + struct tracing_map_elt *dup_elt;
>> >> + unsigned int i;
>> >> +
>> >> + dup_elt = tracing_map_elt_alloc(elt->map);
>> >> + if (!dup_elt)
>> >> + return NULL;
>> >> +
>> >> + if (elt->map->ops && elt->map->ops->elt_copy)
>> >> + elt->map->ops->elt_copy(dup_elt, elt);
>> >> +
>> >> + dup_elt->private_data = elt->private_data;
>> >> + memcpy(dup_elt->key, elt->key, elt->map->key_size);
>> >> +
>> >> + for (i = 0; i < elt->map->n_fields; i++) {
>> >> + atomic64_set(&dup_elt->fields[i].sum,
>> >> + atomic64_read(&elt->fields[i].sum));
>> >> + dup_elt->fields[i].cmp_fn = elt->fields[i].cmp_fn;
>> >> + }
>> >> +
>> >> + return dup_elt;
>> >> +}
>> >> +
>> >> +static int merge_dup(struct tracing_map_sort_entry **sort_entries,
>> >> + unsigned int target, unsigned int dup)
>> >> +{
>> >> + struct tracing_map_elt *target_elt, *elt;
>> >> + bool first_dup = (target - dup) == 1;
>> >> + int i;
>> >> +
>> >> + if (first_dup) {
>> >> + elt = sort_entries[target]->elt;
>> >> + target_elt = copy_elt(elt);
>> >> + if (!target_elt)
>> >> + return -ENOMEM;
>> >> + sort_entries[target]->elt = target_elt;
>> >> + sort_entries[target]->elt_copied = true;
>> >> + } else
>> >> + target_elt = sort_entries[target]->elt;
>> >> +
>> >> + elt = sort_entries[dup]->elt;
>> >> +
>> >> + for (i = 0; i < elt->map->n_fields; i++)
>> >> + atomic64_add(atomic64_read(&elt->fields[i].sum),
>> >> + &target_elt->fields[i].sum);
>> >> +
>> >> + sort_entries[dup]->dup = true;
>> >> +
>> >> + return 0;
>> >> +}
>> >> +
>> >> +static int merge_dups(struct tracing_map_sort_entry **sort_entries,
>> >> + int n_entries, unsigned int key_size)
>> >> +{
>> >> + unsigned int dups = 0, total_dups = 0;
>> >> + int err, i, j;
>> >> + void *key;
>> >> +
>> >> + if (n_entries < 2)
>> >> + return total_dups;
>> >> +
>> >> + sort(sort_entries, n_entries, sizeof(struct tracing_map_sort_entry *),
>> >> + (int (*)(const void *, const void *))cmp_entries_dup, NULL);
>> >> +
>> >> + key = sort_entries[0]->key;
>> >> + for (i = 1; i < n_entries; i++) {
>> >> + if (!memcmp(sort_entries[i]->key, key, key_size)) {
>> >> + dups++; total_dups++;
>> >> + err = merge_dup(sort_entries, i - dups, i);
>> >> + if (err)
>> >> + return err;
>> >> + continue;
>> >> + }
>> >> + key = sort_entries[i]->key;
>> >> + dups = 0;
>> >> + }
>> >> +
>> >> + if (!total_dups)
>> >> + return total_dups;
>> >> +
>> >> + for (i = 0, j = 0; i < n_entries; i++) {
>> >> + if (!sort_entries[i]->dup) {
>> >> + sort_entries[j] = sort_entries[i];
>> >> + if (j++ != i)
>> >> + sort_entries[i] = NULL;
>> >> + } else {
>> >> + destroy_sort_entry(sort_entries[i]);
>> >> + sort_entries[i] = NULL;
>> >> + }
>> >> + }
>> >> +
>> >> + return total_dups;
>> >> +}
>> >> +
>> >> +static bool is_key(struct tracing_map *map, unsigned int field_idx)
>> >> +{
>> >> + unsigned int i;
>> >> +
>> >> + for (i = 0; i < map->n_keys; i++)
>> >> + if (map->key_idx[i] == field_idx)
>> >> + return true;
>> >> + return false;
>> >> +}
>> >> +
>> >> +static void sort_secondary(struct tracing_map *map,
>> >> + const struct tracing_map_sort_entry **entries,
>> >> + unsigned int n_entries,
>> >> + struct tracing_map_sort_key *primary_key,
>> >> + struct tracing_map_sort_key *secondary_key)
>> >> +{
>> >> + int (*primary_fn)(const struct tracing_map_sort_entry **,
>> >> + const struct tracing_map_sort_entry **);
>> >> + int (*secondary_fn)(const struct tracing_map_sort_entry **,
>> >> + const struct tracing_map_sort_entry **);
>> >> + unsigned i, start = 0, n_sub = 1;
>> >> +
>> >> + if (is_key(map, primary_key->field_idx))
>> >> + primary_fn = cmp_entries_key;
>> >> + else
>> >> + primary_fn = cmp_entries_sum;
>> >> +
>> >> + if (is_key(map, secondary_key->field_idx))
>> >> + secondary_fn = cmp_entries_key;
>> >> + else
>> >> + secondary_fn = cmp_entries_sum;
>> >> +
>> >> + for (i = 0; i < n_entries - 1; i++) {
>> >> + const struct tracing_map_sort_entry **a = &entries[i];
>> >> + const struct tracing_map_sort_entry **b = &entries[i + 1];
>> >> +
>> >> + if (primary_fn(a, b) == 0) {
>> >> + n_sub++;
>> >> + if (i < n_entries - 2)
>> >> + continue;
>> >> + }
>> >> +
>> >> + if (n_sub < 2) {
>> >> + start = i + 1;
>> >> + n_sub = 1;
>> >> + continue;
>> >> + }
>> >> +
>> >> + set_sort_key(map, secondary_key);
>> >> + sort(&entries[start], n_sub,
>> >> + sizeof(struct tracing_map_sort_entry *),
>> >> + (int (*)(const void *, const void *))secondary_fn, NULL);
>> >> + set_sort_key(map, primary_key);
>> >> +
>> >> + start = i + 1;
>> >> + n_sub = 1;
>> >> + }
>> >> +}
>> >> +
>> >> +/**
>> >> + * tracing_map_sort_entries - Sort the current set of tracing_map_elts in a map
>> >> + * @map: The tracing_map
>> >> + * @sort_keys: The sort keys to use for sorting, primary key first
>> >> + * @n_sort_keys: The number of sort keys in the sort_keys array
>> >> + * @sort_entries: outval: pointer to allocated and sorted array of entries
>> >> + *
>> >> + * tracing_map_sort_entries() sorts the current set of entries in the
>> >> + * map and returns the list of tracing_map_sort_entries containing
>> >> + * them to the client in the sort_entries param. The client can
>> >> + * access the struct tracing_map_elt element of interest directly as
>> >> + * the 'elt' field of a returned struct tracing_map_sort_entry object.
>> >> + *
>> >> + * Each sort key has only two fields: field_idx and descending.
>> >> + * 'field_idx' refers to the index of the field added via
>> >> + * tracing_map_add_sum_field() or tracing_map_add_key_field() when
>> >> + * the tracing_map was initialized. 'descending' is a flag that, if
>> >> + * set, reverses the sort order, which by default is ascending.
>> >> + *
>> >> + * The client should not hold on to the returned array but should use
>> >> + * it and call tracing_map_destroy_sort_entries() when done.
>> >> + *
>> >> + * Return: the number of sort_entries in the struct tracing_map_sort_entry
>> >> + * array, negative on error
>> >> + */
>> >> +int tracing_map_sort_entries(struct tracing_map *map,
>> >> + struct tracing_map_sort_key *sort_keys,
>> >> + unsigned int n_sort_keys,
>> >> + struct tracing_map_sort_entry ***sort_entries)
>> >> +{
>> >> + int (*cmp_entries_fn)(const struct tracing_map_sort_entry **,
>> >> + const struct tracing_map_sort_entry **);
>> >> + struct tracing_map_sort_entry *sort_entry, **entries;
>> >> + int i, n_entries, ret;
>> >> +
>> >> + entries = kcalloc(map->max_elts, sizeof(sort_entry), GFP_KERNEL);
>> >> + if (!entries)
>> >> + return -ENOMEM;
>> >> +
>> >> + for (i = 0, n_entries = 0; i < map->map_size; i++) {
>> >> + if (!map->map[i].key || !map->map[i].val)
>> >> + continue;
>> >> +
>> >> + entries[n_entries] = create_sort_entry(map->map[i].val->key,
>> >> + map->map[i].val);
>> >> + if (!entries[n_entries++]) {
>> >> + ret = -ENOMEM;
>> >> + goto free;
>> >> + }
>> >> + }
>> >> +
>> >> + if (n_entries == 0) {
>> >> + ret = 0;
>> >> + goto free;
>> >> + }
>> >> +
>> >> + if (n_entries == 1) {
>> >> + *sort_entries = entries;
>> >> + return 1;
>> >> + }
>> >> +
>> >> + ret = merge_dups(entries, n_entries, map->key_size);
>> >> + if (ret < 0)
>> >> + goto free;
>> >> + n_entries -= ret;
>> >> +
>> >> + if (is_key(map, sort_keys[0].field_idx))
>> >> + cmp_entries_fn = cmp_entries_key;
>> >> + else
>> >> + cmp_entries_fn = cmp_entries_sum;
>> >> +
>> >> + set_sort_key(map, &sort_keys[0]);
>> >> +
>> >> + sort(entries, n_entries, sizeof(struct tracing_map_sort_entry *),
>> >> + (int (*)(const void *, const void *))cmp_entries_fn, NULL);
>> >> +
>> >> + if (n_sort_keys > 1)
>> >> + sort_secondary(map,
>> >> + (const struct tracing_map_sort_entry **)entries,
>> >> + n_entries,
>> >> + &sort_keys[0],
>> >> + &sort_keys[1]);
>> >> +
>> >> + *sort_entries = entries;
>> >> +
>> >> + return n_entries;
>> >> + free:
>> >> + tracing_map_destroy_sort_entries(entries, n_entries);
>> >> +
>> >> + return ret;
>> >> +}
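
(A usage sketch, not part of the patch: assuming a map set up earlier with a
sum field whose returned index was saved in a hypothetical 'sum_idx', a
client might consume the sort API above like this.)

	struct tracing_map_sort_entry **entries;
	struct tracing_map_sort_key sort_key;
	int i, n;

	sort_key.field_idx = sum_idx;	/* hypothetical saved field index */
	sort_key.descending = true;	/* largest sums first */

	n = tracing_map_sort_entries(map, &sort_key, 1, &entries);
	if (n <= 0)		/* on 0 or error, entries were already freed */
		return n;

	for (i = 0; i < n; i++)
		pr_info("sum: %llu\n",
			tracing_map_read_sum(entries[i]->elt, sum_idx));

	tracing_map_destroy_sort_entries(entries, n);
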
>> >> diff --git a/kernel/trace/tracing_map.h b/kernel/trace/tracing_map.h
>> >> new file mode 100644
>> >> index 0000000..2e63c5c
>> >> --- /dev/null
>> >> +++ b/kernel/trace/tracing_map.h
>> >> @@ -0,0 +1,258 @@
>> >> +#ifndef __TRACING_MAP_H
>> >> +#define __TRACING_MAP_H
>> >> +
>> >> +#define TRACING_MAP_BITS_DEFAULT 11
>> >> +#define TRACING_MAP_BITS_MAX 17
>> >> +#define TRACING_MAP_BITS_MIN 7
>> >> +
>> >> +#define TRACING_MAP_FIELDS_MAX 4
>> >> +#define TRACING_MAP_KEYS_MAX 2
>> >> +
>> >> +#define TRACING_MAP_SORT_KEYS_MAX 2
>> >> +
>> >> +typedef int (*tracing_map_cmp_fn_t) (void *val_a, void *val_b);
>> >> +
>> >> +/*
>> >> + * This is an overview of the tracing_map data structures and how they
>> >> + * relate to the tracing_map API. The details of the algorithms
>> >> + * aren't discussed here - this is just a general overview of the data
>> >> + * structures and how they interact with the API.
>> >> + *
>> >> + * The central data structure of the tracing_map is an initially
>> >> + * zeroed array of struct tracing_map_entry (stored in the map field
>> >> + * of struct tracing_map). tracing_map_entry is a very simple data
>> >> + * structure containing only two fields: a 32-bit unsigned 'key'
>> >> + * variable and a pointer named 'val'. This array of struct
>> >> + * tracing_map_entry is essentially a hash table which will be
>> >> + * modified by a single function, tracing_map_insert(), but which can
>> >> + * be traversed and read by a user at any time (though the user does
>> >> + * this indirectly via an array of tracing_map_sort_entry - see the
>> >> + * explanation of that data structure in the discussion of the
>> >> + * sorting-related data structures below).
>> >> + *
>> >> + * The central function of the tracing_map API is
>> >> + * tracing_map_insert(). tracing_map_insert() hashes the
>> >> + * arbitrarily-sized key passed into it into a 32-bit unsigned key.
>> >> + * It then uses this key, truncated to the array size, as an index
>> >> + * into the array of tracing_map_entries. If the value of the 'key'
>> >> + * field of the tracing_map_entry found at that location is 0, then
>> >> + * that entry is considered to be free and can be claimed, by
>> >> + * replacing the 0 in the 'key' field of the tracing_map_entry with
>> >> + * the new 32-bit hashed key. Once claimed, that tracing_map_entry's
>> >> + * 'val' field is then used to store a unique element which will be
>> >> + * forever associated with that 32-bit hashed key in the
>> >> + * tracing_map_entry.
>> >> + *
>> >> + * That unique element now in the tracing_map_entry's 'val' field is
>> >> + * an instance of tracing_map_elt, where 'elt' in the latter part of
>> >> + * that variable name is short for 'element'. The purpose of a
>> >> + * tracing_map_elt is to hold values specific to the particular
>> >> + * 32-bit hashed key it's associated with. Things such as the unique
>> >> + * set of aggregated sums associated with the 32-bit hashed key, along
>> >> + * with a copy of the full key associated with the entry, which
>> >> + * was used to produce the 32-bit hashed key.
>> >> + *
>> >> + * When tracing_map_create() is called to create the tracing map, the
>> >> + * user specifies (indirectly via the map_bits param, the details are
>> >> + * unimportant for this discussion) the maximum number of elements
>> >> + * that the map can hold (stored in the max_elts field of struct
>> >> + * tracing_map). This is the maximum possible number of
>> >> + * tracing_map_entries in the tracing_map_entry array which can be
>> >> + * 'claimed' as described in the above discussion, and therefore is
>> >> + * also the maximum number of tracing_map_elts that can be associated
>> >> + * with the tracing_map_entry array in the tracing_map. Because of
>> >> + * the way the insertion algorithm works, the size of the allocated
>> >> + * tracing_map_entry array is always twice the maximum number of
>> >> + * elements (2 * max_elts). This value is stored in the map_size
>> >> + * field of struct tracing_map.
>> >> + *
>> >> + * Because tracing_map_insert() needs to work from any context,
>> >> + * including from within the memory allocation functions themselves,
>> >> + * both the tracing_map_entry array and a pool of max_elts
>> >> + * tracing_map_elts are pre-allocated before any call is made to
>> >> + * tracing_map_insert().
>> >> + *
>> >> + * The tracing_map_entry array is allocated as a single block by
>> >> + * tracing_map_create().
>> >> + *
>> >> + * Because the tracing_map_elts are much larger objects and can't
>> >> + * generally be allocated together as a single large array without
>> >> + * failure, they're allocated individually, by tracing_map_init().
>> >> + *
>> >> + * The pool of tracing_map_elts are allocated by tracing_map_init()
>> >> + * rather than by tracing_map_create() because at the time
>> >> + * tracing_map_create() is called, there isn't enough information to
>> >> + * create the tracing_map_elts. Specifically, the user first needs to
>> >> + * tell the tracing_map implementation how many fields the
>> >> + * tracing_map_elts contain, and which types of fields they are (key
>> >> + * or sum). The user does this via the tracing_map_add_sum_field()
>> >> + * and tracing_map_add_key_field() functions, following which the user
>> >> + * calls tracing_map_init() to finish up the tracing map setup. The
>> >> + * array holding the pointers which make up the pre-allocated pool of
>> >> + * tracing_map_elts is allocated as a single block and is stored in
>> >> + * the elts field of struct tracing_map.
>> >> + *
>> >> + * There is also a set of structures used for sorting that might
>> >> + * benefit from some minimal explanation.
>> >> + *
>> >> + * struct tracing_map_sort_key is used to drive the sort at any given
>> >> + * time. By 'any given time' we mean that a different
>> >> + * tracing_map_sort_key will be used at different times depending on
>> >> + * whether the sort currently being performed is a primary or a
>> >> + * secondary sort.
>> >> + *
>> >> + * The sort key is very simple, consisting of the field index of the
>> >> + * tracing_map_elt field to sort on (which the user saved when adding
>> >> + * the field), and whether the sort should be done in an ascending or
>> >> + * descending order.
>> >> + *
>> >> + * For the convenience of the sorting code, a tracing_map_sort_entry
>> >> + * is created for each tracing_map_elt, again individually allocated
>> >> + * to avoid failures that might be expected if allocated as a single
>> >> + * large array of struct tracing_map_sort_entry.
>> >> + * tracing_map_sort_entry instances are the objects expected by the
>> >> + * various internal sorting functions, and are also what the user
>> >> + * ultimately receives after calling tracing_map_sort_entries().
>> >> + * Because it doesn't make sense for users to access an unordered and
>> >> + * sparsely populated tracing_map directly, the
>> >> + * tracing_map_sort_entries() function is provided so that users can
>> >> + * retrieve a sorted list of all existing elements. In addition to
>> >> + * the associated tracing_map_elt 'elt' field contained within the
>> >> + * tracing_map_sort_entry, which is the object of interest to the
>> >> + * user, tracing_map_sort_entry objects contain a number of additional
>> >> + * fields which are used for caching and internal purposes and can
>> >> + * safely be ignored.
>> >> +*/
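
(To make the flow described above concrete, here is a minimal, hypothetical
client setup; the struct, field, and variable names are illustrative only
and not part of the patch.)

	struct my_key {
		int pid;
	};

	struct tracing_map *map;
	int sum_idx, key_idx, err;

	/* 1. Create the map itself (entry array only, no elements yet). */
	map = tracing_map_create(TRACING_MAP_BITS_DEFAULT,
				 sizeof(struct my_key), NULL, NULL);
	if (IS_ERR(map))
		return PTR_ERR(map);

	/* 2. Describe the fields every tracing_map_elt will carry. */
	sum_idx = tracing_map_add_sum_field(map);
	key_idx = tracing_map_add_key_field(map,
					    offsetof(struct my_key, pid),
					    tracing_map_cmp_num(sizeof(int), 1));

	/* 3. Fields are now known, so pre-allocate the elt pool. */
	err = tracing_map_init(map);
	if (err) {
		tracing_map_destroy(map);
		return err;
	}
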
>> >> +
>> >> +struct tracing_map_field {
>> >> + tracing_map_cmp_fn_t cmp_fn;
>> >> + union {
>> >> + atomic64_t sum;
>> >> + unsigned int offset;
>> >> + };
>> >> +};
>> >> +
>> >> +struct tracing_map_elt {
>> >> + struct tracing_map *map;
>> >> + struct tracing_map_field *fields;
>> >> + void *key;
>> >> + void *private_data;
>> >> +};
>> >> +
>> >> +struct tracing_map_entry {
>> >> + u32 key;
>> >> + struct tracing_map_elt *val;
>> >> +};
>> >> +
>> >> +struct tracing_map_sort_key {
>> >> + unsigned int field_idx;
>> >> + bool descending;
>> >> +};
>> >> +
>> >> +struct tracing_map_sort_entry {
>> >> + void *key;
>> >> + struct tracing_map_elt *elt;
>> >> + bool elt_copied;
>> >> + bool dup;
>> >> +};
>> >> +
>> >> +struct tracing_map {
>> >> + unsigned int key_size;
>> >> + unsigned int map_bits;
>> >> + unsigned int map_size;
>> >> + unsigned int max_elts;
>> >> + atomic_t next_elt;
>> >> + struct tracing_map_elt **elts;
>> >> + struct tracing_map_entry *map;
>> >> + struct tracing_map_ops *ops;
>> >> + void *private_data;
>> >> + struct tracing_map_field fields[TRACING_MAP_FIELDS_MAX];
>> >> + unsigned int n_fields;
>> >> + int key_idx[TRACING_MAP_KEYS_MAX];
>> >> + unsigned int n_keys;
>> >> + struct tracing_map_sort_key sort_key;
>> >> +};
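
(As a quick recap of how the sizing fields above relate, per the overview
comment and the tracing_map_create() code in tracing_map.c:)

	map->max_elts = 1 << map_bits;		/* user-requested capacity   */
	map->map_size = 1 << (map_bits + 1);	/* entry array: 2 * max_elts */
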
>> >> +
>> >> +/**
>> >> + * struct tracing_map_ops - callbacks for tracing_map
>> >> + *
>> >> + * The methods in this structure define callback functions for various
>> >> + * operations on a tracing_map or objects related to a tracing_map.
>> >> + *
>> >> + * For a detailed description of tracing_map_elt objects please see
>> >> + * the overview of tracing_map data structures at the beginning of
>> >> + * this file.
>> >> + *
>> >> + * All the methods below are optional.
>> >> + *
>> >> + * @elt_alloc: When a tracing_map_elt is allocated, this function, if
>> >> + * defined, will be called and gives clients the opportunity to
>> >> + * allocate additional data and attach it to the element
>> >> + * (tracing_map_elt->private_data is meant for that purpose).
>> >> + * Element allocation occurs before tracing begins, when the
>> >> + * tracing_map_init() call is made by client code.
>> >> + *
>> >> + * @elt_copy: At certain points in the lifetime of an element, it may
>> >> + * need to be copied. The copy should include a copy of the
>> >> + * client-allocated data, which can be copied into the 'to'
>> >> + * element from the 'from' element.
>> >> + *
>> >> + * @elt_free: When a tracing_map_elt is freed, this function is called
>> >> + * and allows client-allocated per-element data to be freed.
>> >> + *
>> >> + * @elt_clear: This callback allows per-element client-defined data to
>> >> + * be cleared, if applicable.
>> >> + *
>> >> + * @elt_init: This callback allows per-element client-defined data to
>> >> + * be initialized when used, i.e. when the element is actually
>> >> + * claimed by tracing_map_insert() in the context of the map
>> >> + * insertion.
>> >> + */
>> >> +struct tracing_map_ops {
>> >> + int (*elt_alloc)(struct tracing_map_elt *elt);
>> >> + void (*elt_copy)(struct tracing_map_elt *to,
>> >> + struct tracing_map_elt *from);
>> >> + void (*elt_free)(struct tracing_map_elt *elt);
>> >> + void (*elt_clear)(struct tracing_map_elt *elt);
>> >> + void (*elt_init)(struct tracing_map_elt *elt);
>> >> +};
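
(For illustration only: a client attaching its own per-element data via
private_data might wire up the optional callbacks roughly as below; all
'my_*' names are hypothetical.)

	struct my_elt_data {
		u64 max_latency;
	};

	/* Called at pool pre-allocation time, so GFP_KERNEL is safe. */
	static int my_elt_alloc(struct tracing_map_elt *elt)
	{
		elt->private_data = kzalloc(sizeof(struct my_elt_data),
					    GFP_KERNEL);
		return elt->private_data ? 0 : -ENOMEM;
	}

	/* Copy the client data contents from one element to another. */
	static void my_elt_copy(struct tracing_map_elt *to,
				struct tracing_map_elt *from)
	{
		memcpy(to->private_data, from->private_data,
		       sizeof(struct my_elt_data));
	}

	static void my_elt_free(struct tracing_map_elt *elt)
	{
		kfree(elt->private_data);
	}

	static struct tracing_map_ops my_ops = {
		.elt_alloc	= my_elt_alloc,
		.elt_copy	= my_elt_copy,
		.elt_free	= my_elt_free,
	};
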
>> >> +
>> >> +extern struct tracing_map *tracing_map_create(unsigned int map_bits,
>> >> + unsigned int key_size,
>> >> + struct tracing_map_ops *ops,
>> >> + void *private_data);
>> >> +extern int tracing_map_init(struct tracing_map *map);
>> >> +
>> >> +extern int tracing_map_add_sum_field(struct tracing_map *map);
>> >> +extern int tracing_map_add_key_field(struct tracing_map *map,
>> >> + unsigned int offset,
>> >> + tracing_map_cmp_fn_t cmp_fn);
>> >> +
>> >> +extern void tracing_map_destroy(struct tracing_map *map);
>> >> +extern void tracing_map_clear(struct tracing_map *map);
>> >> +
>> >> +extern struct tracing_map_elt *
>> >> +tracing_map_insert(struct tracing_map *map, void *key);
>> >> +
>> >> +extern tracing_map_cmp_fn_t tracing_map_cmp_num(int field_size,
>> >> + int field_is_signed);
>> >> +extern int tracing_map_cmp_string(void *val_a, void *val_b);
>> >> +extern int tracing_map_cmp_none(void *val_a, void *val_b);
>> >> +
>> >> +extern void tracing_map_update_sum(struct tracing_map_elt *elt,
>> >> + unsigned int i, u64 n);
>> >> +extern u64 tracing_map_read_sum(struct tracing_map_elt *elt, unsigned int i);
>> >> +extern void tracing_map_set_field_descr(struct tracing_map *map,
>> >> + unsigned int i,
>> >> + unsigned int key_offset,
>> >> + tracing_map_cmp_fn_t cmp_fn);
>> >> +extern int
>> >> +tracing_map_sort_entries(struct tracing_map *map,
>> >> + struct tracing_map_sort_key *sort_keys,
>> >> + unsigned int n_sort_keys,
>> >> + struct tracing_map_sort_entry ***sort_entries);
>> >> +
>> >> +extern void
>> >> +tracing_map_destroy_sort_entries(struct tracing_map_sort_entry **entries,
>> >> + unsigned int n_entries);
>> >> +#endif /* __TRACING_MAP_H */
>> >> --
>> >> 1.9.3
>> >>
>> >
>> > --
>> > Mathieu Desnoyers
>> > EfficiOS Inc.
>> > http://www.efficios.com

--
Mathieu Desnoyers
EfficiOS Inc.
http://www.efficios.com

2015-07-17 23:44:57

by Tom Zanussi

[permalink] [raw]
Subject: Re: [PATCH v9 07/22] tracing: Add lock-free tracing_map

On Fri, 2015-07-17 at 15:48 +0000, Mathieu Desnoyers wrote:
> ----- On Jul 16, 2015, at 9:35 PM, Tom Zanussi [email protected] wrote:
>
> > Hi Mathieu,
> >
> > On Thu, 2015-07-16 at 23:25 +0000, Mathieu Desnoyers wrote:
> >> * Tom Zanussi wrote:
> >> >> Add tracing_map, a special-purpose lock-free map for tracing.
> >> >>
> >> >> tracing_map is designed to aggregate or 'sum' one or more values
> >> >> associated with a specific object of type tracing_map_elt, which
> >> >> is associated by the map to a given key.
> >> >>
> >> >> It provides various hooks allowing per-tracer customization and is
> >> >> separated out into a separate file in order to allow it to be shared
> >> >> between multiple tracers, but isn't meant to be generally used outside
> >> >> of that context.
> >> >>
> >> >> The tracing_map implementation was inspired by lock-free map
> >> >> algorithms originated by Dr. Cliff Click:
> >> >>
> >> >> http://www.azulsystems.com/blog/cliff/2007-03-26-non-blocking-hashtable
> >> >> http://www.azulsystems.com/events/javaone_2007/2007_LockFreeHash.pdf
> >>
> >> Hi Tom,
> >>
> >> First question: what is the rationale for implementing another
> >> hash table from scratch here ? What is missing in the pre-existing
> >> hash table implementations ?
> >>
> >
> > None of the other hash tables allow for lock-free insertion (and I
> > didn't see an easy way to add it).
>
> This is one of the nice things about the Userspace RCU lock-free hash
> table we've done a few years ago: it provides lock-free add, add_unique,
> removal, and replace, as well as RCU wait-free lookups and traversals.
> Resize can be done concurrently by a worker thread. I ported it to the
> Linux kernel for Julien's work on latency tracker. You can find the
> implementation here: https://github.com/jdesfossez/latency_tracker
> (see rculfhash*)
> It is a simplified version that has the "resize" feature removed for
> simplicity sake. The "insert and lookup" feature you need is called
> "add_unique" in our API: it behaves both as a lookup and as an atomic
> insert if the key is not found.
>

Interesting, but it's just as much not upstream as mine is. ;-)

From the perspective of the hist triggers, it doesn't matter what hash
table implementation it uses as long as whatever it is supports
insertion in any context. In fact the current tracing_map
implementation is already the second completely different implementation
it's plugged into (see v2 of this patchset for the first). If yours is
better and going upstream, I'd be happy to make it the third and forget
about mine.

Tom

> Thanks,
>
> Mathieu
>
> >
> >> Moreover, you might want to handle the case where jhash() returns
> >> 0. AFAIU, there is a race on "insert" in this scenario.
> >>
> >
> > You're right, in that case you'd accidentally overwrite an already
> > claimed slot. Thanks for pointing that out.
> >
> > Tom
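
(One minimal way to close the jhash() == 0 hole discussed above, sketched
here only as an illustration, is to keep 0 reserved as the 'unclaimed slot'
marker by remapping a zero hash before probing:)

	key_hash = jhash(key, map->key_size, 0);
	if (key_hash == 0)
		key_hash = 1;	/* 0 must keep meaning 'unclaimed slot' */
	idx = key_hash >> (32 - (map->map_bits + 1));
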
> >
> >> Thanks,
> >>
> >> Mathieu
> >>
> >> >>
> >> >> Signed-off-by: Tom Zanussi <[email protected]>
> >> >> ---
> >> >> kernel/trace/Makefile | 1 +
> >> >> kernel/trace/tracing_map.c | 935 +++++++++++++++++++++++++++++++++++++++++++++
> >> >> kernel/trace/tracing_map.h | 258 +++++++++++++
> >> >> 3 files changed, 1194 insertions(+)
> >> >> create mode 100644 kernel/trace/tracing_map.c
> >> >> create mode 100644 kernel/trace/tracing_map.h
> >> >>
> >> >> diff --git a/kernel/trace/Makefile b/kernel/trace/Makefile
> >> >> index 9b1044e..3b26cfb 100644
> >> >> --- a/kernel/trace/Makefile
> >> >> +++ b/kernel/trace/Makefile
> >> >> @@ -31,6 +31,7 @@ obj-$(CONFIG_TRACING) += trace_output.o
> >> >> obj-$(CONFIG_TRACING) += trace_seq.o
> >> >> obj-$(CONFIG_TRACING) += trace_stat.o
> >> >> obj-$(CONFIG_TRACING) += trace_printk.o
> >> >> +obj-$(CONFIG_TRACING) += tracing_map.o
> >> >> obj-$(CONFIG_CONTEXT_SWITCH_TRACER) += trace_sched_switch.o
> >> >> obj-$(CONFIG_FUNCTION_TRACER) += trace_functions.o
> >> >> obj-$(CONFIG_IRQSOFF_TRACER) += trace_irqsoff.o
> >> >> diff --git a/kernel/trace/tracing_map.c b/kernel/trace/tracing_map.c
> >> >> new file mode 100644
> >> >> index 0000000..a505025
> >> >> --- /dev/null
> >> >> +++ b/kernel/trace/tracing_map.c
> >> >> @@ -0,0 +1,935 @@
> >> >> +/*
> >> >> + * tracing_map - lock-free map for tracing
> >> >> + *
> >> >> + * This program is free software; you can redistribute it and/or modify
> >> >> + * it under the terms of the GNU General Public License as published by
> >> >> + * the Free Software Foundation; either version 2 of the License, or
> >> >> + * (at your option) any later version.
> >> >> + *
> >> >> + * This program is distributed in the hope that it will be useful,
> >> >> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> >> >> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
> >> >> + * GNU General Public License for more details.
> >> >> + *
> >> >> + * Copyright (C) 2015 Tom Zanussi <[email protected]>
> >> >> + *
> >> >> + * tracing_map implementation inspired by lock-free map algorithms
> >> >> + * originated by Dr. Cliff Click:
> >> >> + *
> >> >> + * http://www.azulsystems.com/blog/cliff/2007-03-26-non-blocking-hashtable
> >> >> + * http://www.azulsystems.com/events/javaone_2007/2007_LockFreeHash.pdf
> >> >> + */
> >> >> +
> >> >> +#include <linux/slab.h>
> >> >> +#include <linux/jhash.h>
> >> >> +#include <linux/sort.h>
> >> >> +
> >> >> +#include "tracing_map.h"
> >> >> +#include "trace.h"
> >> >> +
> >> >> +/*
> >> >> + * NOTE: For a detailed description of the data structures used by
> >> >> + * these functions (such as tracing_map_elt) please see the overview
> >> >> + * of tracing_map data structures at the beginning of tracing_map.h.
> >> >> + */
> >> >> +
> >> >> +/**
> >> >> + * tracing_map_update_sum - Add a value to a tracing_map_elt's sum field
> >> >> + * @elt: The tracing_map_elt
> >> >> + * @i: The index of the given sum associated with the tracing_map_elt
> >> >> + * @n: The value to add to the sum
> >> >> + *
> >> >> + * Add n to sum i associated with the specified tracing_map_elt
> >> >> + * instance. The index i is the index returned by the call to
> >> >> + * tracing_map_add_sum_field() when the tracing map was set up.
> >> >> + */
> >> >> +void tracing_map_update_sum(struct tracing_map_elt *elt, unsigned int i, u64 n)
> >> >> +{
> >> >> + atomic64_add(n, &elt->fields[i].sum);
> >> >> +}
> >> >> +
> >> >> +/**
> >> >> + * tracing_map_read_sum - Return the value of a tracing_map_elt's sum field
> >> >> + * @elt: The tracing_map_elt
> >> >> + * @i: The index of the given sum associated with the tracing_map_elt
> >> >> + *
> >> >> + * Retrieve the value of the sum i associated with the specified
> >> >> + * tracing_map_elt instance. The index i is the index returned by the
> >> >> + * call to tracing_map_add_sum_field() when the tracing map was set
> >> >> + * up.
> >> >> + *
> >> >> + * Return: The sum associated with field i for elt.
> >> >> + */
> >> >> +u64 tracing_map_read_sum(struct tracing_map_elt *elt, unsigned int i)
> >> >> +{
> >> >> + return (u64)atomic64_read(&elt->fields[i].sum);
> >> >> +}
> >> >> +
> >> >> +int tracing_map_cmp_string(void *val_a, void *val_b)
> >> >> +{
> >> >> + char *a = val_a;
> >> >> + char *b = val_b;
> >> >> +
> >> >> + return strcmp(a, b);
> >> >> +}
> >> >> +
> >> >> +int tracing_map_cmp_none(void *val_a, void *val_b)
> >> >> +{
> >> >> + return 0;
> >> >> +}
> >> >> +
> >> >> +static int tracing_map_cmp_atomic64(void *val_a, void *val_b)
> >> >> +{
> >> >> + u64 a = atomic64_read((atomic64_t *)val_a);
> >> >> + u64 b = atomic64_read((atomic64_t *)val_b);
> >> >> +
> >> >> + return (a > b) ? 1 : ((a < b) ? -1 : 0);
> >> >> +}
> >> >> +
> >> >> +#define DEFINE_TRACING_MAP_CMP_FN(type) \
> >> >> +static int tracing_map_cmp_##type(void *val_a, void *val_b) \
> >> >> +{ \
> >> >> + type a = *(type *)val_a; \
> >> >> + type b = *(type *)val_b; \
> >> >> + \
> >> >> + return (a > b) ? 1 : ((a < b) ? -1 : 0); \
> >> >> +}
> >> >> +
> >> >> +DEFINE_TRACING_MAP_CMP_FN(s64);
> >> >> +DEFINE_TRACING_MAP_CMP_FN(u64);
> >> >> +DEFINE_TRACING_MAP_CMP_FN(s32);
> >> >> +DEFINE_TRACING_MAP_CMP_FN(u32);
> >> >> +DEFINE_TRACING_MAP_CMP_FN(s16);
> >> >> +DEFINE_TRACING_MAP_CMP_FN(u16);
> >> >> +DEFINE_TRACING_MAP_CMP_FN(s8);
> >> >> +DEFINE_TRACING_MAP_CMP_FN(u8);
> >> >> +
> >> >> +tracing_map_cmp_fn_t tracing_map_cmp_num(int field_size,
> >> >> + int field_is_signed)
> >> >> +{
> >> >> + tracing_map_cmp_fn_t fn = tracing_map_cmp_none;
> >> >> +
> >> >> + switch (field_size) {
> >> >> + case 8:
> >> >> + if (field_is_signed)
> >> >> + fn = tracing_map_cmp_s64;
> >> >> + else
> >> >> + fn = tracing_map_cmp_u64;
> >> >> + break;
> >> >> + case 4:
> >> >> + if (field_is_signed)
> >> >> + fn = tracing_map_cmp_s32;
> >> >> + else
> >> >> + fn = tracing_map_cmp_u32;
> >> >> + break;
> >> >> + case 2:
> >> >> + if (field_is_signed)
> >> >> + fn = tracing_map_cmp_s16;
> >> >> + else
> >> >> + fn = tracing_map_cmp_u16;
> >> >> + break;
> >> >> + case 1:
> >> >> + if (field_is_signed)
> >> >> + fn = tracing_map_cmp_s8;
> >> >> + else
> >> >> + fn = tracing_map_cmp_u8;
> >> >> + break;
> >> >> + }
> >> >> +
> >> >> + return fn;
> >> >> +}
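
(For example, a client with a 4-byte signed field would pick up the matching
comparator like this; the function returned is tracing_map_cmp_s32.)

	tracing_map_cmp_fn_t cmp_fn;

	cmp_fn = tracing_map_cmp_num(4, 1);	/* field_size = 4, signed */
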
> >> >> +
> >> >> +static int tracing_map_add_field(struct tracing_map *map,
> >> >> + tracing_map_cmp_fn_t cmp_fn)
> >> >> +{
> >> >> + int ret = -EINVAL;
> >> >> +
> >> >> + if (map->n_fields < TRACING_MAP_FIELDS_MAX) {
> >> >> + ret = map->n_fields;
> >> >> + map->fields[map->n_fields++].cmp_fn = cmp_fn;
> >> >> + }
> >> >> +
> >> >> + return ret;
> >> >> +}
> >> >> +
> >> >> +/**
> >> >> + * tracing_map_add_sum_field - Add a field describing a tracing_map sum
> >> >> + * @map: The tracing_map
> >> >> + *
> >> >> + * Add a sum field to the map and return the index identifying it in
> >> >> + * the map and associated tracing_map_elts. This is the index used
> >> >> + * for instance to update a sum for a particular tracing_map_elt using
> >> >> + * tracing_map_update_sum() or reading it via tracing_map_read_sum().
> >> >> + *
> >> >> + * Return: The index identifying the field in the map and associated
> >> >> + * tracing_map_elts.
> >> >> + */
> >> >> +int tracing_map_add_sum_field(struct tracing_map *map)
> >> >> +{
> >> >> + return tracing_map_add_field(map, tracing_map_cmp_atomic64);
> >> >> +}
> >> >> +
> >> >> +/**
> >> >> + * tracing_map_add_key_field - Add a field describing a tracing_map key
> >> >> + * @map: The tracing_map
> >> >> + * @offset: The offset within the key
> >> >> + * @cmp_fn: The comparison function that will be used to sort on the key
> >> >> + *
> >> >> + * Let the map know there is a key, and that cmp_fn should be used
> >> >> + * to compare it if it's used as a sort key.
> >> >> + *
> >> >> + * A key can be a subset of a compound key; for that purpose, the
> >> >> + * offset param is used to describe where within the compound key
> >> >> + * the key referenced by this key field resides.
> >> >> + *
> >> >> + * Return: The index identifying the field in the map and associated
> >> >> + * tracing_map_elts.
> >> >> + */
> >> >> +int tracing_map_add_key_field(struct tracing_map *map,
> >> >> + unsigned int offset,
> >> >> + tracing_map_cmp_fn_t cmp_fn)
> >> >> +
> >> >> +{
> >> >> + int idx = tracing_map_add_field(map, cmp_fn);
> >> >> +
> >> >> + if (idx < 0)
> >> >> + return idx;
> >> >> +
> >> >> + map->fields[idx].offset = offset;
> >> >> +
> >> >> + map->key_idx[map->n_keys++] = idx;
> >> >> +
> >> >> + return idx;
> >> >> +}
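
(A sketch of the compound-key case described above; the struct and its
fields are made up for illustration. Each constituent key gets its own
field, distinguished by its offset within the compound key:)

	struct compound_key {
		int	pid;
		char	comm[16];
	};

	tracing_map_add_key_field(map, offsetof(struct compound_key, pid),
				  tracing_map_cmp_num(sizeof(int), 1));
	tracing_map_add_key_field(map, offsetof(struct compound_key, comm),
				  tracing_map_cmp_string);
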
> >> >> +
> >> >> +static void tracing_map_elt_clear(struct tracing_map_elt *elt)
> >> >> +{
> >> >> + unsigned i;
> >> >> +
> >> >> + for (i = 0; i < elt->map->n_fields; i++)
> >> >> + if (elt->fields[i].cmp_fn == tracing_map_cmp_atomic64)
> >> >> + atomic64_set(&elt->fields[i].sum, 0);
> >> >> +
> >> >> + if (elt->map->ops && elt->map->ops->elt_clear)
> >> >> + elt->map->ops->elt_clear(elt);
> >> >> +}
> >> >> +
> >> >> +static void tracing_map_elt_init_fields(struct tracing_map_elt *elt)
> >> >> +{
> >> >> + unsigned int i;
> >> >> +
> >> >> + tracing_map_elt_clear(elt);
> >> >> +
> >> >> + for (i = 0; i < elt->map->n_fields; i++) {
> >> >> + elt->fields[i].cmp_fn = elt->map->fields[i].cmp_fn;
> >> >> +
> >> >> + if (elt->fields[i].cmp_fn != tracing_map_cmp_atomic64)
> >> >> + elt->fields[i].offset = elt->map->fields[i].offset;
> >> >> + }
> >> >> +}
> >> >> +
> >> >> +static void tracing_map_elt_free(struct tracing_map_elt *elt)
> >> >> +{
> >> >> + if (!elt)
> >> >> + return;
> >> >> +
> >> >> + if (elt->map->ops && elt->map->ops->elt_free)
> >> >> + elt->map->ops->elt_free(elt);
> >> >> + kfree(elt->fields);
> >> >> + kfree(elt->key);
> >> >> + kfree(elt);
> >> >> +}
> >> >> +
> >> >> +static struct tracing_map_elt *tracing_map_elt_alloc(struct tracing_map *map)
> >> >> +{
> >> >> + struct tracing_map_elt *elt;
> >> >> + int err = 0;
> >> >> +
> >> >> + elt = kzalloc(sizeof(*elt), GFP_KERNEL);
> >> >> + if (!elt)
> >> >> + return ERR_PTR(-ENOMEM);
> >> >> +
> >> >> + elt->map = map;
> >> >> +
> >> >> + elt->key = kzalloc(map->key_size, GFP_KERNEL);
> >> >> + if (!elt->key) {
> >> >> + err = -ENOMEM;
> >> >> + goto free;
> >> >> + }
> >> >> +
> >> >> + elt->fields = kcalloc(map->n_fields, sizeof(*elt->fields), GFP_KERNEL);
> >> >> + if (!elt->fields) {
> >> >> + err = -ENOMEM;
> >> >> + goto free;
> >> >> + }
> >> >> +
> >> >> + tracing_map_elt_init_fields(elt);
> >> >> +
> >> >> + if (map->ops && map->ops->elt_alloc) {
> >> >> + err = map->ops->elt_alloc(elt);
> >> >> + if (err)
> >> >> + goto free;
> >> >> + }
> >> >> + return elt;
> >> >> + free:
> >> >> + tracing_map_elt_free(elt);
> >> >> +
> >> >> + return ERR_PTR(err);
> >> >> +}
> >> >> +
> >> >> +static struct tracing_map_elt *get_free_elt(struct tracing_map *map)
> >> >> +{
> >> >> + struct tracing_map_elt *elt = NULL;
> >> >> + int idx;
> >> >> +
> >> >> + idx = atomic_inc_return(&map->next_elt);
> >> >> + if (idx < map->max_elts) {
> >> >> + elt = map->elts[idx];
> >> >> + if (map->ops && map->ops->elt_init)
> >> >> + map->ops->elt_init(elt);
> >> >> + }
> >> >> +
> >> >> + return elt;
> >> >> +}
> >> >> +
> >> >> +static void tracing_map_free_elts(struct tracing_map *map)
> >> >> +{
> >> >> + unsigned int i;
> >> >> +
> >> >> + if (!map->elts)
> >> >> + return;
> >> >> +
> >> >> + for (i = 0; i < map->max_elts; i++)
> >> >> + tracing_map_elt_free(map->elts[i]);
> >> >> +
> >> >> + kfree(map->elts);
> >> >> +}
> >> >> +
> >> >> +static int tracing_map_alloc_elts(struct tracing_map *map)
> >> >> +{
> >> >> + unsigned int i;
> >> >> +
> >> >> + map->elts = kcalloc(map->max_elts, sizeof(struct tracing_map_elt *),
> >> >> + GFP_KERNEL);
> >> >> + if (!map->elts)
> >> >> + return -ENOMEM;
> >> >> +
> >> >> + for (i = 0; i < map->max_elts; i++) {
> >> >> + map->elts[i] = tracing_map_elt_alloc(map);
> >> >> + if (!map->elts[i]) {
> >> >> + tracing_map_free_elts(map);
> >> >> +
> >> >> + return -ENOMEM;
> >> >> + }
> >> >> + }
> >> >> +
> >> >> + return 0;
> >> >> +}
> >> >> +
> >> >> +static inline bool keys_match(void *key, void *test_key, unsigned key_size)
> >> >> +{
> >> >> + bool match = true;
> >> >> +
> >> >> + if (memcmp(key, test_key, key_size))
> >> >> + match = false;
> >> >> +
> >> >> + return match;
> >> >> +}
> >> >> +
> >> >> +/**
> >> >> + * tracing_map_insert - Insert key and/or retrieve val from a tracing_map
> >> >> + * @map: The tracing_map to insert into
> >> >> + * @key: The key to insert
> >> >> + *
> >> >> + * Inserts a key into a tracing_map and creates and returns a new
> >> >> + * tracing_map_elt for it, or if the key has already been inserted by
> >> >> + * a previous call, returns the tracing_map_elt already associated
> >> >> + * with it. When the map was created, the number of elements to be
> >> >> + * allocated for the map was specified (internally maintained as
> >> >> + * 'max_elts' in struct tracing_map), and that number of
> >> >> + * tracing_map_elts was created by tracing_map_init(). This is the
> >> >> + * pre-allocated pool of tracing_map_elts that tracing_map_insert()
> >> >> + * will allocate from when adding new keys. Once that pool is
> >> >> + * exhausted, tracing_map_insert() is useless and will return NULL to
> >> >> + * signal that state.
> >> >> + *
> >> >> + * This is a lock-free tracing map insertion function implementing a
> >> >> + * modified form of Cliff Click's basic insertion algorithm. It
> >> >> + * requires the table size be a power of two. To prevent any
> >> >> + * possibility of an infinite loop we always make the internal table
> >> >> + * size double the size of the requested table size (max_elts * 2).
> >> >> + * Likewise, we never reuse a slot or resize or delete elements - when
> >> >> + * we've reached max_elts entries, we simply return NULL once we've
> >> >> + * run out of entries. Readers can at any point in time traverse the
> >> >> + * tracing map and safely access the key/val pairs.
> >> >> + *
> >> >> + * Return: the tracing_map_elt pointer val associated with the key.
> >> >> + * If this was a newly inserted key, the val will be a newly allocated
> >> >> + * and associated tracing_map_elt pointer val. If the key wasn't
> >> >> + * found and the pool of tracing_map_elts has been exhausted, NULL is
> >> >> + * returned and no further insertions will succeed.
> >> >> + */
> >> >> +struct tracing_map_elt *tracing_map_insert(struct tracing_map *map, void *key)
> >> >> +{
> >> >> + u32 idx, key_hash, test_key;
> >> >> +
> >> >> + key_hash = jhash(key, map->key_size, 0);
> >> >> + idx = key_hash >> (32 - (map->map_bits + 1));
> >> >> +
> >> >> + while (1) {
> >> >> + idx &= (map->map_size - 1);
> >> >> + test_key = map->map[idx].key;
> >> >> +
> >> >> + if (test_key && test_key == key_hash && map->map[idx].val &&
> >> >> + keys_match(key, map->map[idx].val->key, map->key_size))
> >> >> + return map->map[idx].val;
> >> >> +
> >> >> + if (!test_key && !cmpxchg(&map->map[idx].key, 0, key_hash)) {
> >> >> + struct tracing_map_elt *elt;
> >> >> +
> >> >> + elt = get_free_elt(map);
> >> >> + if (!elt)
> >> >> + break;
> >> >> + memcpy(elt->key, key, map->key_size);
> >> >> + map->map[idx].val = elt;
> >> >> +
> >> >> + return map->map[idx].val;
> >> >> + }
> >> >> + idx++;
> >> >> + }
> >> >> +
> >> >> + return NULL;
> >> >> +}
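
(Tying the insert path together, a hypothetical event handler, reusing the
illustrative 'map', 'sum_idx' and 'compound_key' names from the earlier
sketches, would look roughly like:)

	static void my_event_handler(struct compound_key *key, u64 delta)
	{
		struct tracing_map_elt *elt;

		elt = tracing_map_insert(map, key);
		if (!elt)
			return;	/* elt pool exhausted; drop this update */

		tracing_map_update_sum(elt, sum_idx, delta);
	}
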
> >> >> +
> >> >> +/**
> >> >> + * tracing_map_destroy - Destroy a tracing_map
> >> >> + * @map: The tracing_map to destroy
> >> >> + *
> >> >> + * Frees a tracing_map along with its associated array of
> >> >> + * tracing_map_elts.
> >> >> + *
> >> >> + * Callers should make sure there are no readers or writers actively
> >> >> + * reading or inserting into the map before calling this.
> >> >> + */
> >> >> +void tracing_map_destroy(struct tracing_map *map)
> >> >> +{
> >> >> + if (!map)
> >> >> + return;
> >> >> +
> >> >> + tracing_map_free_elts(map);
> >> >> +
> >> >> + kfree(map->map);
> >> >> + kfree(map);
> >> >> +}
> >> >> +
> >> >> +/**
> >> >> + * tracing_map_clear - Clear a tracing_map
> >> >> + * @map: The tracing_map to clear
> >> >> + *
> >> >> + * Resets the tracing map to a cleared or initial state. The
> >> >> + * tracing_map_elts are all cleared, and the array of struct
> >> >> + * tracing_map_entry is reset to an initialized state.
> >> >> + *
> >> >> + * Callers should make sure there are no writers actively inserting
> >> >> + * into the map before calling this.
> >> >> + */
> >> >> +void tracing_map_clear(struct tracing_map *map)
> >> >> +{
> >> >> + unsigned int i, size;
> >> >> +
> >> >> + atomic_set(&map->next_elt, -1);
> >> >> +
> >> >> + size = map->map_size * sizeof(struct tracing_map_entry);
> >> >> + memset(map->map, 0, size);
> >> >> +
> >> >> + for (i = 0; i < map->max_elts; i++)
> >> >> + tracing_map_elt_clear(map->elts[i]);
> >> >> +}
> >> >> +
> >> >> +static void set_sort_key(struct tracing_map *map,
> >> >> + struct tracing_map_sort_key *sort_key)
> >> >> +{
> >> >> + map->sort_key = *sort_key;
> >> >> +}
> >> >> +
> >> >> +/**
> >> >> + * tracing_map_create - Create a lock-free map and element pool
> >> >> + * @map_bits: The size of the map (2 ** map_bits)
> >> >> + * @key_size: The size of the key for the map in bytes
> >> >> + * @ops: Optional client-defined tracing_map_ops instance
> >> >> + * @private_data: Client data associated with the map
> >> >> + *
> >> >> + * Creates and sets up a map to contain 2 ** map_bits number of
> >> >> + * elements (internally maintained as 'max_elts' in struct
> >> >> + * tracing_map). Before using, map fields should be added to the map
> >> >> + * with tracing_map_add_sum_field() and tracing_map_add_key_field().
> >> >> + * tracing_map_init() should then be called to allocate the array of
> >> >> + * tracing_map_elts, in order to avoid allocating anything in the map
> >> >> + * insertion path. The user-specified map size reflects the maximum
> >> >> + * number of elements that can be contained in the table requested by
> >> >> + * the user - internally we double that in order to keep the table
> >> >> + * sparse and keep collisions manageable.
> >> >> + *
> >> >> + * A tracing_map is a special-purpose map designed to aggregate or
> >> >> + * 'sum' one or more values associated with a specific object of type
> >> >> + * tracing_map_elt, which is attached by the map to a given key.
> >> >> + *
> >> >> + * tracing_map_create() sets up the map itself, and provides
> >> >> + * operations for inserting tracing_map_elts, but doesn't allocate the
> >> >> + * tracing_map_elts themselves, or provide a means for describing the
> >> >> + * keys or sums associated with the tracing_map_elts. All
> >> >> + * tracing_map_elts for a given map have the same set of sums and
> >> >> + * keys, which are defined by the client using the functions
> >> >> + * tracing_map_add_key_field() and tracing_map_add_sum_field(). Once
> >> >> + * the fields are defined, the pool of elements allocated for the map
> >> >> + * can be created, which occurs when the client code calls
> >> >> + * tracing_map_init().
> >> >> + *
> >> >> + * When tracing_map_init() returns, tracing_map_elt elements can be
> >> >> + * inserted into the map using tracing_map_insert(). When called,
> >> >> + * tracing_map_insert() grabs a free tracing_map_elt from the pool, or
> >> >> + * finds an existing match in the map and in either case returns it.
> >> >> + * The client can then use tracing_map_update_sum() and
> >> >> + * tracing_map_read_sum() to update or read a given sum field for the
> >> >> + * tracing_map_elt.
> >> >> + *
> >> >> + * The client can at any point retrieve and traverse the current set
> >> >> + * of inserted tracing_map_elts in a tracing_map, via
> >> >> + * tracing_map_sort_entries(). Sorting can be done on any field,
> >> >> + * including keys.
> >> >> + *
> >> >> + * See tracing_map.h for a description of tracing_map_ops.
> >> >> + *
> >> >> + * Return: the tracing_map pointer if successful, ERR_PTR if not.
> >> >> + */
> >> >> +struct tracing_map *tracing_map_create(unsigned int map_bits,
> >> >> + unsigned int key_size,
> >> >> + struct tracing_map_ops *ops,
> >> >> + void *private_data)
> >> >> +{
> >> >> + struct tracing_map *map;
> >> >> + unsigned int i;
> >> >> +
> >> >> + if (map_bits < TRACING_MAP_BITS_MIN ||
> >> >> + map_bits > TRACING_MAP_BITS_MAX)
> >> >> + return ERR_PTR(-EINVAL);
> >> >> +
> >> >> + map = kzalloc(sizeof(*map), GFP_KERNEL);
> >> >> + if (!map)
> >> >> + return ERR_PTR(-ENOMEM);
> >> >> +
> >> >> + map->map_bits = map_bits;
> >> >> + map->max_elts = (1 << map_bits);
> >> >> + atomic_set(&map->next_elt, -1);
> >> >> +
> >> >> + map->map_size = (1 << (map_bits + 1));
> >> >> + map->ops = ops;
> >> >> +
> >> >> + map->private_data = private_data;
> >> >> +
> >> >> + map->map = kcalloc(map->map_size, sizeof(struct tracing_map_entry),
> >> >> + GFP_KERNEL);
> >> >> + if (!map->map)
> >> >> + goto free;
> >> >> +
> >> >> + map->key_size = key_size;
> >> >> + for (i = 0; i < TRACING_MAP_KEYS_MAX; i++)
> >> >> + map->key_idx[i] = -1;
> >> >> + out:
> >> >> + return map;
> >> >> + free:
> >> >> + tracing_map_destroy(map);
> >> >> + map = ERR_PTR(-ENOMEM);
> >> >> +
> >> >> + goto out;
> >> >> +}
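
(A sketch tying this to the hypothetical my_ops example earlier: note the
ERR_PTR()-style failure return, so callers check with the usual helpers.)

	map = tracing_map_create(TRACING_MAP_BITS_DEFAULT,
				 sizeof(struct compound_key), &my_ops, NULL);
	if (IS_ERR(map))
		return PTR_ERR(map);
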
> >> >> +
> >> >> +/**
> >> >> + * tracing_map_init - Allocate and clear a map's tracing_map_elts
> >> >> + * @map: The tracing_map to initialize
> >> >> + *
> >> >> + * Allocates and clears a pool of tracing_map_elts equal to the
> >> >> + * user-specified size of 2 ** map_bits (internally maintained as
> >> >> + * 'max_elts' in struct tracing_map). Before using, the map fields
> >> >> + * should be added to the map with tracing_map_add_sum_field() and
> >> >> + * tracing_map_add_key_field(). tracing_map_init() should then be
> >> >> + * called to allocate the array of tracing_map_elts, in order to avoid
> >> >> + * allocating anything in the map insertion path. The user-specified
> >> >> + * map size reflects the max number of elements requested by the user
> >> >> + * - internally we double that in order to keep the table sparse and
> >> >> + * keep collisions manageable.
> >> >> + *
> >> >> + * See tracing_map.h for a description of tracing_map_ops.
> >> >> + *
> >> >> + * Return: 0 if successful, a negative error value if not.
> >> >> + */
> >> >> +int tracing_map_init(struct tracing_map *map)
> >> >> +{
> >> >> + int err;
> >> >> +
> >> >> + if (map->n_fields < 2)
> >> >> + return -EINVAL; /* need at least 1 key and 1 val */
> >> >> +
> >> >> + err = tracing_map_alloc_elts(map);
> >> >> + if (err)
> >> >> + return err;
> >> >> +
> >> >> + tracing_map_clear(map);
> >> >> +
> >> >> + return err;
> >> >> +}
> >> >> +
> >> >> +static int cmp_entries_dup(const struct tracing_map_sort_entry **a,
> >> >> + const struct tracing_map_sort_entry **b)
> >> >> +{
> >> >> + int ret = 0;
> >> >> +
> >> >> + if (memcmp((*a)->key, (*b)->key, (*a)->elt->map->key_size))
> >> >> + ret = 1;
> >> >> +
> >> >> + return ret;
> >> >> +}
> >> >> +
> >> >> +static int cmp_entries_sum(const struct tracing_map_sort_entry **a,
> >> >> + const struct tracing_map_sort_entry **b)
> >> >> +{
> >> >> + const struct tracing_map_elt *elt_a, *elt_b;
> >> >> + struct tracing_map_sort_key *sort_key;
> >> >> + struct tracing_map_field *field;
> >> >> + tracing_map_cmp_fn_t cmp_fn;
> >> >> + void *val_a, *val_b;
> >> >> + int ret = 0;
> >> >> +
> >> >> + elt_a = (*a)->elt;
> >> >> + elt_b = (*b)->elt;
> >> >> +
> >> >> + sort_key = &elt_a->map->sort_key;
> >> >> +
> >> >> + field = &elt_a->fields[sort_key->field_idx];
> >> >> + cmp_fn = field->cmp_fn;
> >> >> +
> >> >> + val_a = &elt_a->fields[sort_key->field_idx].sum;
> >> >> + val_b = &elt_b->fields[sort_key->field_idx].sum;
> >> >> +
> >> >> + ret = cmp_fn(val_a, val_b);
> >> >> + if (sort_key->descending)
> >> >> + ret = -ret;
> >> >> +
> >> >> + return ret;
> >> >> +}
> >> >> +
> >> >> +static int cmp_entries_key(const struct tracing_map_sort_entry **a,
> >> >> + const struct tracing_map_sort_entry **b)
> >> >> +{
> >> >> + const struct tracing_map_elt *elt_a, *elt_b;
> >> >> + struct tracing_map_sort_key *sort_key;
> >> >> + struct tracing_map_field *field;
> >> >> + tracing_map_cmp_fn_t cmp_fn;
> >> >> + void *val_a, *val_b;
> >> >> + int ret = 0;
> >> >> +
> >> >> + elt_a = (*a)->elt;
> >> >> + elt_b = (*b)->elt;
> >> >> +
> >> >> + sort_key = &elt_a->map->sort_key;
> >> >> +
> >> >> + field = &elt_a->fields[sort_key->field_idx];
> >> >> +
> >> >> + cmp_fn = field->cmp_fn;
> >> >> +
> >> >> + val_a = elt_a->key + field->offset;
> >> >> + val_b = elt_b->key + field->offset;
> >> >> +
> >> >> + ret = cmp_fn(val_a, val_b);
> >> >> + if (sort_key->descending)
> >> >> + ret = -ret;
> >> >> +
> >> >> + return ret;
> >> >> +}
> >> >> +
> >> >> +static void destroy_sort_entry(struct tracing_map_sort_entry *entry)
> >> >> +{
> >> >> + if (!entry)
> >> >> + return;
> >> >> +
> >> >> + if (entry->elt_copied)
> >> >> + tracing_map_elt_free(entry->elt);
> >> >> +
> >> >> + kfree(entry);
> >> >> +}
> >> >> +
> >> >> +/**
> >> >> + * tracing_map_destroy_sort_entries - Destroy a tracing_map_sort_entries() array
> >> >> + * @entries: The entries to destroy
> >> >> + * @n_entries: The number of entries in the array
> >> >> + *
> >> >> + * Destroy the elements returned by a tracing_map_sort_entries() call.
> >> >> + */
> >> >> +void tracing_map_destroy_sort_entries(struct tracing_map_sort_entry **entries,
> >> >> + unsigned int n_entries)
> >> >> +{
> >> >> + unsigned int i;
> >> >> +
> >> >> + for (i = 0; i < n_entries; i++)
> >> >> + destroy_sort_entry(entries[i]);
> >> >> +}
> >> >> +
> >> >> +static struct tracing_map_sort_entry *
> >> >> +create_sort_entry(void *key, struct tracing_map_elt *elt)
> >> >> +{
> >> >> + struct tracing_map_sort_entry *sort_entry;
> >> >> +
> >> >> + sort_entry = kzalloc(sizeof(*sort_entry), GFP_KERNEL);
> >> >> + if (!sort_entry)
> >> >> + return NULL;
> >> >> +
> >> >> + sort_entry->key = key;
> >> >> + sort_entry->elt = elt;
> >> >> +
> >> >> + return sort_entry;
> >> >> +}
> >> >> +
> >> >> +static struct tracing_map_elt *copy_elt(struct tracing_map_elt *elt)
> >> >> +{
> >> >> + struct tracing_map_elt *dup_elt;
> >> >> + unsigned int i;
> >> >> +
> >> >> + dup_elt = tracing_map_elt_alloc(elt->map);
> >> >> + if (!dup_elt)
> >> >> + return NULL;
> >> >> +
> >> >> + if (elt->map->ops && elt->map->ops->elt_copy)
> >> >> + elt->map->ops->elt_copy(dup_elt, elt);
> >> >> +
> >> >> + dup_elt->private_data = elt->private_data;
> >> >> + memcpy(dup_elt->key, elt->key, elt->map->key_size);
> >> >> +
> >> >> + for (i = 0; i < elt->map->n_fields; i++) {
> >> >> + atomic64_set(&dup_elt->fields[i].sum,
> >> >> + atomic64_read(&elt->fields[i].sum));
> >> >> + dup_elt->fields[i].cmp_fn = elt->fields[i].cmp_fn;
> >> >> + }
> >> >> +
> >> >> + return dup_elt;
> >> >> +}
> >> >> +
> >> >> +static int merge_dup(struct tracing_map_sort_entry **sort_entries,
> >> >> + unsigned int target, unsigned int dup)
> >> >> +{
> >> >> + struct tracing_map_elt *target_elt, *elt;
> >> >> + bool first_dup = (target - dup) == 1;
> >> >> + int i;
> >> >> +
> >> >> + if (first_dup) {
> >> >> + elt = sort_entries[target]->elt;
> >> >> + target_elt = copy_elt(elt);
> >> >> + if (!target_elt)
> >> >> + return -ENOMEM;
> >> >> + sort_entries[target]->elt = target_elt;
> >> >> + sort_entries[target]->elt_copied = true;
> >> >> + } else
> >> >> + target_elt = sort_entries[target]->elt;
> >> >> +
> >> >> + elt = sort_entries[dup]->elt;
> >> >> +
> >> >> + for (i = 0; i < elt->map->n_fields; i++)
> >> >> + atomic64_add(atomic64_read(&elt->fields[i].sum),
> >> >> + &target_elt->fields[i].sum);
> >> >> +
> >> >> + sort_entries[dup]->dup = true;
> >> >> +
> >> >> + return 0;
> >> >> +}
> >> >> +
> >> >> +static int merge_dups(struct tracing_map_sort_entry **sort_entries,
> >> >> + int n_entries, unsigned int key_size)
> >> >> +{
> >> >> + unsigned int dups = 0, total_dups = 0;
> >> >> + int err, i, j;
> >> >> + void *key;
> >> >> +
> >> >> + if (n_entries < 2)
> >> >> + return total_dups;
> >> >> +
> >> >> + sort(sort_entries, n_entries, sizeof(struct tracing_map_sort_entry *),
> >> >> + (int (*)(const void *, const void *))cmp_entries_dup, NULL);
> >> >> +
> >> >> + key = sort_entries[0]->key;
> >> >> + for (i = 1; i < n_entries; i++) {
> >> >> + if (!memcmp(sort_entries[i]->key, key, key_size)) {
> >> >> + dups++; total_dups++;
> >> >> + err = merge_dup(sort_entries, i - dups, i);
> >> >> + if (err)
> >> >> + return err;
> >> >> + continue;
> >> >> + }
> >> >> + key = sort_entries[i]->key;
> >> >> + dups = 0;
> >> >> + }
> >> >> +
> >> >> + if (!total_dups)
> >> >> + return total_dups;
> >> >> +
> >> >> + for (i = 0, j = 0; i < n_entries; i++) {
> >> >> + if (!sort_entries[i]->dup) {
> >> >> + sort_entries[j] = sort_entries[i];
> >> >> + if (j++ != i)
> >> >> + sort_entries[i] = NULL;
> >> >> + } else {
> >> >> + destroy_sort_entry(sort_entries[i]);
> >> >> + sort_entries[i] = NULL;
> >> >> + }
> >> >> + }
> >> >> +
> >> >> + return total_dups;
> >> >> +}
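
(A note on why duplicates exist at all: tracing_map_insert() publishes the
hashed key and the val pointer in two separate steps, so two inserters
racing on the same key can each claim a slot. A hypothetical interleaving:)

	/*
	 * CPU 0: cmpxchg() claims slot N for key_hash K (val still NULL)
	 * CPU 1: probes slot N, sees key K but val == NULL, so the
	 *        keys_match() return path is not taken
	 * CPU 1: continues probing and claims slot N+1 for the same K
	 * CPU 0: finally sets slot N's val
	 *
	 * Slots N and N+1 now both hold elements for the same full key,
	 * which merge_dups() folds back together at sort time.
	 */
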
> >> >> +
> >> >> +static bool is_key(struct tracing_map *map, unsigned int field_idx)
> >> >> +{
> >> >> + unsigned int i;
> >> >> +
> >> >> + for (i = 0; i < map->n_keys; i++)
> >> >> + if (map->key_idx[i] == field_idx)
> >> >> + return true;
> >> >> + return false;
> >> >> +}
> >> >> +
> >> >> +static void sort_secondary(struct tracing_map *map,
> >> >> + const struct tracing_map_sort_entry **entries,
> >> >> + unsigned int n_entries,
> >> >> + struct tracing_map_sort_key *primary_key,
> >> >> + struct tracing_map_sort_key *secondary_key)
> >> >> +{
> >> >> + int (*primary_fn)(const struct tracing_map_sort_entry **,
> >> >> + const struct tracing_map_sort_entry **);
> >> >> + int (*secondary_fn)(const struct tracing_map_sort_entry **,
> >> >> + const struct tracing_map_sort_entry **);
> >> >> + unsigned i, start = 0, n_sub = 1;
> >> >> +
> >> >> + if (is_key(map, primary_key->field_idx))
> >> >> + primary_fn = cmp_entries_key;
> >> >> + else
> >> >> + primary_fn = cmp_entries_sum;
> >> >> +
> >> >> + if (is_key(map, secondary_key->field_idx))
> >> >> + secondary_fn = cmp_entries_key;
> >> >> + else
> >> >> + secondary_fn = cmp_entries_sum;
> >> >> +
> >> >> + for (i = 0; i < n_entries - 1; i++) {
> >> >> + const struct tracing_map_sort_entry **a = &entries[i];
> >> >> + const struct tracing_map_sort_entry **b = &entries[i + 1];
> >> >> +
> >> >> + if (primary_fn(a, b) == 0) {
> >> >> + n_sub++;
> >> >> + if (i < n_entries - 2)
> >> >> + continue;
> >> >> + }
> >> >> +
> >> >> + if (n_sub < 2) {
> >> >> + start = i + 1;
> >> >> + n_sub = 1;
> >> >> + continue;
> >> >> + }
> >> >> +
> >> >> + set_sort_key(map, secondary_key);
> >> >> + sort(&entries[start], n_sub,
> >> >> + sizeof(struct tracing_map_sort_entry *),
> >> >> + (int (*)(const void *, const void *))secondary_fn, NULL);
> >> >> + set_sort_key(map, primary_key);
> >> >> +
> >> >> + start = i + 1;
> >> >> + n_sub = 1;
> >> >> + }
> >> >> +}
> >> >> +
> >> >> +/**
> >> >> + * tracing_map_sort_entries - Sort the current set of tracing_map_elts in a map
> >> >> + * @map: The tracing_map
> >> >> + * @sort_keys: The sort keys to use for sorting, primary key first
> >> >> + * @n_sort_keys: The number of sort keys in the sort_keys array
> >> >> + * @sort_entries: outval: pointer to allocated and sorted array of entries
> >> >> + *
> >> >> + * tracing_map_sort_entries() sorts the current set of entries in the
> >> >> + * map and returns the list of tracing_map_sort_entries containing
> >> >> + * them to the client in the sort_entries param. The client can
> >> >> + * access the struct tracing_map_elt element of interest directly as
> >> >> + * the 'elt' field of a returned struct tracing_map_sort_entry object.
> >> >> + *
> >> >> + * Each sort key has only two fields: field_idx and descending.
> >> >> + * 'field_idx' refers to the index of the field added via
> >> >> + * tracing_map_add_sum_field() or tracing_map_add_key_field() when
> >> >> + * the tracing_map was initialized. 'descending' is a flag that, if
> >> >> + * set, reverses the sort order, which by default is ascending.
> >> >> + *
> >> >> + * The client should not hold on to the returned array but should use
> >> >> + * it and call tracing_map_destroy_sort_entries() when done.
> >> >> + *
> >> >> + * Return: the number of sort_entries in the struct tracing_map_sort_entry
> >> >> + * array, negative on error
> >> >> + */
> >> >> +int tracing_map_sort_entries(struct tracing_map *map,
> >> >> + struct tracing_map_sort_key *sort_keys,
> >> >> + unsigned int n_sort_keys,
> >> >> + struct tracing_map_sort_entry ***sort_entries)
> >> >> +{
> >> >> + int (*cmp_entries_fn)(const struct tracing_map_sort_entry **,
> >> >> + const struct tracing_map_sort_entry **);
> >> >> + struct tracing_map_sort_entry *sort_entry, **entries;
> >> >> + int i, n_entries, ret;
> >> >> +
> >> >> + entries = kcalloc(map->max_elts, sizeof(sort_entry), GFP_KERNEL);
> >> >> + if (!entries)
> >> >> + return -ENOMEM;
> >> >> +
> >> >> + for (i = 0, n_entries = 0; i < map->map_size; i++) {
> >> >> + if (!map->map[i].key || !map->map[i].val)
> >> >> + continue;
> >> >> +
> >> >> + entries[n_entries] = create_sort_entry(map->map[i].val->key,
> >> >> + map->map[i].val);
> >> >> + if (!entries[n_entries++]) {
> >> >> + ret = -ENOMEM;
> >> >> + goto free;
> >> >> + }
> >> >> + }
> >> >> +
> >> >> + if (n_entries == 0) {
> >> >> + ret = 0;
> >> >> + goto free;
> >> >> + }
> >> >> +
> >> >> + if (n_entries == 1) {
> >> >> + *sort_entries = entries;
> >> >> + return 1;
> >> >> + }
> >> >> +
> >> >> + ret = merge_dups(entries, n_entries, map->key_size);
> >> >> + if (ret < 0)
> >> >> + goto free;
> >> >> + n_entries -= ret;
> >> >> +
> >> >> + if (is_key(map, sort_keys[0].field_idx))
> >> >> + cmp_entries_fn = cmp_entries_key;
> >> >> + else
> >> >> + cmp_entries_fn = cmp_entries_sum;
> >> >> +
> >> >> + set_sort_key(map, &sort_keys[0]);
> >> >> +
> >> >> + sort(entries, n_entries, sizeof(struct tracing_map_sort_entry *),
> >> >> + (int (*)(const void *, const void *))cmp_entries_fn, NULL);
> >> >> +
> >> >> + if (n_sort_keys > 1)
> >> >> + sort_secondary(map,
> >> >> + (const struct tracing_map_sort_entry **)entries,
> >> >> + n_entries,
> >> >> + &sort_keys[0],
> >> >> + &sort_keys[1]);
> >> >> +
> >> >> + *sort_entries = entries;
> >> >> +
> >> >> + return n_entries;
> >> >> + free:
> >> >> + tracing_map_destroy_sort_entries(entries, n_entries);
> >> >> +
> >> >> + return ret;
> >> >> +}
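
(Finally, a two-key sort sketch, with indices hypothetical: the primary key
drives the top-level sort() and the secondary key drives sort_secondary()
within each run of equal primary values.)

	struct tracing_map_sort_key sort_keys[2];
	struct tracing_map_sort_entry **entries;
	int n;

	sort_keys[0].field_idx = key_idx;	/* primary: by key    */
	sort_keys[0].descending = false;
	sort_keys[1].field_idx = sum_idx;	/* secondary: by sum  */
	sort_keys[1].descending = true;

	n = tracing_map_sort_entries(map, sort_keys, 2, &entries);
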
> >> >> diff --git a/kernel/trace/tracing_map.h b/kernel/trace/tracing_map.h
> >> >> new file mode 100644
> >> >> index 0000000..2e63c5c
> >> >> --- /dev/null
> >> >> +++ b/kernel/trace/tracing_map.h
> >> >> @@ -0,0 +1,258 @@
> >> >> +#ifndef __TRACING_MAP_H
> >> >> +#define __TRACING_MAP_H
> >> >> +
> >> >> +#define TRACING_MAP_BITS_DEFAULT 11
> >> >> +#define TRACING_MAP_BITS_MAX 17
> >> >> +#define TRACING_MAP_BITS_MIN 7
> >> >> +
> >> >> +#define TRACING_MAP_FIELDS_MAX 4
> >> >> +#define TRACING_MAP_KEYS_MAX 2
> >> >> +
> >> >> +#define TRACING_MAP_SORT_KEYS_MAX 2
> >> >> +
> >> >> +typedef int (*tracing_map_cmp_fn_t) (void *val_a, void *val_b);
> >> >> +
> >> >> +/*
> >> >> + * This is an overview of the tracing_map data structures and how they
> >> >> + * relate to the tracing_map API. The details of the algorithms
> >> >> + * aren't discussed here - this is just a general overview of the data
> >> >> + * structures and how they interact with the API.
> >> >> + *
> >> >> + * The central data structure of the tracing_map is an initially
> >> >> + * zeroed array of struct tracing_map_entry (stored in the map field
> >> >> + * of struct tracing_map). tracing_map_entry is a very simple data
> >> >> + * structure containing only two fields: a 32-bit unsigned 'key'
> >> >> + * variable and a pointer named 'val'. This array of struct
> >> >> + * tracing_map_entry is essentially a hash table which will be
> >> >> + * modified by a single function, tracing_map_insert(), but which can
> >> >> + * be traversed and read by a user at any time (though the user does
> >> >> + * this indirectly via an array of tracing_map_sort_entry - see the
> >> >> + * explanation of that data structure in the discussion of the
> >> >> + * sorting-related data structures below).
> >> >> + *
> >> >> + * The central function of the tracing_map API is
> >> >> + * tracing_map_insert(). tracing_map_insert() hashes the
> >> >> + * arbitrarily-sized key passed into it into a 32-bit unsigned key.
> >> >> + * It then uses this key, truncated to the array size, as an index
> >> >> + * into the array of tracing_map_entries. If the value of the 'key'
> >> >> + * field of the tracing_map_entry found at that location is 0, then
> >> >> + * that entry is considered to be free and can be claimed, by
> >> >> + * replacing the 0 in the 'key' field of the tracing_map_entry with
> >> >> + * the new 32-bit hashed key. Once claimed, that tracing_map_entry's
> >> >> + * 'val' field is then used to store a unique element which will be
> >> >> + * forever associated with that 32-bit hashed key in the
> >> >> + * tracing_map_entry.
> >> >> + *
> >> >> + * That unique element now in the tracing_map_entry's 'val' field is
> >> >> + * an instance of tracing_map_elt, where 'elt' in the latter part of
> >> >> + * that variable name is short for 'element'. The purpose of a
> >> >> + * tracing_map_elt is to hold values specific to the particular
> >> >> + * 32-bit hashed key it's associated with: things such as the unique
> >> >> + * set of aggregated sums associated with the 32-bit hashed key, along
> >> >> + * with a copy of the full key associated with the entry, which was
> >> >> + * used to produce the 32-bit hashed key.
> >> >> + *
> >> >> + * When tracing_map_create() is called to create the tracing map, the
> >> >> + * user specifies (indirectly via the map_bits param, the details are
> >> >> + * unimportant for this discussion) the maximum number of elements
> >> >> + * that the map can hold (stored in the max_elts field of struct
> >> >> + * tracing_map). This is the maximum possible number of
> >> >> + * tracing_map_entries in the tracing_map_entry array which can be
> >> >> + * 'claimed' as described in the above discussion, and therefore is
> >> >> + * also the maximum number of tracing_map_elts that can be associated
> >> >> + * with the tracing_map_entry array in the tracing_map. Because of
> >> >> + * the way the insertion algorithm works, the size of the allocated
> >> >> + * tracing_map_entry array is always twice the maximum number of
> >> >> + * elements (2 * max_elts). This value is stored in the map_size
> >> >> + * field of struct tracing_map.
> >> >> + *
> >> >> + * Because tracing_map_insert() needs to work from any context,
> >> >> + * including from within the memory allocation functions themselves,
> >> >> + * both the tracing_map_entry array and a pool of max_elts
> >> >> + * tracing_map_elts are pre-allocated before any call is made to
> >> >> + * tracing_map_insert().
> >> >> + *
> >> >> + * The tracing_map_entry array is allocated as a single block by
> >> >> + * tracing_map_create().
> >> >> + *
> >> >> + * Because the tracing_map_elts are much larger objects and can't
> >> >> + * generally be allocated together as a single large array without
> >> >> + * failure, they're allocated individually by tracing_map_init().
> >> >> + *
> >> >> + * The pool of tracing_map_elts is allocated by tracing_map_init()
> >> >> + * rather than by tracing_map_create() because at the time
> >> >> + * tracing_map_create() is called, there isn't enough information to
> >> >> + * create the tracing_map_elts. Specifically, the user first needs to
> >> >> + * tell the tracing_map implementation how many fields the
> >> >> + * tracing_map_elts contain, and which types of fields they are (key
> >> >> + * or sum). The user does this via the tracing_map_add_sum_field()
> >> >> + * and tracing_map_add_key_field() functions, following which the user
> >> >> + * calls tracing_map_init() to finish up the tracing map setup. The
> >> >> + * array holding the pointers which make up the pre-allocated pool of
> >> >> + * tracing_map_elts is allocated as a single block and is stored in
> >> >> + * the elts field of struct tracing_map.
> >> >> + *
> >> >> + * There is also a set of structures used for sorting that might
> >> >> + * benefit from some minimal explanation.
> >> >> + *
> >> >> + * struct tracing_map_sort_key is used to drive the sort at any given
> >> >> + * time. By 'any given time' we mean that a different
> >> >> + * tracing_map_sort_key will be used at different times depending on
> >> >> + * whether the sort currently being performed is a primary or a
> >> >> + * secondary sort.
> >> >> + *
> >> >> + * The sort key is very simple, consisting of the field index of the
> >> >> + * tracing_map_elt field to sort on (which the user saved when adding
> >> >> + * the field), and whether the sort should be done in an ascending or
> >> >> + * descending order.
> >> >> + *
> >> >> + * For the convenience of the sorting code, a tracing_map_sort_entry
> >> >> + * is created for each tracing_map_elt, again individually allocated
> >> >> + * to avoid failures that might be expected if allocated as a single
> >> >> + * large array of struct tracing_map_sort_entry.
> >> >> + * tracing_map_sort_entry instances are the objects expected by the
> >> >> + * various internal sorting functions, and are also what the user
> >> >> + * ultimately receives after calling tracing_map_sort_entries().
> >> >> + * Because it doesn't make sense for users to access an unordered and
> >> >> + * sparsely populated tracing_map directly, the
> >> >> + * tracing_map_sort_entries() function is provided so that users can
> >> >> + * retrieve a sorted list of all existing elements. In addition to
> >> >> + * the associated tracing_map_elt 'elt' field contained within the
> >> >> + * tracing_map_sort_entry, which is the object of interest to the
> >> >> + * user, tracing_map_sort_entry objects contain a number of additional
> >> >> + * fields which are used for caching and internal purposes and can
> >> >> + * safely be ignored.
> >> >> + */
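
To make the lifecycle described in the comment above concrete, here is a
minimal sketch (not part of the patch) of a client driving the API end to
end, assuming a single u64 key and a single sum:

    #include "tracing_map.h"

    static struct tracing_map *example_map_setup(void)
    {
        struct tracing_map *map;   /* illustrative setup, not from the patch */
        int key_idx, sum_idx;

        /* table sized for 2 ** TRACING_MAP_BITS_DEFAULT elements, u64 keys */
        map = tracing_map_create(TRACING_MAP_BITS_DEFAULT,
                                 sizeof(u64), NULL, NULL);
        if (IS_ERR(map))
            return map;

        /* one numeric key at offset 0, one aggregated sum */
        key_idx = tracing_map_add_key_field(map, 0,
                                            tracing_map_cmp_num(8, 0));
        sum_idx = tracing_map_add_sum_field(map);

        /* pre-allocate the element pool; nothing is allocated
         * later, in the insert path */
        if (key_idx < 0 || sum_idx < 0 || tracing_map_init(map)) {
            tracing_map_destroy(map);
            return ERR_PTR(-EINVAL);
        }

        return map;
    }
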
> >> >> +
> >> >> +struct tracing_map_field {
> >> >> + tracing_map_cmp_fn_t cmp_fn;
> >> >> + union {
> >> >> + atomic64_t sum;
> >> >> + unsigned int offset;
> >> >> + };
> >> >> +};
> >> >> +
> >> >> +struct tracing_map_elt {
> >> >> + struct tracing_map *map;
> >> >> + struct tracing_map_field *fields;
> >> >> + void *key;
> >> >> + void *private_data;
> >> >> +};
> >> >> +
> >> >> +struct tracing_map_entry {
> >> >> + u32 key;
> >> >> + struct tracing_map_elt *val;
> >> >> +};
> >> >> +
> >> >> +struct tracing_map_sort_key {
> >> >> + unsigned int field_idx;
> >> >> + bool descending;
> >> >> +};
> >> >> +
> >> >> +struct tracing_map_sort_entry {
> >> >> + void *key;
> >> >> + struct tracing_map_elt *elt;
> >> >> + bool elt_copied;
> >> >> + bool dup;
> >> >> +};
> >> >> +
> >> >> +struct tracing_map {
> >> >> + unsigned int key_size;
> >> >> + unsigned int map_bits;
> >> >> + unsigned int map_size;
> >> >> + unsigned int max_elts;
> >> >> + atomic_t next_elt;
> >> >> + struct tracing_map_elt **elts;
> >> >> + struct tracing_map_entry *map;
> >> >> + struct tracing_map_ops *ops;
> >> >> + void *private_data;
> >> >> + struct tracing_map_field fields[TRACING_MAP_FIELDS_MAX];
> >> >> + unsigned int n_fields;
> >> >> + int key_idx[TRACING_MAP_KEYS_MAX];
> >> >> + unsigned int n_keys;
> >> >> + struct tracing_map_sort_key sort_key;
> >> >> +};
> >> >> +
> >> >> +/**
> >> >> + * struct tracing_map_ops - callbacks for tracing_map
> >> >> + *
> >> >> + * The methods in this structure define callback functions for various
> >> >> + * operations on a tracing_map or objects related to a tracing_map.
> >> >> + *
> >> >> + * For a detailed description of tracing_map_elt objects please see
> >> >> + * the overview of tracing_map data structures at the beginning of
> >> >> + * this file.
> >> >> + *
> >> >> + * All the methods below are optional.
> >> >> + *
> >> >> + * @elt_alloc: When a tracing_map_elt is allocated, this function, if
> >> >> + * defined, will be called and gives clients the opportunity to
> >> >> + * allocate additional data and attach it to the element
> >> >> + * (tracing_map_elt->private_data is meant for that purpose).
> >> >> + * Element allocation occurs before tracing begins, when the
> >> >> + * tracing_map_init() call is made by client code.
> >> >> + *
> >> >> + * @elt_copy: At certain points in the lifetime of an element, it may
> >> >> + * need to be copied. The copy should include a copy of the
> >> >> + * client-allocated data, which can be copied into the 'to'
> >> >> + * element from the 'from' element.
> >> >> + *
> >> >> + * @elt_free: When a tracing_map_elt is freed, this function is called
> >> >> + * and allows client-allocated per-element data to be freed.
> >> >> + *
> >> >> + * @elt_clear: This callback allows per-element client-defined data to
> >> >> + * be cleared, if applicable.
> >> >> + *
> >> >> + * @elt_init: This callback allows per-element client-defined data to
> >> >> + * be initialized when used i.e. when the element is actually
> >> >> + * claimed by tracing_map_insert() in the context of the map
> >> >> + * insertion.
> >> >> + */
> >> >> +struct tracing_map_ops {
> >> >> + int (*elt_alloc)(struct tracing_map_elt *elt);
> >> >> + void (*elt_copy)(struct tracing_map_elt *to,
> >> >> + struct tracing_map_elt *from);
> >> >> + void (*elt_free)(struct tracing_map_elt *elt);
> >> >> + void (*elt_clear)(struct tracing_map_elt *elt);
> >> >> + void (*elt_init)(struct tracing_map_elt *elt);
> >> >> +};
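
To sketch how a client might implement these hooks for per-element
private data (the payload struct and function names here are assumptions
for illustration, not from the patch):

    struct my_data {
        u64 first_seen;
    };

    static int my_elt_alloc(struct tracing_map_elt *elt)
    {
        /* called at tracing_map_init() time, before tracing
         * begins, so GFP_KERNEL is safe here */
        elt->private_data = kzalloc(sizeof(struct my_data), GFP_KERNEL);
        return elt->private_data ? 0 : -ENOMEM;
    }

    static void my_elt_free(struct tracing_map_elt *elt)
    {
        kfree(elt->private_data);
    }

    static struct tracing_map_ops my_ops = {
        .elt_alloc = my_elt_alloc,
        .elt_free  = my_elt_free,
    };
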
> >> >> +
> >> >> +extern struct tracing_map *tracing_map_create(unsigned int map_bits,
> >> >> + unsigned int key_size,
> >> >> + struct tracing_map_ops *ops,
> >> >> + void *private_data);
> >> >> +extern int tracing_map_init(struct tracing_map *map);
> >> >> +
> >> >> +extern int tracing_map_add_sum_field(struct tracing_map *map);
> >> >> +extern int tracing_map_add_key_field(struct tracing_map *map,
> >> >> + unsigned int offset,
> >> >> + tracing_map_cmp_fn_t cmp_fn);
> >> >> +
> >> >> +extern void tracing_map_destroy(struct tracing_map *map);
> >> >> +extern void tracing_map_clear(struct tracing_map *map);
> >> >> +
> >> >> +extern struct tracing_map_elt *
> >> >> +tracing_map_insert(struct tracing_map *map, void *key);
> >> >> +
> >> >> +extern tracing_map_cmp_fn_t tracing_map_cmp_num(int field_size,
> >> >> + int field_is_signed);
> >> >> +extern int tracing_map_cmp_string(void *val_a, void *val_b);
> >> >> +extern int tracing_map_cmp_none(void *val_a, void *val_b);
> >> >> +
> >> >> +extern void tracing_map_update_sum(struct tracing_map_elt *elt,
> >> >> + unsigned int i, u64 n);
> >> >> +extern u64 tracing_map_read_sum(struct tracing_map_elt *elt, unsigned int i);
> >> >> +extern void tracing_map_set_field_descr(struct tracing_map *map,
> >> >> + unsigned int i,
> >> >> + unsigned int key_offset,
> >> >> + tracing_map_cmp_fn_t cmp_fn);
> >> >> +extern int
> >> >> +tracing_map_sort_entries(struct tracing_map *map,
> >> >> + struct tracing_map_sort_key *sort_keys,
> >> >> + unsigned int n_sort_keys,
> >> >> + struct tracing_map_sort_entry ***sort_entries);
> >> >> +
> >> >> +extern void
> >> >> +tracing_map_destroy_sort_entries(struct tracing_map_sort_entry **entries,
> >> >> + unsigned int n_entries);
> >> >> +#endif /* __TRACING_MAP_H */
> >> >> --
> >> >> 1.9.3
> >> >>
> >> >
> >> > --
> >> > Mathieu Desnoyers
> >> > EfficiOS Inc.
> >> > http://www.efficios.com
>

2015-07-18 02:40:29

by Mathieu Desnoyers

[permalink] [raw]
Subject: Re: [PATCH v9 07/22] tracing: Add lock-free tracing_map

----- On Jul 17, 2015, at 7:44 PM, Tom Zanussi [email protected] wrote:

> On Fri, 2015-07-17 at 15:48 +0000, Mathieu Desnoyers wrote:
>> ----- On Jul 16, 2015, at 9:35 PM, Tom Zanussi [email protected]
>> wrote:
>>
>> > Hi Mathieu,
>> >
>> > On Thu, 2015-07-16 at 23:25 +0000, Mathieu Desnoyers wrote:
>> >> * Tom Zanussi wrote:
>> >> >> Add tracing_map, a special-purpose lock-free map for tracing.
>> >> >>
>> >> >> tracing_map is designed to aggregate or 'sum' one or more values
>> >> >> associated with a specific object of type tracing_map_elt, which
>> >> >> is associated by the map to a given key.
>> >> >>
>> >> >> It provides various hooks allowing per-tracer customization and is
>> >> >> separated out into a separate file in order to allow it to be shared
>> >> >> between multiple tracers, but isn't meant to be generally used outside
>> >> >> of that context.
>> >> >>
>> >> >> The tracing_map implementation was inspired by lock-free map
>> >> >> algorithms originated by Dr. Cliff Click:
>> >> >>
>> >> >> http://www.azulsystems.com/blog/cliff/2007-03-26-non-blocking-hashtable
>> >> >> http://www.azulsystems.com/events/javaone_2007/2007_LockFreeHash.pdf
>> >>
>> >> Hi Tom,
>> >>
>> >> First question: what is the rationale for implementing another
>> >> hash table from scratch here? What is missing in the pre-existing
>> >> hash table implementations?
>> >>
>> >
>> > None of the other hash tables allow for lock-free insertion (and I
>> > didn't see an easy way to add it).
>>
>> This is one of the nice things about the Userspace RCU lock-free hash
>> table we did a few years ago: it provides lock-free add, add_unique,
>> removal, and replace, as well as RCU wait-free lookups and traversals.
>> Resize can be done concurrently by a worker thread. I ported it to the
>> Linux kernel for Julien's work on latency tracker. You can find the
>> implementation here: https://github.com/jdesfossez/latency_tracker
>> (see rculfhash*)
>> It is a simplified version that has the "resize" feature removed for
>> simplicity's sake. The "insert and lookup" feature you need is called
>> "add_unique" in our API: it behaves both as a lookup and as an atomic
>> insert if the key is not found.
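
For reference, the add_unique pattern described here looks roughly like
the following against the userspace RCU API (sketched from memory, so
treat the exact signature as an assumption and consult the rculfhash
headers for the authoritative one):

    struct cds_lfht_node *ret;

    rcu_read_lock();
    ret = cds_lfht_add_unique(ht, hash_fn(key), match_fn, key,
                              &obj->node);
    rcu_read_unlock();

    if (ret != &obj->node) {
        /* key already present: behaves as a lookup,
         * 'ret' is the existing node */
    } else {
        /* key was absent: 'obj' was atomically inserted */
    }
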
>>
>
> Interesting, but it's just as much not upstream as mine is. ;-)

Fair point, although the userspace RCU lock-free hash table has been
heavily tested and used in user space in the context of routing tables
(I remember Stephen Hemminger using it at Vyatta). Being part of the
userspace RCU library has given it exposure and use over the years.

>
> From the perspective of the hist triggers, it doesn't matter which hash
> table implementation they use, as long as it supports insertion from
> any context. In fact, the current tracing_map implementation is already
> the second completely different implementation the hist triggers have
> been plugged into (see v2 of this patchset for the first). If yours is
> better and going upstream, I'd be happy to make it the third and forget
> about mine.

I'm a big fan of waiting until people show a need before proposing
something for mainline Linux. A lock-free hash table for use by the
tracing triggers appears to be the perfect use case. If you think it
would be useful to you, I can look into doing a proper port to the
Linux kernel.

Thanks,

Mathieu

>
> Tom
>
>> Thanks,
>>
>> Mathieu
>>
>> >
>> >> Moreover, you might want to handle the case where jhash() returns
>> >> 0. AFAIU, there is a race on "insert" in this scenario.
>> >>
>> >
>> > You're right, in that case you'd accidentally overwrite an already
>> > claimed slot. Thanks for pointing that out.
>> >
>> > Tom
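
One simple way to close that hole, sketched here as an illustration
rather than taken from the patch, is to never allow a hashed key of 0,
since 0 is what marks a tracing_map_entry slot as free:

    key_hash = jhash(key, map->key_size, 0);
    if (key_hash == 0)
        key_hash = 1;   /* 0 means 'slot free'; remap so a real key
                         * can never be mistaken for an empty slot */
    idx = key_hash >> (32 - (map->map_bits + 1));
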
>> >
>> >> Thanks,
>> >>
>> >> Mathieu
>> >>
>> >> >>
>> >> >> Signed-off-by: Tom Zanussi <[email protected]>
>> >> >> ---
>> >> >> kernel/trace/Makefile | 1 +
>> >> >> kernel/trace/tracing_map.c | 935 +++++++++++++++++++++++++++++++++++++++++++++
>> >> >> kernel/trace/tracing_map.h | 258 +++++++++++++
>> >> >> 3 files changed, 1194 insertions(+)
>> >> >> create mode 100644 kernel/trace/tracing_map.c
>> >> >> create mode 100644 kernel/trace/tracing_map.h
>> >> >>
>> >> >> diff --git a/kernel/trace/Makefile b/kernel/trace/Makefile
>> >> >> index 9b1044e..3b26cfb 100644
>> >> >> --- a/kernel/trace/Makefile
>> >> >> +++ b/kernel/trace/Makefile
>> >> >> @@ -31,6 +31,7 @@ obj-$(CONFIG_TRACING) += trace_output.o
>> >> >> obj-$(CONFIG_TRACING) += trace_seq.o
>> >> >> obj-$(CONFIG_TRACING) += trace_stat.o
>> >> >> obj-$(CONFIG_TRACING) += trace_printk.o
>> >> >> +obj-$(CONFIG_TRACING) += tracing_map.o
>> >> >> obj-$(CONFIG_CONTEXT_SWITCH_TRACER) += trace_sched_switch.o
>> >> >> obj-$(CONFIG_FUNCTION_TRACER) += trace_functions.o
>> >> >> obj-$(CONFIG_IRQSOFF_TRACER) += trace_irqsoff.o
>> >> >> diff --git a/kernel/trace/tracing_map.c b/kernel/trace/tracing_map.c
>> >> >> new file mode 100644
>> >> >> index 0000000..a505025
>> >> >> --- /dev/null
>> >> >> +++ b/kernel/trace/tracing_map.c
>> >> >> @@ -0,0 +1,935 @@
>> >> >> +/*
>> >> >> + * tracing_map - lock-free map for tracing
>> >> >> + *
>> >> >> + * This program is free software; you can redistribute it and/or modify
>> >> >> + * it under the terms of the GNU General Public License as published by
>> >> >> + * the Free Software Foundation; either version 2 of the License, or
>> >> >> + * (at your option) any later version.
>> >> >> + *
>> >> >> + * This program is distributed in the hope that it will be useful,
>> >> >> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
>> >> >> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
>> >> >> + * GNU General Public License for more details.
>> >> >> + *
>> >> >> + * Copyright (C) 2015 Tom Zanussi <[email protected]>
>> >> >> + *
>> >> >> + * tracing_map implementation inspired by lock-free map algorithms
>> >> >> + * originated by Dr. Cliff Click:
>> >> >> + *
>> >> >> + * http://www.azulsystems.com/blog/cliff/2007-03-26-non-blocking-hashtable
>> >> >> + * http://www.azulsystems.com/events/javaone_2007/2007_LockFreeHash.pdf
>> >> >> + */
>> >> >> +
>> >> >> +#include <linux/slab.h>
>> >> >> +#include <linux/jhash.h>
>> >> >> +#include <linux/sort.h>
>> >> >> +
>> >> >> +#include "tracing_map.h"
>> >> >> +#include "trace.h"
>> >> >> +
>> >> >> +/*
>> >> >> + * NOTE: For a detailed description of the data structures used by
>> >> >> + * these functions (such as tracing_map_elt) please see the overview
>> >> >> + * of tracing_map data structures at the beginning of tracing_map.h.
>> >> >> + */
>> >> >> +
>> >> >> +/**
>> >> >> + * tracing_map_update_sum - Add a value to a tracing_map_elt's sum field
>> >> >> + * @elt: The tracing_map_elt
>> >> >> + * @i: The index of the given sum associated with the tracing_map_elt
>> >> >> + * @n: The value to add to the sum
>> >> >> + *
>> >> >> + * Add n to sum i associated with the specified tracing_map_elt
>> >> >> + * instance. The index i is the index returned by the call to
>> >> >> + * tracing_map_add_sum_field() when the tracing map was set up.
>> >> >> + */
>> >> >> +void tracing_map_update_sum(struct tracing_map_elt *elt, unsigned int i, u64 n)
>> >> >> +{
>> >> >> + atomic64_add(n, &elt->fields[i].sum);
>> >> >> +}
>> >> >> +
>> >> >> +/**
>> >> >> + * tracing_map_read_sum - Return the value of a tracing_map_elt's sum field
>> >> >> + * @elt: The tracing_map_elt
>> >> >> + * @i: The index of the given sum associated with the tracing_map_elt
>> >> >> + *
>> >> >> + * Retrieve the value of the sum i associated with the specified
>> >> >> + * tracing_map_elt instance. The index i is the index returned by the
>> >> >> + * call to tracing_map_add_sum_field() when the tracing map was set
>> >> >> + * up.
>> >> >> + *
>> >> >> + * Return: The sum associated with field i for elt.
>> >> >> + */
>> >> >> +u64 tracing_map_read_sum(struct tracing_map_elt *elt, unsigned int i)
>> >> >> +{
>> >> >> + return (u64)atomic64_read(&elt->fields[i].sum);
>> >> >> +}
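
As a usage sketch tying the two functions together (the 'bytes_idx'
field index and 'len' value are illustrative assumptions, not from the
patch):

    /* at map setup time */
    bytes_idx = tracing_map_add_sum_field(map);

    /* in the insert path, with 'elt' returned by tracing_map_insert()
     * and 'len' taken from the traced event (both illustrative) */
    tracing_map_update_sum(elt, bytes_idx, len);

    /* later, when generating output */
    total_bytes = tracing_map_read_sum(elt, bytes_idx);
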
>> >> >> +
>> >> >> +int tracing_map_cmp_string(void *val_a, void *val_b)
>> >> >> +{
>> >> >> + char *a = val_a;
>> >> >> + char *b = val_b;
>> >> >> +
>> >> >> + return strcmp(a, b);
>> >> >> +}
>> >> >> +
>> >> >> +int tracing_map_cmp_none(void *val_a, void *val_b)
>> >> >> +{
>> >> >> + return 0;
>> >> >> +}
>> >> >> +
>> >> >> +static int tracing_map_cmp_atomic64(void *val_a, void *val_b)
>> >> >> +{
>> >> >> + u64 a = atomic64_read((atomic64_t *)val_a);
>> >> >> + u64 b = atomic64_read((atomic64_t *)val_b);
>> >> >> +
>> >> >> + return (a > b) ? 1 : ((a < b) ? -1 : 0);
>> >> >> +}
>> >> >> +
>> >> >> +#define DEFINE_TRACING_MAP_CMP_FN(type) \
>> >> >> +static int tracing_map_cmp_##type(void *val_a, void *val_b) \
>> >> >> +{ \
>> >> >> + type a = *(type *)val_a; \
>> >> >> + type b = *(type *)val_b; \
>> >> >> + \
>> >> >> + return (a > b) ? 1 : ((a < b) ? -1 : 0); \
>> >> >> +}
>> >> >> +
>> >> >> +DEFINE_TRACING_MAP_CMP_FN(s64);
>> >> >> +DEFINE_TRACING_MAP_CMP_FN(u64);
>> >> >> +DEFINE_TRACING_MAP_CMP_FN(s32);
>> >> >> +DEFINE_TRACING_MAP_CMP_FN(u32);
>> >> >> +DEFINE_TRACING_MAP_CMP_FN(s16);
>> >> >> +DEFINE_TRACING_MAP_CMP_FN(u16);
>> >> >> +DEFINE_TRACING_MAP_CMP_FN(s8);
>> >> >> +DEFINE_TRACING_MAP_CMP_FN(u8);
>> >> >> +
>> >> >> +tracing_map_cmp_fn_t tracing_map_cmp_num(int field_size,
>> >> >> + int field_is_signed)
>> >> >> +{
>> >> >> + tracing_map_cmp_fn_t fn = tracing_map_cmp_none;
>> >> >> +
>> >> >> + switch (field_size) {
>> >> >> + case 8:
>> >> >> + if (field_is_signed)
>> >> >> + fn = tracing_map_cmp_s64;
>> >> >> + else
>> >> >> + fn = tracing_map_cmp_u64;
>> >> >> + break;
>> >> >> + case 4:
>> >> >> + if (field_is_signed)
>> >> >> + fn = tracing_map_cmp_s32;
>> >> >> + else
>> >> >> + fn = tracing_map_cmp_u32;
>> >> >> + break;
>> >> >> + case 2:
>> >> >> + if (field_is_signed)
>> >> >> + fn = tracing_map_cmp_s16;
>> >> >> + else
>> >> >> + fn = tracing_map_cmp_u16;
>> >> >> + break;
>> >> >> + case 1:
>> >> >> + if (field_is_signed)
>> >> >> + fn = tracing_map_cmp_s8;
>> >> >> + else
>> >> >> + fn = tracing_map_cmp_u8;
>> >> >> + break;
>> >> >> + }
>> >> >> +
>> >> >> + return fn;
>> >> >> +}
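
For example, a signed 4-byte field such as a pid resolves to the s32
comparator:

    /* illustrative: pid_t is a signed 32-bit type in the kernel,
     * so this returns tracing_map_cmp_s32 */
    tracing_map_cmp_fn_t cmp_fn = tracing_map_cmp_num(sizeof(pid_t), 1);
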
>> >> >> +
>> >> >> +static int tracing_map_add_field(struct tracing_map *map,
>> >> >> + tracing_map_cmp_fn_t cmp_fn)
>> >> >> +{
>> >> >> + int ret = -EINVAL;
>> >> >> +
>> >> >> + if (map->n_fields < TRACING_MAP_FIELDS_MAX) {
>> >> >> + ret = map->n_fields;
>> >> >> + map->fields[map->n_fields++].cmp_fn = cmp_fn;
>> >> >> + }
>> >> >> +
>> >> >> + return ret;
>> >> >> +}
>> >> >> +
>> >> >> +/**
>> >> >> + * tracing_map_add_sum_field - Add a field describing a tracing_map sum
>> >> >> + * @map: The tracing_map
>> >> >> + *
>> >> >> + * Add a sum field to the map and return the index identifying it in
>> >> >> + * the map and associated tracing_map_elts. This is the index used
>> >> >> + * for instance to update a sum for a particular tracing_map_elt using
>> >> >> + * tracing_map_update_sum() or reading it via tracing_map_read_sum().
>> >> >> + *
>> >> >> + * Return: The index identifying the field in the map and associated
>> >> >> + * tracing_map_elts.
>> >> >> + */
>> >> >> +int tracing_map_add_sum_field(struct tracing_map *map)
>> >> >> +{
>> >> >> + return tracing_map_add_field(map, tracing_map_cmp_atomic64);
>> >> >> +}
>> >> >> +
>> >> >> +/**
>> >> >> + * tracing_map_add_key_field - Add a field describing a tracing_map key
>> >> >> + * @map: The tracing_map
>> >> >> + * @offset: The offset within the key
>> >> >> + * @cmp_fn: The comparison function that will be used to sort on the key
>> >> >> + *
>> >> >> + * Let the map know there is a key and that, if it's used as a sort
>> >> >> + * key, cmp_fn should be used to compare it.
>> >> >> + *
>> >> >> + * A key can be a subset of a compound key; for that purpose, the
>> >> >> + * offset param is used to describe where within the compound key
>> >> >> + * the key referenced by this key field resides.
>> >> >> + *
>> >> >> + * Return: The index identifying the field in the map and associated
>> >> >> + * tracing_map_elts.
>> >> >> + */
>> >> >> +int tracing_map_add_key_field(struct tracing_map *map,
>> >> >> + unsigned int offset,
>> >> >> + tracing_map_cmp_fn_t cmp_fn)
>> >> >> +
>> >> >> +{
>> >> >> + int idx = tracing_map_add_field(map, cmp_fn);
>> >> >> +
>> >> >> + if (idx < 0)
>> >> >> + return idx;
>> >> >> +
>> >> >> + map->fields[idx].offset = offset;
>> >> >> +
>> >> >> + map->key_idx[map->n_keys++] = idx;
>> >> >> +
>> >> >> + return idx;
>> >> >> +}
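
As a sketch of the compound-key case described above (the key layout is
an assumption for illustration, not taken from the patch), a key made of
a pid followed by a comm string would be described with two key fields
at the appropriate offsets:

    /* illustrative compound key: numeric pid, then a string */
    struct example_key {
        u64  pid;
        char comm[16];
    };

    map = tracing_map_create(bits, sizeof(struct example_key), NULL, NULL);

    tracing_map_add_key_field(map, offsetof(struct example_key, pid),
                              tracing_map_cmp_num(8, 0));
    tracing_map_add_key_field(map, offsetof(struct example_key, comm),
                              tracing_map_cmp_string);
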
>> >> >> +
>> >> >> +static void tracing_map_elt_clear(struct tracing_map_elt *elt)
>> >> >> +{
>> >> >> + unsigned i;
>> >> >> +
>> >> >> + for (i = 0; i < elt->map->n_fields; i++)
>> >> >> + if (elt->fields[i].cmp_fn == tracing_map_cmp_atomic64)
>> >> >> + atomic64_set(&elt->fields[i].sum, 0);
>> >> >> +
>> >> >> + if (elt->map->ops && elt->map->ops->elt_clear)
>> >> >> + elt->map->ops->elt_clear(elt);
>> >> >> +}
>> >> >> +
>> >> >> +static void tracing_map_elt_init_fields(struct tracing_map_elt *elt)
>> >> >> +{
>> >> >> + unsigned int i;
>> >> >> +
>> >> >> + tracing_map_elt_clear(elt);
>> >> >> +
>> >> >> + for (i = 0; i < elt->map->n_fields; i++) {
>> >> >> + elt->fields[i].cmp_fn = elt->map->fields[i].cmp_fn;
>> >> >> +
>> >> >> + if (elt->fields[i].cmp_fn != tracing_map_cmp_atomic64)
>> >> >> + elt->fields[i].offset = elt->map->fields[i].offset;
>> >> >> + }
>> >> >> +}
>> >> >> +
>> >> >> +static void tracing_map_elt_free(struct tracing_map_elt *elt)
>> >> >> +{
>> >> >> + if (!elt)
>> >> >> + return;
>> >> >> +
>> >> >> + if (elt->map->ops && elt->map->ops->elt_free)
>> >> >> + elt->map->ops->elt_free(elt);
>> >> >> + kfree(elt->fields);
>> >> >> + kfree(elt->key);
>> >> >> + kfree(elt);
>> >> >> +}
>> >> >> +
>> >> >> +static struct tracing_map_elt *tracing_map_elt_alloc(struct tracing_map *map)
>> >> >> +{
>> >> >> + struct tracing_map_elt *elt;
>> >> >> + int err = 0;
>> >> >> +
>> >> >> + elt = kzalloc(sizeof(*elt), GFP_KERNEL);
>> >> >> + if (!elt)
>> >> >> + return ERR_PTR(-ENOMEM);
>> >> >> +
>> >> >> + elt->map = map;
>> >> >> +
>> >> >> + elt->key = kzalloc(map->key_size, GFP_KERNEL);
>> >> >> + if (!elt->key) {
>> >> >> + err = -ENOMEM;
>> >> >> + goto free;
>> >> >> + }
>> >> >> +
>> >> >> + elt->fields = kcalloc(map->n_fields, sizeof(*elt->fields), GFP_KERNEL);
>> >> >> + if (!elt->fields) {
>> >> >> + err = -ENOMEM;
>> >> >> + goto free;
>> >> >> + }
>> >> >> +
>> >> >> + tracing_map_elt_init_fields(elt);
>> >> >> +
>> >> >> + if (map->ops && map->ops->elt_alloc) {
>> >> >> + err = map->ops->elt_alloc(elt);
>> >> >> + if (err)
>> >> >> + goto free;
>> >> >> + }
>> >> >> + return elt;
>> >> >> + free:
>> >> >> + tracing_map_elt_free(elt);
>> >> >> +
>> >> >> + return ERR_PTR(err);
>> >> >> +}
>> >> >> +
>> >> >> +static struct tracing_map_elt *get_free_elt(struct tracing_map *map)
>> >> >> +{
>> >> >> + struct tracing_map_elt *elt = NULL;
>> >> >> + int idx;
>> >> >> +
>> >> >> + idx = atomic_inc_return(&map->next_elt);
>> >> >> + if (idx < map->max_elts) {
>> >> >> + elt = map->elts[idx];
>> >> >> + if (map->ops && map->ops->elt_init)
>> >> >> + map->ops->elt_init(elt);
>> >> >> + }
>> >> >> +
>> >> >> + return elt;
>> >> >> +}
>> >> >> +
>> >> >> +static void tracing_map_free_elts(struct tracing_map *map)
>> >> >> +{
>> >> >> + unsigned int i;
>> >> >> +
>> >> >> + if (!map->elts)
>> >> >> + return;
>> >> >> +
>> >> >> + for (i = 0; i < map->max_elts; i++)
>> >> >> + tracing_map_elt_free(map->elts[i]);
>> >> >> +
>> >> >> + kfree(map->elts);
>> >> >> +}
>> >> >> +
>> >> >> +static int tracing_map_alloc_elts(struct tracing_map *map)
>> >> >> +{
>> >> >> + unsigned int i;
>> >> >> +
>> >> >> + map->elts = kcalloc(map->max_elts, sizeof(struct tracing_map_elt *),
>> >> >> + GFP_KERNEL);
>> >> >> + if (!map->elts)
>> >> >> + return -ENOMEM;
>> >> >> +
>> >> >> + for (i = 0; i < map->max_elts; i++) {
>> >> >> + map->elts[i] = tracing_map_elt_alloc(map);
>> >> >> + if (IS_ERR(map->elts[i])) {
>> >> >> + tracing_map_free_elts(map);
>> >> >> +
>> >> >> + return -ENOMEM;
>> >> >> + }
>> >> >> + }
>> >> >> +
>> >> >> + return 0;
>> >> >> +}
>> >> >> +
>> >> >> +static inline bool keys_match(void *key, void *test_key, unsigned key_size)
>> >> >> +{
>> >> >> + bool match = true;
>> >> >> +
>> >> >> + if (memcmp(key, test_key, key_size))
>> >> >> + match = false;
>> >> >> +
>> >> >> + return match;
>> >> >> +}
>> >> >> +
>> >> >> +/**
>> >> >> + * tracing_map_insert - Insert key and/or retrieve val from a tracing_map
>> >> >> + * @map: The tracing_map to insert into
>> >> >> + * @key: The key to insert
>> >> >> + *
>> >> >> + * Inserts a key into a tracing_map and creates and returns a new
>> >> >> + * tracing_map_elt for it, or if the key has already been inserted by
>> >> >> + * a previous call, returns the tracing_map_elt already associated
>> >> >> + * with it. When the map was created, the number of elements to be
>> >> >> + * allocated for the map was specified (internally maintained as
>> >> >> + * 'max_elts' in struct tracing_map), and that number of
>> >> >> + * tracing_map_elts was created by tracing_map_init(). This is the
>> >> >> + * pre-allocated pool of tracing_map_elts that tracing_map_insert()
>> >> >> + * will allocate from when adding new keys. Once that pool is
>> >> >> + * exhausted, tracing_map_insert() is useless and will return NULL to
>> >> >> + * signal that state.
>> >> >> + *
>> >> >> + * This is a lock-free tracing map insertion function implementing a
>> >> >> + * modified form of Cliff Click's basic insertion algorithm. It
>> >> >> + * requires the table size be a power of two. To prevent any
>> >> >> + * possibility of an infinite loop we always make the internal table
>> >> >> + * size double the size of the requested table size (max_elts * 2).
>> >> >> + * Likewise, we never reuse a slot or resize or delete elements - when
>> >> >> + * we've reached max_elts entries, we simply return NULL once we've
>> >> >> + * run out of entries. Readers can at any point in time traverse the
>> >> >> + * tracing map and safely access the key/val pairs.
>> >> >> + *
>> >> >> + * Return: the tracing_map_elt pointer val associated with the key.
>> >> >> + * If this was a newly inserted key, the val will be a newly allocated
>> >> >> + * and associated tracing_map_elt pointer val. If the key wasn't
>> >> >> + * found and the pool of tracing_map_elts has been exhausted, NULL is
>> >> >> + * returned and no further insertions will succeed.
>> >> >> + */
>> >> >> +struct tracing_map_elt *tracing_map_insert(struct tracing_map *map, void *key)
>> >> >> +{
>> >> >> + u32 idx, key_hash, test_key;
>> >> >> +
>> >> >> + key_hash = jhash(key, map->key_size, 0);
>> >> >> + idx = key_hash >> (32 - (map->map_bits + 1));
>> >> >> +
>> >> >> + while (1) {
>> >> >> + idx &= (map->map_size - 1);
>> >> >> + test_key = map->map[idx].key;
>> >> >> +
>> >> >> + if (test_key && test_key == key_hash && map->map[idx].val &&
>> >> >> + keys_match(key, map->map[idx].val->key, map->key_size))
>> >> >> + return map->map[idx].val;
>> >> >> +
>> >> >> + if (!test_key && !cmpxchg(&map->map[idx].key, 0, key_hash)) {
>> >> >> + struct tracing_map_elt *elt;
>> >> >> +
>> >> >> + elt = get_free_elt(map);
>> >> >> + if (!elt)
>> >> >> + break;
>> >> >> + memcpy(elt->key, key, map->key_size);
>> >> >> + map->map[idx].val = elt;
>> >> >> +
>> >> >> + return map->map[idx].val;
>> >> >> + }
>> >> >> + idx++;
>> >> >> + }
>> >> >> +
>> >> >> + return NULL;
>> >> >> +}
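
Putting the return convention into code, a caller in an event handler
might look like the following sketch ('map', 'hitcount_idx', and the key
construction are illustrative, not from the patch):

    struct tracing_map_elt *elt;
    u64 key = (u64)current->pid;        /* illustrative key */

    elt = tracing_map_insert(map, &key);
    if (!elt)
        return;     /* element pool exhausted; drop this event */

    tracing_map_update_sum(elt, hitcount_idx, 1);
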
>> >> >> +
>> >> >> +/**
>> >> >> + * tracing_map_destroy - Destroy a tracing_map
>> >> >> + * @map: The tracing_map to destroy
>> >> >> + *
>> >> >> + * Frees a tracing_map along with its associated array of
>> >> >> + * tracing_map_elts.
>> >> >> + *
>> >> >> + * Callers should make sure there are no readers or writers actively
>> >> >> + * reading or inserting into the map before calling this.
>> >> >> + */
>> >> >> +void tracing_map_destroy(struct tracing_map *map)
>> >> >> +{
>> >> >> + if (!map)
>> >> >> + return;
>> >> >> +
>> >> >> + tracing_map_free_elts(map);
>> >> >> +
>> >> >> + kfree(map->map);
>> >> >> + kfree(map);
>> >> >> +}
>> >> >> +
>> >> >> +/**
>> >> >> + * tracing_map_clear - Clear a tracing_map
>> >> >> + * @map: The tracing_map to clear
>> >> >> + *
>> >> >> + * Resets the tracing map to a cleared or initial state. The
>> >> >> + * tracing_map_elts are all cleared, and the array of struct
>> >> >> + * tracing_map_entry is reset to an initialized state.
>> >> >> + *
>> >> >> + * Callers should make sure there are no writers actively inserting
>> >> >> + * into the map before calling this.
>> >> >> + */
>> >> >> +void tracing_map_clear(struct tracing_map *map)
>> >> >> +{
>> >> >> + unsigned int i, size;
>> >> >> +
>> >> >> + atomic_set(&map->next_elt, -1);
>> >> >> +
>> >> >> + size = map->map_size * sizeof(struct tracing_map_entry);
>> >> >> + memset(map->map, 0, size);
>> >> >> +
>> >> >> + for (i = 0; i < map->max_elts; i++)
>> >> >> + tracing_map_elt_clear(map->elts[i]);
>> >> >> +}
>> >> >> +
>> >> >> +static void set_sort_key(struct tracing_map *map,
>> >> >> + struct tracing_map_sort_key *sort_key)
>> >> >> +{
>> >> >> + map->sort_key = *sort_key;
>> >> >> +}
>> >> >> +
>> >> >> +/**
>> >> >> + * tracing_map_create - Create a lock-free map and element pool
>> >> >> + * @map_bits: The size of the map (2 ** map_bits)
>> >> >> + * @key_size: The size of the key for the map in bytes
>> >> >> + * @ops: Optional client-defined tracing_map_ops instance
>> >> >> + * @private_data: Client data associated with the map
>> >> >> + *
>> >> >> + * Creates and sets up a map to contain 2 ** map_bits number of
>> >> >> + * elements (internally maintained as 'max_elts' in struct
>> >> >> + * tracing_map). Before using, map fields should be added to the map
>> >> >> + * with tracing_map_add_sum_field() and tracing_map_add_key_field().
>> >> >> + * tracing_map_init() should then be called to allocate the array of
>> >> >> + * tracing_map_elts, in order to avoid allocating anything in the map
>> >> >> + * insertion path. The user-specified map size reflects the maximum
>> >> >> + * number of elements that can be contained in the table requested by
>> >> >> + * the user - internally we double that in order to keep the table
>> >> >> + * sparse and keep collisions manageable.
>> >> >> + *
>> >> >> + * A tracing_map is a special-purpose map designed to aggregate or
>> >> >> + * 'sum' one or more values associated with a specific object of type
>> >> >> + * tracing_map_elt, which is attached by the map to a given key.
>> >> >> + *
>> >> >> + * tracing_map_create() sets up the map itself, and provides
>> >> >> + * operations for inserting tracing_map_elts, but doesn't allocate the
>> >> >> + * tracing_map_elts themselves, or provide a means for describing the
>> >> >> + * keys or sums associated with the tracing_map_elts. All
>> >> >> + * tracing_map_elts for a given map have the same set of sums and
>> >> >> + * keys, which are defined by the client using the functions
>> >> >> + * tracing_map_add_key_field() and tracing_map_add_sum_field(). Once
>> >> >> + * the fields are defined, the pool of elements allocated for the map
>> >> >> + * can be created, which occurs when the client code calls
>> >> >> + * tracing_map_init().
>> >> >> + *
>> >> >> + * When tracing_map_init() returns, tracing_map_elt elements can be
>> >> >> + * inserted into the map using tracing_map_insert(). When called,
>> >> >> + * tracing_map_insert() grabs a free tracing_map_elt from the pool, or
>> >> >> + * finds an existing match in the map and in either case returns it.
>> >> >> + * The client can then use tracing_map_update_sum() and
>> >> >> + * tracing_map_read_sum() to update or read a given sum field for the
>> >> >> + * tracing_map_elt.
>> >> >> + *
>> >> >> + * The client can at any point retrieve and traverse the current set
>> >> >> + * of inserted tracing_map_elts in a tracing_map, via
>> >> >> + * tracing_map_sort_entries(). Sorting can be done on any field,
>> >> >> + * including keys.
>> >> >> + *
>> >> >> + * See tracing_map.h for a description of tracing_map_ops.
>> >> >> + *
>> >> >> + * Return: the tracing_map pointer if successful, ERR_PTR if not.
>> >> >> + */
>> >> >> +struct tracing_map *tracing_map_create(unsigned int map_bits,
>> >> >> + unsigned int key_size,
>> >> >> + struct tracing_map_ops *ops,
>> >> >> + void *private_data)
>> >> >> +{
>> >> >> + struct tracing_map *map;
>> >> >> + unsigned int i;
>> >> >> +
>> >> >> + if (map_bits < TRACING_MAP_BITS_MIN ||
>> >> >> + map_bits > TRACING_MAP_BITS_MAX)
>> >> >> + return ERR_PTR(-EINVAL);
>> >> >> +
>> >> >> + map = kzalloc(sizeof(*map), GFP_KERNEL);
>> >> >> + if (!map)
>> >> >> + return ERR_PTR(-ENOMEM);
>> >> >> +
>> >> >> + map->map_bits = map_bits;
>> >> >> + map->max_elts = (1 << map_bits);
>> >> >> + atomic_set(&map->next_elt, -1);
>> >> >> +
>> >> >> + map->map_size = (1 << (map_bits + 1));
>> >> >> + map->ops = ops;
>> >> >> +
>> >> >> + map->private_data = private_data;
>> >> >> +
>> >> >> + map->map = kcalloc(map->map_size, sizeof(struct tracing_map_entry),
>> >> >> + GFP_KERNEL);
>> >> >> + if (!map->map)
>> >> >> + goto free;
>> >> >> +
>> >> >> + map->key_size = key_size;
>> >> >> + for (i = 0; i < TRACING_MAP_KEYS_MAX; i++)
>> >> >> + map->key_idx[i] = -1;
>> >> >> + out:
>> >> >> + return map;
>> >> >> + free:
>> >> >> + tracing_map_destroy(map);
>> >> >> + map = ERR_PTR(-ENOMEM);
>> >> >> +
>> >> >> + goto out;
>> >> >> +}
>> >> >> +
>> >> >> +/**
>> >> >> + * tracing_map_init - Allocate and clear a map's tracing_map_elts
>> >> >> + * @map: The tracing_map to initialize
>> >> >> + *
>> >> >> + * Allocates and clears a pool of tracing_map_elts equal to the
>> >> >> + * user-specified size of 2 ** map_bits (internally maintained as
>> >> >> + * 'max_elts' in struct tracing_map). Before using, the map fields
>> >> >> + * should be added to the map with tracing_map_add_sum_field() and
>> >> >> + * tracing_map_add_key_field(). tracing_map_init() should then be
>> >> >> + * called to allocate the array of tracing_map_elts, in order to avoid
>> >> >> + * allocating anything in the map insertion path. The user-specified
>> >> >> + * map size reflects the max number of elements requested by the user
>> >> >> + * - internally we double that in order to keep the table sparse and
>> >> >> + * keep collisions manageable.
>> >> >> + *
>> >> >> + * See tracing_map.h for a description of tracing_map_ops.
>> >> >> + *
>> >> >> + * Return: 0 if successful, a negative error code if not.
>> >> >> + */
>> >> >> +int tracing_map_init(struct tracing_map *map)
>> >> >> +{
>> >> >> + int err;
>> >> >> +
>> >> >> + if (map->n_fields < 2)
>> >> >> + return -EINVAL; /* need at least 1 key and 1 val */
>> >> >> +
>> >> >> + err = tracing_map_alloc_elts(map);
>> >> >> + if (err)
>> >> >> + return err;
>> >> >> +
>> >> >> + tracing_map_clear(map);
>> >> >> +
>> >> >> + return err;
>> >> >> +}
>> >> >> +
>> >> >> +static int cmp_entries_dup(const struct tracing_map_sort_entry **a,
>> >> >> + const struct tracing_map_sort_entry **b)
>> >> >> +{
>> >> >> + int ret = 0;
>> >> >> +
>> >> >> + if (memcmp((*a)->key, (*b)->key, (*a)->elt->map->key_size))
>> >> >> + ret = 1;
>> >> >> +
>> >> >> + return ret;
>> >> >> +}
>> >> >> +
>> >> >> +static int cmp_entries_sum(const struct tracing_map_sort_entry **a,
>> >> >> + const struct tracing_map_sort_entry **b)
>> >> >> +{
>> >> >> + const struct tracing_map_elt *elt_a, *elt_b;
>> >> >> + struct tracing_map_sort_key *sort_key;
>> >> >> + struct tracing_map_field *field;
>> >> >> + tracing_map_cmp_fn_t cmp_fn;
>> >> >> + void *val_a, *val_b;
>> >> >> + int ret = 0;
>> >> >> +
>> >> >> + elt_a = (*a)->elt;
>> >> >> + elt_b = (*b)->elt;
>> >> >> +
>> >> >> + sort_key = &elt_a->map->sort_key;
>> >> >> +
>> >> >> + field = &elt_a->fields[sort_key->field_idx];
>> >> >> + cmp_fn = field->cmp_fn;
>> >> >> +
>> >> >> + val_a = &elt_a->fields[sort_key->field_idx].sum;
>> >> >> + val_b = &elt_b->fields[sort_key->field_idx].sum;
>> >> >> +
>> >> >> + ret = cmp_fn(val_a, val_b);
>> >> >> + if (sort_key->descending)
>> >> >> + ret = -ret;
>> >> >> +
>> >> >> + return ret;
>> >> >> +}
>> >> >> +
>> >> >> +static int cmp_entries_key(const struct tracing_map_sort_entry **a,
>> >> >> + const struct tracing_map_sort_entry **b)
>> >> >> +{
>> >> >> + const struct tracing_map_elt *elt_a, *elt_b;
>> >> >> + struct tracing_map_sort_key *sort_key;
>> >> >> + struct tracing_map_field *field;
>> >> >> + tracing_map_cmp_fn_t cmp_fn;
>> >> >> + void *val_a, *val_b;
>> >> >> + int ret = 0;
>> >> >> +
>> >> >> + elt_a = (*a)->elt;
>> >> >> + elt_b = (*b)->elt;
>> >> >> +
>> >> >> + sort_key = &elt_a->map->sort_key;
>> >> >> +
>> >> >> + field = &elt_a->fields[sort_key->field_idx];
>> >> >> +
>> >> >> + cmp_fn = field->cmp_fn;
>> >> >> +
>> >> >> + val_a = elt_a->key + field->offset;
>> >> >> + val_b = elt_b->key + field->offset;
>> >> >> +
>> >> >> + ret = cmp_fn(val_a, val_b);
>> >> >> + if (sort_key->descending)
>> >> >> + ret = -ret;
>> >> >> +
>> >> >> + return ret;
>> >> >> +}
>> >> >> +
>> >> >> +static void destroy_sort_entry(struct tracing_map_sort_entry *entry)
>> >> >> +{
>> >> >> + if (!entry)
>> >> >> + return;
>> >> >> +
>> >> >> + if (entry->elt_copied)
>> >> >> + tracing_map_elt_free(entry->elt);
>> >> >> +
>> >> >> + kfree(entry);
>> >> >> +}
>> >> >> +
>> >> >> +/**
>> >> >> + * tracing_map_destroy_sort_entries - Destroy a tracing_map_sort_entries() array
>> >> >> + * @entries: The entries to destroy
>> >> >> + * @n_entries: The number of entries in the array
>> >> >> + *
>> >> >> + * Destroy the elements returned by a tracing_map_sort_entries() call.
>> >> >> + */
>> >> >> +void tracing_map_destroy_sort_entries(struct tracing_map_sort_entry **entries,
>> >> >> + unsigned int n_entries)
>> >> >> +{
>> >> >> + unsigned int i;
>> >> >> +
>> >> >> + for (i = 0; i < n_entries; i++)
>> >> >> + destroy_sort_entry(entries[i]);
>> >> >> +}
>> >> >> +
>> >> >> +static struct tracing_map_sort_entry *
>> >> >> +create_sort_entry(void *key, struct tracing_map_elt *elt)
>> >> >> +{
>> >> >> + struct tracing_map_sort_entry *sort_entry;
>> >> >> +
>> >> >> + sort_entry = kzalloc(sizeof(*sort_entry), GFP_KERNEL);
>> >> >> + if (!sort_entry)
>> >> >> + return NULL;
>> >> >> +
>> >> >> + sort_entry->key = key;
>> >> >> + sort_entry->elt = elt;
>> >> >> +
>> >> >> + return sort_entry;
>> >> >> +}
>> >> >> +
>> >> >> +static struct tracing_map_elt *copy_elt(struct tracing_map_elt *elt)
>> >> >> +{
>> >> >> + struct tracing_map_elt *dup_elt;
>> >> >> + unsigned int i;
>> >> >> +
>> >> >> + dup_elt = tracing_map_elt_alloc(elt->map);
>> >> >> + if (!dup_elt)
>> >> >> + return NULL;
>> >> >> +
>> >> >> + if (elt->map->ops && elt->map->ops->elt_copy)
>> >> >> + elt->map->ops->elt_copy(dup_elt, elt);
>> >> >> +
>> >> >> + dup_elt->private_data = elt->private_data;
>> >> >> + memcpy(dup_elt->key, elt->key, elt->map->key_size);
>> >> >> +
>> >> >> + for (i = 0; i < elt->map->n_fields; i++) {
>> >> >> + atomic64_set(&dup_elt->fields[i].sum,
>> >> >> + atomic64_read(&elt->fields[i].sum));
>> >> >> + dup_elt->fields[i].cmp_fn = elt->fields[i].cmp_fn;
>> >> >> + }
>> >> >> +
>> >> >> + return dup_elt;
>> >> >> +}
>> >> >> +
>> >> >> +static int merge_dup(struct tracing_map_sort_entry **sort_entries,
>> >> >> + unsigned int target, unsigned int dup)
>> >> >> +{
>> >> >> + struct tracing_map_elt *target_elt, *elt;
>> >> >> + bool first_dup = (target - dup) == 1;
>> >> >> + int i;
>> >> >> +
>> >> >> + if (first_dup) {
>> >> >> + elt = sort_entries[target]->elt;
>> >> >> + target_elt = copy_elt(elt);
>> >> >> + if (!target_elt)
>> >> >> + return -ENOMEM;
>> >> >> + sort_entries[target]->elt = target_elt;
>> >> >> + sort_entries[target]->elt_copied = true;
>> >> >> + } else
>> >> >> + target_elt = sort_entries[target]->elt;
>> >> >> +
>> >> >> + elt = sort_entries[dup]->elt;
>> >> >> +
>> >> >> + for (i = 0; i < elt->map->n_fields; i++)
>> >> >> + atomic64_add(atomic64_read(&elt->fields[i].sum),
>> >> >> + &target_elt->fields[i].sum);
>> >> >> +
>> >> >> + sort_entries[dup]->dup = true;
>> >> >> +
>> >> >> + return 0;
>> >> >> +}
>> >> >> +
>> >> >> +static int merge_dups(struct tracing_map_sort_entry **sort_entries,
>> >> >> + int n_entries, unsigned int key_size)
>> >> >> +{
>> >> >> + unsigned int dups = 0, total_dups = 0;
>> >> >> + int err, i, j;
>> >> >> + void *key;
>> >> >> +
>> >> >> + if (n_entries < 2)
>> >> >> + return total_dups;
>> >> >> +
>> >> >> + sort(sort_entries, n_entries, sizeof(struct tracing_map_sort_entry *),
>> >> >> + (int (*)(const void *, const void *))cmp_entries_dup, NULL);
>> >> >> +
>> >> >> + key = sort_entries[0]->key;
>> >> >> + for (i = 1; i < n_entries; i++) {
>> >> >> + if (!memcmp(sort_entries[i]->key, key, key_size)) {
>> >> >> + dups++; total_dups++;
>> >> >> + err = merge_dup(sort_entries, i - dups, i);
>> >> >> + if (err)
>> >> >> + return err;
>> >> >> + continue;
>> >> >> + }
>> >> >> + key = sort_entries[i]->key;
>> >> >> + dups = 0;
>> >> >> + }
>> >> >> +
>> >> >> + if (!total_dups)
>> >> >> + return total_dups;
>> >> >> +
>> >> >> + for (i = 0, j = 0; i < n_entries; i++) {
>> >> >> + if (!sort_entries[i]->dup) {
>> >> >> + sort_entries[j] = sort_entries[i];
>> >> >> + if (j++ != i)
>> >> >> + sort_entries[i] = NULL;
>> >> >> + } else {
>> >> >> + destroy_sort_entry(sort_entries[i]);
>> >> >> + sort_entries[i] = NULL;
>> >> >> + }
>> >> >> + }
>> >> >> +
>> >> >> + return total_dups;
>> >> >> +}
>> >> >> +
>> >> >> +static bool is_key(struct tracing_map *map, unsigned int field_idx)
>> >> >> +{
>> >> >> + unsigned int i;
>> >> >> +
>> >> >> + for (i = 0; i < map->n_keys; i++)
>> >> >> + if (map->key_idx[i] == field_idx)
>> >> >> + return true;
>> >> >> + return false;
>> >> >> +}
>> >> >> +
>> >> >> +static void sort_secondary(struct tracing_map *map,
>> >> >> + const struct tracing_map_sort_entry **entries,
>> >> >> + unsigned int n_entries,
>> >> >> + struct tracing_map_sort_key *primary_key,
>> >> >> + struct tracing_map_sort_key *secondary_key)
>> >> >> +{
>> >> >> + int (*primary_fn)(const struct tracing_map_sort_entry **,
>> >> >> + const struct tracing_map_sort_entry **);
>> >> >> + int (*secondary_fn)(const struct tracing_map_sort_entry **,
>> >> >> + const struct tracing_map_sort_entry **);
>> >> >> + unsigned i, start = 0, n_sub = 1;
>> >> >> +
>> >> >> + if (is_key(map, primary_key->field_idx))
>> >> >> + primary_fn = cmp_entries_key;
>> >> >> + else
>> >> >> + primary_fn = cmp_entries_sum;
>> >> >> +
>> >> >> + if (is_key(map, secondary_key->field_idx))
>> >> >> + secondary_fn = cmp_entries_key;
>> >> >> + else
>> >> >> + secondary_fn = cmp_entries_sum;
>> >> >> +
>> >> >> + for (i = 0; i < n_entries - 1; i++) {
>> >> >> + const struct tracing_map_sort_entry **a = &entries[i];
>> >> >> + const struct tracing_map_sort_entry **b = &entries[i + 1];
>> >> >> +
>> >> >> + if (primary_fn(a, b) == 0) {
>> >> >> + n_sub++;
>> >> >> + if (i < n_entries - 2)
>> >> >> + continue;
>> >> >> + }
>> >> >> +
>> >> >> + if (n_sub < 2) {
>> >> >> + start = i + 1;
>> >> >> + n_sub = 1;
>> >> >> + continue;
>> >> >> + }
>> >> >> +
>> >> >> + set_sort_key(map, secondary_key);
>> >> >> + sort(&entries[start], n_sub,
>> >> >> + sizeof(struct tracing_map_sort_entry *),
>> >> >> + (int (*)(const void *, const void *))secondary_fn, NULL);
>> >> >> + set_sort_key(map, primary_key);
>> >> >> +
>> >> >> + start = i + 1;
>> >> >> + n_sub = 1;
>> >> >> + }
>> >> >> +}
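
To illustrate the run detection in sort_secondary() with a hypothetical
five-entry array (primary sort values shown first, secondary second):

    /* before: (5,b) (3,c) (3,a) (3,b) (1,a)  <- sorted on primary key
     * run:          [(3,c) (3,a) (3,b)]      <- equal primary values
     * after:  (5,b) (3,a) (3,b) (3,c) (1,a)  <- run sorted on secondary
     */
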
>> >> >> +
>> >> >> +/**
>> >> >> + * tracing_map_sort_entries - Sort the current set of tracing_map_elts in a map
>> >> >> + * @map: The tracing_map
>> >> >> + * @sort_keys: The sort key(s) to use for sorting
>> >> >> + * @n_sort_keys: The number of sort keys in the sort_keys array
>> >> >> + * @sort_entries: outval: pointer to allocated and sorted array of entries
>> >> >> + *
>> >> >> + * tracing_map_sort_entries() sorts the current set of entries in the
>> >> >> + * map and returns the list of tracing_map_sort_entries containing
>> >> >> + * them to the client in the sort_entries param. The client can
>> >> >> + * access the struct tracing_map_elt element of interest directly as
>> >> >> + * the 'elt' field of a returned struct tracing_map_sort_entry object.
>> >> >> + *
>> >> >> + * Each sort_key has only two fields: field_idx and descending.
>> >> >> + * 'field_idx' refers
>> >> >> + * to the index of the field added via tracing_map_add_sum_field() or
>> >> >> + * tracing_map_add_key_field() when the tracing_map was initialized.
>> >> >> + * 'descending' is a flag that if set reverses the sort order, which
>> >> >> + * by default is ascending.
>> >> >> + *
>> >> >> + * The client should not hold on to the returned array but should use
>> >> >> + * it and call tracing_map_destroy_sort_entries() when done.
>> >> >> + *
>> >> >> + * Return: the number of sort_entries in the struct tracing_map_sort_entry
>> >> >> + * array, negative on error
>> >> >> + */
>> >> >> +int tracing_map_sort_entries(struct tracing_map *map,
>> >> >> + struct tracing_map_sort_key *sort_keys,
>> >> >> + unsigned int n_sort_keys,
>> >> >> + struct tracing_map_sort_entry ***sort_entries)
>> >> >> +{
>> >> >> + int (*cmp_entries_fn)(const struct tracing_map_sort_entry **,
>> >> >> + const struct tracing_map_sort_entry **);
>> >> >> + struct tracing_map_sort_entry *sort_entry, **entries;
>> >> >> + int i, n_entries, ret;
>> >> >> +
>> >> >> + entries = kcalloc(map->max_elts, sizeof(sort_entry), GFP_KERNEL);
>> >> >> + if (!entries)
>> >> >> + return -ENOMEM;
>> >> >> +
>> >> >> + for (i = 0, n_entries = 0; i < map->map_size; i++) {
>> >> >> + if (!map->map[i].key || !map->map[i].val)
>> >> >> + continue;
>> >> >> +
>> >> >> + entries[n_entries] = create_sort_entry(map->map[i].val->key,
>> >> >> + map->map[i].val);
>> >> >> + if (!entries[n_entries++]) {
>> >> >> + ret = -ENOMEM;
>> >> >> + goto free;
>> >> >> + }
>> >> >> + }
>> >> >> +
>> >> >> + if (n_entries == 0) {
>> >> >> + ret = 0;
>> >> >> + goto free;
>> >> >> + }
>> >> >> +
>> >> >> + if (n_entries == 1) {
>> >> >> + *sort_entries = entries;
>> >> >> + return 1;
>> >> >> + }
>> >> >> +
>> >> >> + ret = merge_dups(entries, n_entries, map->key_size);
>> >> >> + if (ret < 0)
>> >> >> + goto free;
>> >> >> + n_entries -= ret;
>> >> >> +
>> >> >> + if (is_key(map, sort_keys[0].field_idx))
>> >> >> + cmp_entries_fn = cmp_entries_key;
>> >> >> + else
>> >> >> + cmp_entries_fn = cmp_entries_sum;
>> >> >> +
>> >> >> + set_sort_key(map, &sort_keys[0]);
>> >> >> +
>> >> >> + sort(entries, n_entries, sizeof(struct tracing_map_sort_entry *),
>> >> >> + (int (*)(const void *, const void *))cmp_entries_fn, NULL);
>> >> >> +
>> >> >> + if (n_sort_keys > 1)
>> >> >> + sort_secondary(map,
>> >> >> + (const struct tracing_map_sort_entry **)entries,
>> >> >> + n_entries,
>> >> >> + &sort_keys[0],
>> >> >> + &sort_keys[1]);
>> >> >> +
>> >> >> + *sort_entries = entries;
>> >> >> +
>> >> >> + return n_entries;
>> >> >> + free:
>> >> >> + tracing_map_destroy_sort_entries(entries, n_entries);
>> >> >> +
>> >> >> + return ret;
>> >> >> +}
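
As a usage sketch (not from the patch), a consumer sorting on a sum as
the primary key, with the key itself as secondary, might look like this
('sum_idx', 'key_idx', and 'consume' are illustrative):

    struct tracing_map_sort_entry **entries;
    struct tracing_map_sort_key sort_keys[2];
    int i, n;

    sort_keys[0].field_idx = sum_idx;   /* primary: sort on the sum */
    sort_keys[0].descending = true;
    sort_keys[1].field_idx = key_idx;   /* secondary: sort on the key */
    sort_keys[1].descending = false;

    n = tracing_map_sort_entries(map, sort_keys, 2, &entries);
    if (n < 0)
        return n;

    for (i = 0; i < n; i++)
        consume(entries[i]->elt);       /* hypothetical consumer */

    tracing_map_destroy_sort_entries(entries, n);
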
>> >> >> diff --git a/kernel/trace/tracing_map.h b/kernel/trace/tracing_map.h
>> >> >> new file mode 100644
>> >> >> index 0000000..2e63c5c
>> >> >> --- /dev/null
>> >> >> +++ b/kernel/trace/tracing_map.h
>> >> >> @@ -0,0 +1,258 @@
>> >> >> +#ifndef __TRACING_MAP_H
>> >> >> +#define __TRACING_MAP_H
>> >> >> +
>> >> >> +#define TRACING_MAP_BITS_DEFAULT 11
>> >> >> +#define TRACING_MAP_BITS_MAX 17
>> >> >> +#define TRACING_MAP_BITS_MIN 7
>> >> >> +
>> >> >> +#define TRACING_MAP_FIELDS_MAX 4
>> >> >> +#define TRACING_MAP_KEYS_MAX 2
>> >> >> +
>> >> >> +#define TRACING_MAP_SORT_KEYS_MAX 2
>> >> >> +
>> >> >> +typedef int (*tracing_map_cmp_fn_t) (void *val_a, void *val_b);
>> >> >> +
>> >> >> +/*
>> >> >> + * This is an overview of the tracing_map data structures and how they
>> >> >> + * relate to the tracing_map API. The details of the algorithms
>> >> >> + * aren't discussed here - this is just a general overview of the data
>> >> >> + * structures and how they interact with the API.
>> >> >> + *
>> >> >> + * The central data structure of the tracing_map is an initially
>> >> >> + * zeroed array of struct tracing_map_entry (stored in the map field
>> >> >> + * of struct tracing_map). tracing_map_entry is a very simple data
>> >> >> + * structure containing only two fields: a 32-bit unsigned 'key'
>> >> >> + * variable and a pointer named 'val'. This array of struct
>> >> >> + * tracing_map_entry is essentially a hash table which will be
>> >> >> + * modified by a single function, tracing_map_insert(), but which can
>> >> >> + * be traversed and read by a user at any time (though the user does
>> >> >> + * this indirectly via an array of tracing_map_sort_entry - see the
>> >> >> + * explanation of that data structure in the discussion of the
>> >> >> + * sorting-related data structures below).
>> >> >> + *
>> >> >> + * The central function of the tracing_map API is
>> >> >> + * tracing_map_insert(). tracing_map_insert() hashes the
>> >> >> + * arbitrarily-sized key passed into it into a 32-bit unsigned key.
>> >> >> + * It then uses this key, truncated to the array size, as an index
>> >> >> + * into the array of tracing_map_entries. If the value of the 'key'
>> >> >> + * field of the tracing_map_entry found at that location is 0, then
>> >> >> + * that entry is considered to be free and can be claimed, by
>> >> >> + * replacing the 0 in the 'key' field of the tracing_map_entry with
>> >> >> + * the new 32-bit hashed key. Once claimed, that tracing_map_entry's
>> >> >> + * 'val' field is then used to store a unique element which will be
>> >> >> + * forever associated with that 32-bit hashed key in the
>> >> >> + * tracing_map_entry.
>> >> >> + *
>> >> >> + * That unique element now in the tracing_map_entry's 'val' field is
>> >> >> + * an instance of tracing_map_elt, where 'elt' in the latter part of
>> >> >> + * that variable name is short for 'element'. The purpose of a
>> >> >> + * tracing_map_elt is to hold values specific to the particular
>> >> >> + * 32-bit hashed key it's associated with: things such as the unique
>> >> >> + * set of aggregated sums associated with the 32-bit hashed key, along
>> >> >> + * with a copy of the full key associated with the entry, which was
>> >> >> + * used to produce the 32-bit hashed key.
>> >> >> + *
>> >> >> + * When tracing_map_create() is called to create the tracing map, the
>> >> >> + * user specifies (indirectly via the map_bits param, the details are
>> >> >> + * unimportant for this discussion) the maximum number of elements
>> >> >> + * that the map can hold (stored in the max_elts field of struct
>> >> >> + * tracing_map). This is the maximum possible number of
>> >> >> + * tracing_map_entries in the tracing_map_entry array which can be
>> >> >> + * 'claimed' as described in the above discussion, and therefore is
>> >> >> + * also the maximum number of tracing_map_elts that can be associated
>> >> >> + * with the tracing_map_entry array in the tracing_map. Because of
>> >> >> + * the way the insertion algorithm works, the size of the allocated
>> >> >> + * tracing_map_entry array is always twice the maximum number of
>> >> >> + * elements (2 * max_elts). This value is stored in the map_size
>> >> >> + * field of struct tracing_map.
>> >> >> + *
>> >> >> + * Because tracing_map_insert() needs to work from any context,
>> >> >> + * including from within the memory allocation functions themselves,
>> >> >> + * both the tracing_map_entry array and a pool of max_elts
>> >> >> + * tracing_map_elts are pre-allocated before any call is made to
>> >> >> + * tracing_map_insert().
>> >> >> + *
>> >> >> + * The tracing_map_entry array is allocated as a single block by
>> >> >> + * tracing_map_create().
>> >> >> + *
>> >> >> + * Because the tracing_map_elts are much larger objects and can't
>> >> >> + * generally be allocated together as a single large array without
>> >> >> + * failure, they're allocated individually by tracing_map_init().
>> >> >> + *
>> >> >> + * The pool of tracing_map_elts is allocated by tracing_map_init()
>> >> >> + * rather than by tracing_map_create() because at the time
>> >> >> + * tracing_map_create() is called, there isn't enough information to
>> >> >> + * create the tracing_map_elts. Specifically, the user first needs to
>> >> >> + * tell the tracing_map implementation how many fields the
>> >> >> + * tracing_map_elts contain, and which types of fields they are (key
>> >> >> + * or sum). The user does this via the tracing_map_add_sum_field()
>> >> >> + * and tracing_map_add_key_field() functions, following which the user
>> >> >> + * calls tracing_map_init() to finish up the tracing map setup. The
>> >> >> + * array holding the pointers which make up the pre-allocated pool of
>> >> >> + * tracing_map_elts is allocated as a single block and is stored in
>> >> >> + * the elts field of struct tracing_map.
>> >> >> + *
>> >> >> + * There is also a set of structures used for sorting that might
>> >> >> + * benefit from some minimal explanation.
>> >> >> + *
>> >> >> + * struct tracing_map_sort_key is used to drive the sort at any given
>> >> >> + * time. By 'any given time' we mean that a different
>> >> >> + * tracing_map_sort_key will be used at different times depending on
>> >> >> + * whether the sort currently being performed is a primary or a
>> >> >> + * secondary sort.
>> >> >> + *
>> >> >> + * The sort key is very simple, consisting of the field index of the
>> >> >> + * tracing_map_elt field to sort on (which the user saved when adding
>> >> >> + * the field), and whether the sort should be done in an ascending or
>> >> >> + * descending order.
>> >> >> + *
>> >> >> + * For the convenience of the sorting code, a tracing_map_sort_entry
>> >> >> + * is created for each tracing_map_elt, again individually allocated
>> >> >> + * to avoid failures that might be expected if allocated as a single
>> >> >> + * large array of struct tracing_map_sort_entry.
>> >> >> + * tracing_map_sort_entry instances are the objects expected by the
>> >> >> + * various internal sorting functions, and are also what the user
>> >> >> + * ultimately receives after calling tracing_map_sort_entries().
>> >> >> + * Because it doesn't make sense for users to access an unordered and
>> >> >> + * sparsely populated tracing_map directly, the
>> >> >> + * tracing_map_sort_entries() function is provided so that users can
>> >> >> + * retrieve a sorted list of all existing elements. In addition to
>> >> >> + * the associated tracing_map_elt 'elt' field contained within the
>> >> >> + * tracing_map_sort_entry, which is the object of interest to the
>> >> >> + * user, tracing_map_sort_entry objects contain a number of additional
>> >> >> + * fields which are used for caching and internal purposes and can
>> >> >> + * safely be ignored.
>> >> >> + */
>> >> >> +
>> >> >> +struct tracing_map_field {
>> >> >> + tracing_map_cmp_fn_t cmp_fn;
>> >> >> + union {
>> >> >> + atomic64_t sum;
>> >> >> + unsigned int offset;
>> >> >> + };
>> >> >> +};
>> >> >> +
>> >> >> +struct tracing_map_elt {
>> >> >> + struct tracing_map *map;
>> >> >> + struct tracing_map_field *fields;
>> >> >> + void *key;
>> >> >> + void *private_data;
>> >> >> +};
>> >> >> +
>> >> >> +struct tracing_map_entry {
>> >> >> + u32 key;
>> >> >> + struct tracing_map_elt *val;
>> >> >> +};
>> >> >> +
>> >> >> +struct tracing_map_sort_key {
>> >> >> + unsigned int field_idx;
>> >> >> + bool descending;
>> >> >> +};
>> >> >> +
>> >> >> +struct tracing_map_sort_entry {
>> >> >> + void *key;
>> >> >> + struct tracing_map_elt *elt;
>> >> >> + bool elt_copied;
>> >> >> + bool dup;
>> >> >> +};
>> >> >> +
>> >> >> +struct tracing_map {
>> >> >> + unsigned int key_size;
>> >> >> + unsigned int map_bits;
>> >> >> + unsigned int map_size;
>> >> >> + unsigned int max_elts;
>> >> >> + atomic_t next_elt;
>> >> >> + struct tracing_map_elt **elts;
>> >> >> + struct tracing_map_entry *map;
>> >> >> + struct tracing_map_ops *ops;
>> >> >> + void *private_data;
>> >> >> + struct tracing_map_field fields[TRACING_MAP_FIELDS_MAX];
>> >> >> + unsigned int n_fields;
>> >> >> + int key_idx[TRACING_MAP_KEYS_MAX];
>> >> >> + unsigned int n_keys;
>> >> >> + struct tracing_map_sort_key sort_key;
>> >> >> +};
>> >> >> +
>> >> >> +/**
>> >> >> + * struct tracing_map_ops - callbacks for tracing_map
>> >> >> + *
>> >> >> + * The methods in this structure define callback functions for various
>> >> >> + * operations on a tracing_map or objects related to a tracing_map.
>> >> >> + *
>> >> >> + * For a detailed description of tracing_map_elt objects please see
>> >> >> + * the overview of tracing_map data structures at the beginning of
>> >> >> + * this file.
>> >> >> + *
>> >> >> + * All the methods below are optional.
>> >> >> + *
>> >> >> + * @elt_alloc: When a tracing_map_elt is allocated, this function, if
>> >> >> + * defined, will be called and gives clients the opportunity to
>> >> >> + * allocate additional data and attach it to the element
>> >> >> + * (tracing_map_elt->private_data is meant for that purpose).
>> >> >> + * Element allocation occurs before tracing begins, when the
>> >> >> + * tracing_map_init() call is made by client code.
>> >> >> + *
>> >> >> + * @elt_copy: At certain points in the lifetime of an element, it may
>> >> >> + * need to be copied. The copy should include a copy of the
>> >> >> + * client-allocated data, which can be copied into the 'to'
>> >> >> + * element from the 'from' element.
>> >> >> + *
>> >> >> + * @elt_free: When a tracing_map_elt is freed, this function is called
>> >> >> + * and allows client-allocated per-element data to be freed.
>> >> >> + *
>> >> >> + * @elt_clear: This callback allows per-element client-defined data to
>> >> >> + * be cleared, if applicable.
>> >> >> + *
>> >> >> + * @elt_init: This callback allows per-element client-defined data to
>> >> >> + * be initialized when used, i.e. when the element is actually
>> >> >> + * claimed by tracing_map_insert() in the context of the map
>> >> >> + * insertion.
>> >> >> + */
>> >> >> +struct tracing_map_ops {
>> >> >> + int (*elt_alloc)(struct tracing_map_elt *elt);
>> >> >> + void (*elt_copy)(struct tracing_map_elt *to,
>> >> >> + struct tracing_map_elt *from);
>> >> >> + void (*elt_free)(struct tracing_map_elt *elt);
>> >> >> + void (*elt_clear)(struct tracing_map_elt *elt);
>> >> >> + void (*elt_init)(struct tracing_map_elt *elt);
>> >> >> +};
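
For illustration only, a minimal sketch of what a client implementing
these callbacks might look like (struct my_data and the my_* names are
invented for this sketch and are not part of the patch):

	#include <linux/slab.h>

	struct my_data {
		u64 first_seen;		/* client-private per-element state */
	};

	static int my_elt_alloc(struct tracing_map_elt *elt)
	{
		/* called at tracing_map_init() time, so GFP_KERNEL is fine */
		elt->private_data = kzalloc(sizeof(struct my_data), GFP_KERNEL);

		return elt->private_data ? 0 : -ENOMEM;
	}

	static void my_elt_free(struct tracing_map_elt *elt)
	{
		kfree(elt->private_data);
	}

	static struct tracing_map_ops my_ops = {
		.elt_alloc	= my_elt_alloc,
		.elt_free	= my_elt_free,
	};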
>> >> >> +
>> >> >> +extern struct tracing_map *tracing_map_create(unsigned int map_bits,
>> >> >> + unsigned int key_size,
>> >> >> + struct tracing_map_ops *ops,
>> >> >> + void *private_data);
>> >> >> +extern int tracing_map_init(struct tracing_map *map);
>> >> >> +
>> >> >> +extern int tracing_map_add_sum_field(struct tracing_map *map);
>> >> >> +extern int tracing_map_add_key_field(struct tracing_map *map,
>> >> >> + unsigned int offset,
>> >> >> + tracing_map_cmp_fn_t cmp_fn);
>> >> >> +
>> >> >> +extern void tracing_map_destroy(struct tracing_map *map);
>> >> >> +extern void tracing_map_clear(struct tracing_map *map);
>> >> >> +
>> >> >> +extern struct tracing_map_elt *
>> >> >> +tracing_map_insert(struct tracing_map *map, void *key);
>> >> >> +
>> >> >> +extern tracing_map_cmp_fn_t tracing_map_cmp_num(int field_size,
>> >> >> + int field_is_signed);
>> >> >> +extern int tracing_map_cmp_string(void *val_a, void *val_b);
>> >> >> +extern int tracing_map_cmp_none(void *val_a, void *val_b);
>> >> >> +
>> >> >> +extern void tracing_map_update_sum(struct tracing_map_elt *elt,
>> >> >> + unsigned int i, u64 n);
>> >> >> +extern u64 tracing_map_read_sum(struct tracing_map_elt *elt, unsigned int i);
>> >> >> +extern void tracing_map_set_field_descr(struct tracing_map *map,
>> >> >> + unsigned int i,
>> >> >> + unsigned int key_offset,
>> >> >> + tracing_map_cmp_fn_t cmp_fn);
>> >> >> +extern int
>> >> >> +tracing_map_sort_entries(struct tracing_map *map,
>> >> >> + struct tracing_map_sort_key *sort_keys,
>> >> >> + unsigned int n_sort_keys,
>> >> >> + struct tracing_map_sort_entry ***sort_entries);
>> >> >> +
>> >> >> +extern void
>> >> >> +tracing_map_destroy_sort_entries(struct tracing_map_sort_entry **entries,
>> >> >> + unsigned int n_entries);
>> >> >> +#endif /* __TRACING_MAP_H */
>> >> >> --
>> >> >> 1.9.3
>> >> >>
>> >> >> --
>> >> >> To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
>> >> >> the body of a message to [email protected]
>> >> >> More majordomo info at http://vger.kernel.org/majordomo-info.html
>> >> >> Please read the FAQ at http://www.tux.org/lkml/
>> >> >
>> >> > --
>> >> > Mathieu Desnoyers
>> >> > EfficiOS Inc.
>> >> > http://www.efficios.com

--
Mathieu Desnoyers
EfficiOS Inc.
http://www.efficios.com
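
To make the tracing_map.h API sequence quoted above concrete, here is a
minimal, hypothetical client sketch (all my_* names are invented for
illustration and are not part of the patch):

	#include <linux/err.h>
	#include "tracing_map.h"

	struct my_key {
		int pid;
	};

	static struct tracing_map *my_map;
	static int my_hits_idx, my_pid_idx;

	static int my_map_setup(void)
	{
		/* map_bits = 11: max_elts = 2048, internal table size 4096 */
		my_map = tracing_map_create(11, sizeof(struct my_key),
					    NULL, NULL);
		if (IS_ERR(my_map))
			return PTR_ERR(my_map);

		/* one sum and one key field - the required minimum of two */
		my_hits_idx = tracing_map_add_sum_field(my_map);
		my_pid_idx = tracing_map_add_key_field(my_map, 0,
					tracing_map_cmp_num(sizeof(int), 1));

		/* pre-allocate the pool of 2048 tracing_map_elts */
		return tracing_map_init(my_map);
	}

	static void my_map_hit(int pid)
	{
		struct my_key key = { .pid = pid };
		struct tracing_map_elt *elt;

		elt = tracing_map_insert(my_map, &key);
		if (elt)	/* NULL once the elt pool is exhausted */
			tracing_map_update_sum(elt, my_hits_idx, 1);
	}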

2015-07-18 12:52:57

by Tom Zanussi

[permalink] [raw]
Subject: Re: [PATCH v9 07/22] tracing: Add lock-free tracing_map

On Sat, 2015-07-18 at 02:40 +0000, Mathieu Desnoyers wrote:
> ----- On Jul 17, 2015, at 7:44 PM, Tom Zanussi [email protected] wrote:
>
> > On Fri, 2015-07-17 at 15:48 +0000, Mathieu Desnoyers wrote:
> >> ----- On Jul 16, 2015, at 9:35 PM, Tom Zanussi [email protected]
> >> wrote:
> >>
> >> > Hi Mathieu,
> >> >
> >> > On Thu, 2015-07-16 at 23:25 +0000, Mathieu Desnoyers wrote:
> >> >> * Tom Zanussi wrote:
> >> >> >> Add tracing_map, a special-purpose lock-free map for tracing.
> >> >> >>
> >> >> >> tracing_map is designed to aggregate or 'sum' one or more values
> >> >> >> associated with a specific object of type tracing_map_elt, which
> >> >> >> is associated by the map to a given key.
> >> >> >>
> >> >> >> It provides various hooks allowing per-tracer customization and is
> >> >> >> separated out into a separate file in order to allow it to be shared
> >> >> >> between multiple tracers, but isn't meant to be generally used outside
> >> >> >> of that context.
> >> >> >>
> >> >> >> The tracing_map implementation was inspired by lock-free map
> >> >> >> algorithms originated by Dr. Cliff Click:
> >> >> >>
> >> >> >> http://www.azulsystems.com/blog/cliff/2007-03-26-non-blocking-hashtable
> >> >> >> http://www.azulsystems.com/events/javaone_2007/2007_LockFreeHash.pdf
> >> >>
> >> >> Hi Tom,
> >> >>
> >> >> First question: what is the rationale for implementing another
> >> >> hash table from scratch here ? What is missing in the pre-existing
> >> >> hash table implementations ?
> >> >>
> >> >
> >> > None of the other hash tables allow for lock-free insertion (and I
> >> > didn't see an easy way to add it).
> >>
> >> This is one of the nice things about the Userspace RCU lock-free hash
> >> table we've done a few years ago: it provides lock-free add, add_unique,
> >> removal, and replace, as well as RCU wait-free lookups and traversals.
> >> Resize can be done concurrently by a worker thread. I ported it to the
> >> Linux kernel for Julien's work on latency tracker. You can find the
> >> implementation here: https://github.com/jdesfossez/latency_tracker
> >> (see rculfhash*)
> >> It is a simplified version that has the "resize" feature removed for
> >> simplicity sake. The "insert and lookup" feature you need is called
> >> "add_unique" in our API: it behaves both as a lookup and as an atomic
> >> insert if the key is not found.
> >>
> >
> > Interesting, but it's just as much not upstream as mine is. ;-)
>
> Fair point, although the userspace RCU lock-free hash table has been
> heavily tested and used in user-space in the context of routing tables
> (I remember Stephen Hemminger uses it at Vyatta). So being in userspace
> RCU library got it some exposure and use over years.
>
> >
> > From the perspective of the hist triggers, it doesn't matter what hash
> > table implementation it uses as long as whatever it is supports
> > insertion in any context. In fact the current tracing_map
> > implementation is already the second completely different implementation
> > it's plugged into (see v2 of this patchset for the first). If yours is
> > better and going upstream, I'd be happy to make it the third and forget
> > about mine.
>
> I'm a big fan of waiting until people show a need before proposing
> something for Linux mainline. Having a lock-free hash table for the
> sake of tracing triggers appears to be the perfect use-case. If you
> think it can be useful to you, I can look into doing a proper port
> to the Linux kernel.
>

And I'm a big fan of creating and reusing common components for multiple
use-cases (see my v2 attempt, and I'm guessing you might have some uses
for it too ;-)

So yeah, I'd be happy to try it out as a replacement for tracing_map in
the hist triggers.

Tom

> Thanks,
>
> Mathieu
>
> >
> > Tom
> >
> >> Thanks,
> >>
> >> Mathieu
> >>
> >> >
> >> >> Moreover, you might want to handle the case where jhash() returns
> >> >> 0. AFAIU, there is a race on "insert" in this scenario.
> >> >>
> >> >
> >> > You're right, in that case you'd accidentally overwrite an already
> >> > claimed slot. Thanks for pointing that out.
> >> >
> >> > Tom
> >> >
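
One common way to close that hole is to never store a zero hash - remap
it to a nonzero value before the cmpxchg(). A sketch of one possible
fix (not what the posted patch does):

	key_hash = jhash(key, map->key_size, 0);
	if (!key_hash)
		key_hash = 1;	/* 0 marks a free slot, so never store it */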
> >> >> Thanks,
> >> >>
> >> >> Mathieu
> >> >>
> >> >> >>
> >> >> >> Signed-off-by: Tom Zanussi <[email protected]>
> >> >> >> ---
> >> >> >> kernel/trace/Makefile | 1 +
> >> >> >> kernel/trace/tracing_map.c | 935 +++++++++++++++++++++++++++++++++++++++++++++
> >> >> >> kernel/trace/tracing_map.h | 258 +++++++++++++
> >> >> >> 3 files changed, 1194 insertions(+)
> >> >> >> create mode 100644 kernel/trace/tracing_map.c
> >> >> >> create mode 100644 kernel/trace/tracing_map.h
> >> >> >>
> >> >> >> diff --git a/kernel/trace/Makefile b/kernel/trace/Makefile
> >> >> >> index 9b1044e..3b26cfb 100644
> >> >> >> --- a/kernel/trace/Makefile
> >> >> >> +++ b/kernel/trace/Makefile
> >> >> >> @@ -31,6 +31,7 @@ obj-$(CONFIG_TRACING) += trace_output.o
> >> >> >> obj-$(CONFIG_TRACING) += trace_seq.o
> >> >> >> obj-$(CONFIG_TRACING) += trace_stat.o
> >> >> >> obj-$(CONFIG_TRACING) += trace_printk.o
> >> >> >> +obj-$(CONFIG_TRACING) += tracing_map.o
> >> >> >> obj-$(CONFIG_CONTEXT_SWITCH_TRACER) += trace_sched_switch.o
> >> >> >> obj-$(CONFIG_FUNCTION_TRACER) += trace_functions.o
> >> >> >> obj-$(CONFIG_IRQSOFF_TRACER) += trace_irqsoff.o
> >> >> >> diff --git a/kernel/trace/tracing_map.c b/kernel/trace/tracing_map.c
> >> >> >> new file mode 100644
> >> >> >> index 0000000..a505025
> >> >> >> --- /dev/null
> >> >> >> +++ b/kernel/trace/tracing_map.c
> >> >> >> @@ -0,0 +1,935 @@
> >> >> >> +/*
> >> >> >> + * tracing_map - lock-free map for tracing
> >> >> >> + *
> >> >> >> + * This program is free software; you can redistribute it and/or modify
> >> >> >> + * it under the terms of the GNU General Public License as published by
> >> >> >> + * the Free Software Foundation; either version 2 of the License, or
> >> >> >> + * (at your option) any later version.
> >> >> >> + *
> >> >> >> + * This program is distributed in the hope that it will be useful,
> >> >> >> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> >> >> >> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
> >> >> >> + * GNU General Public License for more details.
> >> >> >> + *
> >> >> >> + * Copyright (C) 2015 Tom Zanussi <[email protected]>
> >> >> >> + *
> >> >> >> + * tracing_map implementation inspired by lock-free map algorithms
> >> >> >> + * originated by Dr. Cliff Click:
> >> >> >> + *
> >> >> >> + * http://www.azulsystems.com/blog/cliff/2007-03-26-non-blocking-hashtable
> >> >> >> + * http://www.azulsystems.com/events/javaone_2007/2007_LockFreeHash.pdf
> >> >> >> + */
> >> >> >> +
> >> >> >> +#include <linux/slab.h>
> >> >> >> +#include <linux/jhash.h>
> >> >> >> +#include <linux/sort.h>
> >> >> >> +
> >> >> >> +#include "tracing_map.h"
> >> >> >> +#include "trace.h"
> >> >> >> +
> >> >> >> +/*
> >> >> >> + * NOTE: For a detailed description of the data structures used by
> >> >> >> + * these functions (such as tracing_map_elt) please see the overview
> >> >> >> + * of tracing_map data structures at the beginning of tracing_map.h.
> >> >> >> + */
> >> >> >> +
> >> >> >> +/**
> >> >> >> + * tracing_map_update_sum - Add a value to a tracing_map_elt's sum field
> >> >> >> + * @elt: The tracing_map_elt
> >> >> >> + * @i: The index of the given sum associated with the tracing_map_elt
> >> >> >> + * @n: The value to add to the sum
> >> >> >> + *
> >> >> >> + * Add n to sum i associated with the specified tracing_map_elt
> >> >> >> + * instance. The index i is the index returned by the call to
> >> >> >> + * tracing_map_add_sum_field() when the tracing map was set up.
> >> >> >> + */
> >> >> >> +void tracing_map_update_sum(struct tracing_map_elt *elt, unsigned int i, u64 n)
> >> >> >> +{
> >> >> >> + atomic64_add(n, &elt->fields[i].sum);
> >> >> >> +}
> >> >> >> +
> >> >> >> +/**
> >> >> >> + * tracing_map_read_sum - Return the value of a tracing_map_elt's sum field
> >> >> >> + * @elt: The tracing_map_elt
> >> >> >> + * @i: The index of the given sum associated with the tracing_map_elt
> >> >> >> + *
> >> >> >> + * Retrieve the value of the sum i associated with the specified
> >> >> >> + * tracing_map_elt instance. The index i is the index returned by the
> >> >> >> + * call to tracing_map_add_sum_field() when the tracing map was set
> >> >> >> + * up.
> >> >> >> + *
> >> >> >> + * Return: The sum associated with field i for elt.
> >> >> >> + */
> >> >> >> +u64 tracing_map_read_sum(struct tracing_map_elt *elt, unsigned int i)
> >> >> >> +{
> >> >> >> + return (u64)atomic64_read(&elt->fields[i].sum);
> >> >> >> +}
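
As a usage sketch (a hypothetical helper; bytes_idx would be the index
returned earlier by tracing_map_add_sum_field()):

	static u64 my_account(struct tracing_map *map, void *key,
			      int bytes_idx, u64 nbytes)
	{
		struct tracing_map_elt *elt;

		elt = tracing_map_insert(map, key);
		if (!elt)
			return 0;	/* elt pool exhausted */

		/* aggregate into, then read back, the per-key sum */
		tracing_map_update_sum(elt, bytes_idx, nbytes);

		return tracing_map_read_sum(elt, bytes_idx);
	}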
> >> >> >> +
> >> >> >> +int tracing_map_cmp_string(void *val_a, void *val_b)
> >> >> >> +{
> >> >> >> + char *a = val_a;
> >> >> >> + char *b = val_b;
> >> >> >> +
> >> >> >> + return strcmp(a, b);
> >> >> >> +}
> >> >> >> +
> >> >> >> +int tracing_map_cmp_none(void *val_a, void *val_b)
> >> >> >> +{
> >> >> >> + return 0;
> >> >> >> +}
> >> >> >> +
> >> >> >> +static int tracing_map_cmp_atomic64(void *val_a, void *val_b)
> >> >> >> +{
> >> >> >> + u64 a = atomic64_read((atomic64_t *)val_a);
> >> >> >> + u64 b = atomic64_read((atomic64_t *)val_b);
> >> >> >> +
> >> >> >> + return (a > b) ? 1 : ((a < b) ? -1 : 0);
> >> >> >> +}
> >> >> >> +
> >> >> >> +#define DEFINE_TRACING_MAP_CMP_FN(type) \
> >> >> >> +static int tracing_map_cmp_##type(void *val_a, void *val_b) \
> >> >> >> +{ \
> >> >> >> + type a = *(type *)val_a; \
> >> >> >> + type b = *(type *)val_b; \
> >> >> >> + \
> >> >> >> + return (a > b) ? 1 : ((a < b) ? -1 : 0); \
> >> >> >> +}
> >> >> >> +
> >> >> >> +DEFINE_TRACING_MAP_CMP_FN(s64);
> >> >> >> +DEFINE_TRACING_MAP_CMP_FN(u64);
> >> >> >> +DEFINE_TRACING_MAP_CMP_FN(s32);
> >> >> >> +DEFINE_TRACING_MAP_CMP_FN(u32);
> >> >> >> +DEFINE_TRACING_MAP_CMP_FN(s16);
> >> >> >> +DEFINE_TRACING_MAP_CMP_FN(u16);
> >> >> >> +DEFINE_TRACING_MAP_CMP_FN(s8);
> >> >> >> +DEFINE_TRACING_MAP_CMP_FN(u8);
> >> >> >> +
> >> >> >> +tracing_map_cmp_fn_t tracing_map_cmp_num(int field_size,
> >> >> >> + int field_is_signed)
> >> >> >> +{
> >> >> >> + tracing_map_cmp_fn_t fn = tracing_map_cmp_none;
> >> >> >> +
> >> >> >> + switch (field_size) {
> >> >> >> + case 8:
> >> >> >> + if (field_is_signed)
> >> >> >> + fn = tracing_map_cmp_s64;
> >> >> >> + else
> >> >> >> + fn = tracing_map_cmp_u64;
> >> >> >> + break;
> >> >> >> + case 4:
> >> >> >> + if (field_is_signed)
> >> >> >> + fn = tracing_map_cmp_s32;
> >> >> >> + else
> >> >> >> + fn = tracing_map_cmp_u32;
> >> >> >> + break;
> >> >> >> + case 2:
> >> >> >> + if (field_is_signed)
> >> >> >> + fn = tracing_map_cmp_s16;
> >> >> >> + else
> >> >> >> + fn = tracing_map_cmp_u16;
> >> >> >> + break;
> >> >> >> + case 1:
> >> >> >> + if (field_is_signed)
> >> >> >> + fn = tracing_map_cmp_s8;
> >> >> >> + else
> >> >> >> + fn = tracing_map_cmp_u8;
> >> >> >> + break;
> >> >> >> + }
> >> >> >> +
> >> >> >> + return fn;
> >> >> >> +}
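
For example, a client describing a signed 32-bit key field would pick
its comparison function like this (sketch; the 0 offset is arbitrary):

	tracing_map_cmp_fn_t cmp_fn;

	/* the second arg nonzero means the field is signed */
	cmp_fn = tracing_map_cmp_num(sizeof(s32), 1);
	tracing_map_add_key_field(map, 0, cmp_fn);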
> >> >> >> +
> >> >> >> +static int tracing_map_add_field(struct tracing_map *map,
> >> >> >> + tracing_map_cmp_fn_t cmp_fn)
> >> >> >> +{
> >> >> >> + int ret = -EINVAL;
> >> >> >> +
> >> >> >> + if (map->n_fields < TRACING_MAP_FIELDS_MAX) {
> >> >> >> + ret = map->n_fields;
> >> >> >> + map->fields[map->n_fields++].cmp_fn = cmp_fn;
> >> >> >> + }
> >> >> >> +
> >> >> >> + return ret;
> >> >> >> +}
> >> >> >> +
> >> >> >> +/**
> >> >> >> + * tracing_map_add_sum_field - Add a field describing a tracing_map sum
> >> >> >> + * @map: The tracing_map
> >> >> >> + *
> >> >> >> + * Add a sum field to the map and return the index identifying it in
> >> >> >> + * the map and associated tracing_map_elts. This is the index used
> >> >> >> + * for instance to update a sum for a particular tracing_map_elt using
> >> >> >> + * tracing_map_update_sum() or reading it via tracing_map_read_sum().
> >> >> >> + *
> >> >> >> + * Return: The index identifying the field in the map and associated
> >> >> >> + * tracing_map_elts.
> >> >> >> + */
> >> >> >> +int tracing_map_add_sum_field(struct tracing_map *map)
> >> >> >> +{
> >> >> >> + return tracing_map_add_field(map, tracing_map_cmp_atomic64);
> >> >> >> +}
> >> >> >> +
> >> >> >> +/**
> >> >> >> + * tracing_map_add_key_field - Add a field describing a tracing_map key
> >> >> >> + * @map: The tracing_map
> >> >> >> + * @offset: The offset within the key
> >> >> >> + * @cmp_fn: The comparison function that will be used to sort on the key
> >> >> >> + *
> >> >> >> + * Let the map know there is a key and that if it's used as a sort key
> >> >> >> + * to use cmp_fn.
> >> >> >> + *
> >> >> >> + * A key can be a subset of a compound key; for that purpose, the
> >> >> >> + * offset param is used to describe where within the compound key
> >> >> >> + * the key referenced by this key field resides.
> >> >> >> + *
> >> >> >> + * Return: The index identifying the field in the map and associated
> >> >> >> + * tracing_map_elts.
> >> >> >> + */
> >> >> >> +int tracing_map_add_key_field(struct tracing_map *map,
> >> >> >> + unsigned int offset,
> >> >> >> + tracing_map_cmp_fn_t cmp_fn)
> >> >> >> +
> >> >> >> +{
> >> >> >> + int idx = tracing_map_add_field(map, cmp_fn);
> >> >> >> +
> >> >> >> + if (idx < 0)
> >> >> >> + return idx;
> >> >> >> +
> >> >> >> + map->fields[idx].offset = offset;
> >> >> >> +
> >> >> >> + map->key_idx[map->n_keys++] = idx;
> >> >> >> +
> >> >> >> + return idx;
> >> >> >> +}
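
To illustrate the offset param with a hypothetical two-part compound
key (the struct and names are invented for illustration):

	struct my_compound_key {
		int cpu;	/* first key part */
		u64 ip;		/* second key part */
	};

	/* each part gets its own key field, located by its byte offset
	 * within the full key (key_size = sizeof(struct my_compound_key)) */
	tracing_map_add_key_field(map, offsetof(struct my_compound_key, cpu),
				  tracing_map_cmp_num(sizeof(int), 1));
	tracing_map_add_key_field(map, offsetof(struct my_compound_key, ip),
				  tracing_map_cmp_num(sizeof(u64), 0));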
> >> >> >> +
> >> >> >> +static void tracing_map_elt_clear(struct tracing_map_elt *elt)
> >> >> >> +{
> >> >> >> + unsigned i;
> >> >> >> +
> >> >> >> + for (i = 0; i < elt->map->n_fields; i++)
> >> >> >> + if (elt->fields[i].cmp_fn == tracing_map_cmp_atomic64)
> >> >> >> + atomic64_set(&elt->fields[i].sum, 0);
> >> >> >> +
> >> >> >> + if (elt->map->ops && elt->map->ops->elt_clear)
> >> >> >> + elt->map->ops->elt_clear(elt);
> >> >> >> +}
> >> >> >> +
> >> >> >> +static void tracing_map_elt_init_fields(struct tracing_map_elt *elt)
> >> >> >> +{
> >> >> >> + unsigned int i;
> >> >> >> +
> >> >> >> + tracing_map_elt_clear(elt);
> >> >> >> +
> >> >> >> + for (i = 0; i < elt->map->n_fields; i++) {
> >> >> >> + elt->fields[i].cmp_fn = elt->map->fields[i].cmp_fn;
> >> >> >> +
> >> >> >> + if (elt->fields[i].cmp_fn != tracing_map_cmp_atomic64)
> >> >> >> + elt->fields[i].offset = elt->map->fields[i].offset;
> >> >> >> + }
> >> >> >> +}
> >> >> >> +
> >> >> >> +static void tracing_map_elt_free(struct tracing_map_elt *elt)
> >> >> >> +{
> >> >> >> + if (!elt)
> >> >> >> + return;
> >> >> >> +
> >> >> >> + if (elt->map->ops && elt->map->ops->elt_free)
> >> >> >> + elt->map->ops->elt_free(elt);
> >> >> >> + kfree(elt->fields);
> >> >> >> + kfree(elt->key);
> >> >> >> + kfree(elt);
> >> >> >> +}
> >> >> >> +
> >> >> >> +static struct tracing_map_elt *tracing_map_elt_alloc(struct tracing_map *map)
> >> >> >> +{
> >> >> >> + struct tracing_map_elt *elt;
> >> >> >> + int err = 0;
> >> >> >> +
> >> >> >> + elt = kzalloc(sizeof(*elt), GFP_KERNEL);
> >> >> >> + if (!elt)
> >> >> >> + return ERR_PTR(-ENOMEM);
> >> >> >> +
> >> >> >> + elt->map = map;
> >> >> >> +
> >> >> >> + elt->key = kzalloc(map->key_size, GFP_KERNEL);
> >> >> >> + if (!elt->key) {
> >> >> >> + err = -ENOMEM;
> >> >> >> + goto free;
> >> >> >> + }
> >> >> >> +
> >> >> >> + elt->fields = kcalloc(map->n_fields, sizeof(*elt->fields), GFP_KERNEL);
> >> >> >> + if (!elt->fields) {
> >> >> >> + err = -ENOMEM;
> >> >> >> + goto free;
> >> >> >> + }
> >> >> >> +
> >> >> >> + tracing_map_elt_init_fields(elt);
> >> >> >> +
> >> >> >> + if (map->ops && map->ops->elt_alloc) {
> >> >> >> + err = map->ops->elt_alloc(elt);
> >> >> >> + if (err)
> >> >> >> + goto free;
> >> >> >> + }
> >> >> >> + return elt;
> >> >> >> + free:
> >> >> >> + tracing_map_elt_free(elt);
> >> >> >> +
> >> >> >> + return ERR_PTR(err);
> >> >> >> +}
> >> >> >> +
> >> >> >> +static struct tracing_map_elt *get_free_elt(struct tracing_map *map)
> >> >> >> +{
> >> >> >> + struct tracing_map_elt *elt = NULL;
> >> >> >> + int idx;
> >> >> >> +
> >> >> >> + idx = atomic_inc_return(&map->next_elt);
> >> >> >> + if (idx < map->max_elts) {
> >> >> >> + elt = map->elts[idx];
> >> >> >> + if (map->ops && map->ops->elt_init)
> >> >> >> + map->ops->elt_init(elt);
> >> >> >> + }
> >> >> >> +
> >> >> >> + return elt;
> >> >> >> +}
> >> >> >> +
> >> >> >> +static void tracing_map_free_elts(struct tracing_map *map)
> >> >> >> +{
> >> >> >> + unsigned int i;
> >> >> >> +
> >> >> >> + if (!map->elts)
> >> >> >> + return;
> >> >> >> +
> >> >> >> + for (i = 0; i < map->max_elts; i++)
> >> >> >> + tracing_map_elt_free(map->elts[i]);
> >> >> >> +
> >> >> >> + kfree(map->elts);
> >> >> >> +}
> >> >> >> +
> >> >> >> +static int tracing_map_alloc_elts(struct tracing_map *map)
> >> >> >> +{
> >> >> >> + unsigned int i;
> >> >> >> +
> >> >> >> + map->elts = kcalloc(map->max_elts, sizeof(struct tracing_map_elt *),
> >> >> >> + GFP_KERNEL);
> >> >> >> + if (!map->elts)
> >> >> >> + return -ENOMEM;
> >> >> >> +
> >> >> >> + for (i = 0; i < map->max_elts; i++) {
> >> >> >> + map->elts[i] = tracing_map_elt_alloc(map);
> >> >> >> + /* tracing_map_elt_alloc() returns ERR_PTR(), not NULL, on failure */
> >> >> >> + if (IS_ERR(map->elts[i])) {
> >> >> >> + int err = PTR_ERR(map->elts[i]);
> >> >> >> +
> >> >> >> + map->elts[i] = NULL;
> >> >> >> + tracing_map_free_elts(map);
> >> >> >> +
> >> >> >> + return err;
> >> >> >> + }
> >> >> >> + }
> >> >> >> +
> >> >> >> + return 0;
> >> >> >> +}
> >> >> >> +
> >> >> >> +static inline bool keys_match(void *key, void *test_key, unsigned key_size)
> >> >> >> +{
> >> >> >> + bool match = true;
> >> >> >> +
> >> >> >> + if (memcmp(key, test_key, key_size))
> >> >> >> + match = false;
> >> >> >> +
> >> >> >> + return match;
> >> >> >> +}
> >> >> >> +
> >> >> >> +/**
> >> >> >> + * tracing_map_insert - Insert key and/or retrieve val from a tracing_map
> >> >> >> + * @map: The tracing_map to insert into
> >> >> >> + * @key: The key to insert
> >> >> >> + *
> >> >> >> + * Inserts a key into a tracing_map and creates and returns a new
> >> >> >> + * tracing_map_elt for it, or if the key has already been inserted by
> >> >> >> + * a previous call, returns the tracing_map_elt already associated
> >> >> >> + * with it. When the map was created, the number of elements to be
> >> >> >> + * allocated for the map was specified (internally maintained as
> >> >> >> + * 'max_elts' in struct tracing_map), and that number of
> >> >> >> + * tracing_map_elts was created by tracing_map_init(). This is the
> >> >> >> + * pre-allocated pool of tracing_map_elts that tracing_map_insert()
> >> >> >> + * will allocate from when adding new keys. Once that pool is
> >> >> >> + * exhausted, tracing_map_insert() is useless and will return NULL to
> >> >> >> + * signal that state.
> >> >> >> + *
> >> >> >> + * This is a lock-free tracing map insertion function implementing a
> >> >> >> + * modified form of Cliff Click's basic insertion algorithm. It
> >> >> >> + * requires the table size be a power of two. To prevent any
> >> >> >> + * possibility of an infinite loop we always make the internal table
> >> >> >> + * size double the size of the requested table size (max_elts * 2).
> >> >> >> + * Likewise, we never reuse a slot or resize or delete elements - when
> >> >> >> + * we've reached max_elts entries, we simply return NULL once we've
> >> >> >> + * run out of entries. Readers can at any point in time traverse the
> >> >> >> + * tracing map and safely access the key/val pairs.
> >> >> >> + *
> >> >> >> + * Return: the tracing_map_elt pointer val associated with the key.
> >> >> >> + * If this was a newly inserted key, the val will be a newly allocated
> >> >> >> + * and associated tracing_map_elt pointer val. If the key wasn't
> >> >> >> + * found and the pool of tracing_map_elts has been exhausted, NULL is
> >> >> >> + * returned and no further insertions will succeed.
> >> >> >> + */
> >> >> >> +struct tracing_map_elt *tracing_map_insert(struct tracing_map *map, void *key)
> >> >> >> +{
> >> >> >> + u32 idx, key_hash, test_key;
> >> >> >> +
> >> >> >> + key_hash = jhash(key, map->key_size, 0);
> >> >> >> + idx = key_hash >> (32 - (map->map_bits + 1));
> >> >> >> +
> >> >> >> + while (1) {
> >> >> >> + idx &= (map->map_size - 1);
> >> >> >> + test_key = map->map[idx].key;
> >> >> >> +
> >> >> >> + if (test_key && test_key == key_hash && map->map[idx].val &&
> >> >> >> + keys_match(key, map->map[idx].val->key, map->key_size))
> >> >> >> + return map->map[idx].val;
> >> >> >> +
> >> >> >> + if (!test_key && !cmpxchg(&map->map[idx].key, 0, key_hash)) {
> >> >> >> + struct tracing_map_elt *elt;
> >> >> >> +
> >> >> >> + elt = get_free_elt(map);
> >> >> >> + if (!elt)
> >> >> >> + break;
> >> >> >> + memcpy(elt->key, key, map->key_size);
> >> >> >> + map->map[idx].val = elt;
> >> >> >> +
> >> >> >> + return map->map[idx].val;
> >> >> >> + }
> >> >> >> + idx++;
> >> >> >> + }
> >> >> >> +
> >> >> >> + return NULL;
> >> >> >> +}
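
Callers therefore treat a NULL return as 'table full' rather than as a
hard error, e.g. (sketch; my_drops is an invented counter):

	elt = tracing_map_insert(map, key);
	if (!elt) {
		/* elt pool exhausted: drop this event; entries already
		 * in the map remain valid and readable */
		atomic64_inc(&my_drops);
		return;
	}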
> >> >> >> +
> >> >> >> +/**
> >> >> >> + * tracing_map_destroy - Destroy a tracing_map
> >> >> >> + * @map: The tracing_map to destroy
> >> >> >> + *
> >> >> >> + * Frees a tracing_map along with its associated array of
> >> >> >> + * tracing_map_elts.
> >> >> >> + *
> >> >> >> + * Callers should make sure there are no readers or writers actively
> >> >> >> + * reading or inserting into the map before calling this.
> >> >> >> + */
> >> >> >> +void tracing_map_destroy(struct tracing_map *map)
> >> >> >> +{
> >> >> >> + if (!map)
> >> >> >> + return;
> >> >> >> +
> >> >> >> + tracing_map_free_elts(map);
> >> >> >> +
> >> >> >> + kfree(map->map);
> >> >> >> + kfree(map);
> >> >> >> +}
> >> >> >> +
> >> >> >> +/**
> >> >> >> + * tracing_map_clear - Clear a tracing_map
> >> >> >> + * @map: The tracing_map to clear
> >> >> >> + *
> >> >> >> + * Resets the tracing map to a cleared or initial state. The
> >> >> >> + * tracing_map_elts are all cleared, and the array of struct
> >> >> >> + * tracing_map_entry is reset to an initialized state.
> >> >> >> + *
> >> >> >> + * Callers should make sure there are no writers actively inserting
> >> >> >> + * into the map before calling this.
> >> >> >> + */
> >> >> >> +void tracing_map_clear(struct tracing_map *map)
> >> >> >> +{
> >> >> >> + unsigned int i, size;
> >> >> >> +
> >> >> >> + atomic_set(&map->next_elt, -1);
> >> >> >> +
> >> >> >> + size = map->map_size * sizeof(struct tracing_map_entry);
> >> >> >> + memset(map->map, 0, size);
> >> >> >> +
> >> >> >> + for (i = 0; i < map->max_elts; i++)
> >> >> >> + tracing_map_elt_clear(map->elts[i]);
> >> >> >> +}
> >> >> >> +
> >> >> >> +static void set_sort_key(struct tracing_map *map,
> >> >> >> + struct tracing_map_sort_key *sort_key)
> >> >> >> +{
> >> >> >> + map->sort_key = *sort_key;
> >> >> >> +}
> >> >> >> +
> >> >> >> +/**
> >> >> >> + * tracing_map_create - Create a lock-free map and element pool
> >> >> >> + * @map_bits: The size of the map (2 ** map_bits)
> >> >> >> + * @key_size: The size of the key for the map in bytes
> >> >> >> + * @ops: Optional client-defined tracing_map_ops instance
> >> >> >> + * @private_data: Client data associated with the map
> >> >> >> + *
> >> >> >> + * Creates and sets up a map to contain 2 ** map_bits number of
> >> >> >> + * elements (internally maintained as 'max_elts' in struct
> >> >> >> + * tracing_map). Before using, map fields should be added to the map
> >> >> >> + * with tracing_map_add_sum_field() and tracing_map_add_key_field().
> >> >> >> + * tracing_map_init() should then be called to allocate the array of
> >> >> >> + * tracing_map_elts, in order to avoid allocating anything in the map
> >> >> >> + * insertion path. The user-specified map size reflects the maximum
> >> >> >> + * number of elements that can be contained in the table requested by
> >> >> >> + * the user - internally we double that in order to keep the table
> >> >> >> + * sparse and keep collisions manageable.
> >> >> >> + *
> >> >> >> + * A tracing_map is a special-purpose map designed to aggregate or
> >> >> >> + * 'sum' one or more values associated with a specific object of type
> >> >> >> + * tracing_map_elt, which is attached by the map to a given key.
> >> >> >> + *
> >> >> >> + * tracing_map_create() sets up the map itself, and provides
> >> >> >> + * operations for inserting tracing_map_elts, but doesn't allocate the
> >> >> >> + * tracing_map_elts themselves, or provide a means for describing the
> >> >> >> + * keys or sums associated with the tracing_map_elts. All
> >> >> >> + * tracing_map_elts for a given map have the same set of sums and
> >> >> >> + * keys, which are defined by the client using the functions
> >> >> >> + * tracing_map_add_key_field() and tracing_map_add_sum_field(). Once
> >> >> >> + * the fields are defined, the pool of elements allocated for the map
> >> >> >> + * can be created, which occurs when the client code calls
> >> >> >> + * tracing_map_init().
> >> >> >> + *
> >> >> >> + * When tracing_map_init() returns, tracing_map_elt elements can be
> >> >> >> + * inserted into the map using tracing_map_insert(). When called,
> >> >> >> + * tracing_map_insert() grabs a free tracing_map_elt from the pool, or
> >> >> >> + * finds an existing match in the map and in either case returns it.
> >> >> >> + * The client can then use tracing_map_update_sum() and
> >> >> >> + * tracing_map_read_sum() to update or read a given sum field for the
> >> >> >> + * tracing_map_elt.
> >> >> >> + *
> >> >> >> + * The client can at any point retrieve and traverse the current set
> >> >> >> + * of inserted tracing_map_elts in a tracing_map, via
> >> >> >> + * tracing_map_sort_entries(). Sorting can be done on any field,
> >> >> >> + * including keys.
> >> >> >> + *
> >> >> >> + * See tracing_map.h for a description of tracing_map_ops.
> >> >> >> + *
> >> >> >> + * Return: the tracing_map pointer if successful, ERR_PTR if not.
> >> >> >> + */
> >> >> >> +struct tracing_map *tracing_map_create(unsigned int map_bits,
> >> >> >> + unsigned int key_size,
> >> >> >> + struct tracing_map_ops *ops,
> >> >> >> + void *private_data)
> >> >> >> +{
> >> >> >> + struct tracing_map *map;
> >> >> >> + unsigned int i;
> >> >> >> +
> >> >> >> + if (map_bits < TRACING_MAP_BITS_MIN ||
> >> >> >> + map_bits > TRACING_MAP_BITS_MAX)
> >> >> >> + return ERR_PTR(-EINVAL);
> >> >> >> +
> >> >> >> + map = kzalloc(sizeof(*map), GFP_KERNEL);
> >> >> >> + if (!map)
> >> >> >> + return ERR_PTR(-ENOMEM);
> >> >> >> +
> >> >> >> + map->map_bits = map_bits;
> >> >> >> + map->max_elts = (1 << map_bits);
> >> >> >> + atomic_set(&map->next_elt, -1);
> >> >> >> +
> >> >> >> + map->map_size = (1 << (map_bits + 1));
> >> >> >> + map->ops = ops;
> >> >> >> +
> >> >> >> + map->private_data = private_data;
> >> >> >> +
> >> >> >> + map->map = kcalloc(map->map_size, sizeof(struct tracing_map_entry),
> >> >> >> + GFP_KERNEL);
> >> >> >> + if (!map->map)
> >> >> >> + goto free;
> >> >> >> +
> >> >> >> + map->key_size = key_size;
> >> >> >> + for (i = 0; i < TRACING_MAP_KEYS_MAX; i++)
> >> >> >> + map->key_idx[i] = -1;
> >> >> >> + out:
> >> >> >> + return map;
> >> >> >> + free:
> >> >> >> + tracing_map_destroy(map);
> >> >> >> + map = ERR_PTR(-ENOMEM);
> >> >> >> +
> >> >> >> + goto out;
> >> >> >> +}
> >> >> >> +
> >> >> >> +/**
> >> >> >> + * tracing_map_init - Allocate and clear a map's tracing_map_elts
> >> >> >> + * @map: The tracing_map to initialize
> >> >> >> + *
> >> >> >> + * Allocates and clears a pool of tracing_map_elts equal to the
> >> >> >> + * user-specified size of 2 ** map_bits (internally maintained as
> >> >> >> + * 'max_elts' in struct tracing_map). Before using, the map fields
> >> >> >> + * should be added to the map with tracing_map_add_sum_field() and
> >> >> >> + * tracing_map_add_key_field(). tracing_map_init() should then be
> >> >> >> + * called to allocate the array of tracing_map_elts, in order to avoid
> >> >> >> + * allocating anything in the map insertion path. The user-specified
> >> >> >> + * map size reflects the max number of elements requested by the user
> >> >> >> + * - internally we double that in order to keep the table sparse and
> >> >> >> + * keep collisions manageable.
> >> >> >> + *
> >> >> >> + * See tracing_map.h for a description of tracing_map_ops.
> >> >> >> + *
> >> >> >> + * Return: 0 if successful, a negative error code if not.
> >> >> >> + */
> >> >> >> +int tracing_map_init(struct tracing_map *map)
> >> >> >> +{
> >> >> >> + int err;
> >> >> >> +
> >> >> >> + if (map->n_fields < 2)
> >> >> >> + return -EINVAL; /* need at least 1 key and 1 val */
> >> >> >> +
> >> >> >> + err = tracing_map_alloc_elts(map);
> >> >> >> + if (err)
> >> >> >> + return err;
> >> >> >> +
> >> >> >> + tracing_map_clear(map);
> >> >> >> +
> >> >> >> + return err;
> >> >> >> +}
> >> >> >> +
> >> >> >> +static int cmp_entries_dup(const struct tracing_map_sort_entry **a,
> >> >> >> + const struct tracing_map_sort_entry **b)
> >> >> >> +{
> >> >> >> + int ret = 0;
> >> >> >> +
> >> >> >> + if (memcmp((*a)->key, (*b)->key, (*a)->elt->map->key_size))
> >> >> >> + ret = 1;
> >> >> >> +
> >> >> >> + return ret;
> >> >> >> +}
> >> >> >> +
> >> >> >> +static int cmp_entries_sum(const struct tracing_map_sort_entry **a,
> >> >> >> + const struct tracing_map_sort_entry **b)
> >> >> >> +{
> >> >> >> + const struct tracing_map_elt *elt_a, *elt_b;
> >> >> >> + struct tracing_map_sort_key *sort_key;
> >> >> >> + struct tracing_map_field *field;
> >> >> >> + tracing_map_cmp_fn_t cmp_fn;
> >> >> >> + void *val_a, *val_b;
> >> >> >> + int ret = 0;
> >> >> >> +
> >> >> >> + elt_a = (*a)->elt;
> >> >> >> + elt_b = (*b)->elt;
> >> >> >> +
> >> >> >> + sort_key = &elt_a->map->sort_key;
> >> >> >> +
> >> >> >> + field = &elt_a->fields[sort_key->field_idx];
> >> >> >> + cmp_fn = field->cmp_fn;
> >> >> >> +
> >> >> >> + val_a = &elt_a->fields[sort_key->field_idx].sum;
> >> >> >> + val_b = &elt_b->fields[sort_key->field_idx].sum;
> >> >> >> +
> >> >> >> + ret = cmp_fn(val_a, val_b);
> >> >> >> + if (sort_key->descending)
> >> >> >> + ret = -ret;
> >> >> >> +
> >> >> >> + return ret;
> >> >> >> +}
> >> >> >> +
> >> >> >> +static int cmp_entries_key(const struct tracing_map_sort_entry **a,
> >> >> >> + const struct tracing_map_sort_entry **b)
> >> >> >> +{
> >> >> >> + const struct tracing_map_elt *elt_a, *elt_b;
> >> >> >> + struct tracing_map_sort_key *sort_key;
> >> >> >> + struct tracing_map_field *field;
> >> >> >> + tracing_map_cmp_fn_t cmp_fn;
> >> >> >> + void *val_a, *val_b;
> >> >> >> + int ret = 0;
> >> >> >> +
> >> >> >> + elt_a = (*a)->elt;
> >> >> >> + elt_b = (*b)->elt;
> >> >> >> +
> >> >> >> + sort_key = &elt_a->map->sort_key;
> >> >> >> +
> >> >> >> + field = &elt_a->fields[sort_key->field_idx];
> >> >> >> +
> >> >> >> + cmp_fn = field->cmp_fn;
> >> >> >> +
> >> >> >> + val_a = elt_a->key + field->offset;
> >> >> >> + val_b = elt_b->key + field->offset;
> >> >> >> +
> >> >> >> + ret = cmp_fn(val_a, val_b);
> >> >> >> + if (sort_key->descending)
> >> >> >> + ret = -ret;
> >> >> >> +
> >> >> >> + return ret;
> >> >> >> +}
> >> >> >> +
> >> >> >> +static void destroy_sort_entry(struct tracing_map_sort_entry *entry)
> >> >> >> +{
> >> >> >> + if (!entry)
> >> >> >> + return;
> >> >> >> +
> >> >> >> + if (entry->elt_copied)
> >> >> >> + tracing_map_elt_free(entry->elt);
> >> >> >> +
> >> >> >> + kfree(entry);
> >> >> >> +}
> >> >> >> +
> >> >> >> +/**
> >> >> >> + * tracing_map_destroy_sort_entries - Destroy a tracing_map_sort_entries() array
> >> >> >> + * @entries: The entries to destroy
> >> >> >> + * @n_entries: The number of entries in the array
> >> >> >> + *
> >> >> >> + * Destroy the elements returned by a tracing_map_sort_entries() call.
> >> >> >> + */
> >> >> >> +void tracing_map_destroy_sort_entries(struct tracing_map_sort_entry **entries,
> >> >> >> + unsigned int n_entries)
> >> >> >> +{
> >> >> >> + unsigned int i;
> >> >> >> +
> >> >> >> + for (i = 0; i < n_entries; i++)
> >> >> >> + destroy_sort_entry(entries[i]);
> >> >> >> +}
> >> >> >> +
> >> >> >> +static struct tracing_map_sort_entry *
> >> >> >> +create_sort_entry(void *key, struct tracing_map_elt *elt)
> >> >> >> +{
> >> >> >> + struct tracing_map_sort_entry *sort_entry;
> >> >> >> +
> >> >> >> + sort_entry = kzalloc(sizeof(*sort_entry), GFP_KERNEL);
> >> >> >> + if (!sort_entry)
> >> >> >> + return NULL;
> >> >> >> +
> >> >> >> + sort_entry->key = key;
> >> >> >> + sort_entry->elt = elt;
> >> >> >> +
> >> >> >> + return sort_entry;
> >> >> >> +}
> >> >> >> +
> >> >> >> +static struct tracing_map_elt *copy_elt(struct tracing_map_elt *elt)
> >> >> >> +{
> >> >> >> + struct tracing_map_elt *dup_elt;
> >> >> >> + unsigned int i;
> >> >> >> +
> >> >> >> + dup_elt = tracing_map_elt_alloc(elt->map);
> >> >> >> + /* tracing_map_elt_alloc() returns ERR_PTR(), not NULL, on failure */
> >> >> >> + if (IS_ERR(dup_elt))
> >> >> >> + return NULL;
> >> >> >> +
> >> >> >> + if (elt->map->ops && elt->map->ops->elt_copy)
> >> >> >> + elt->map->ops->elt_copy(dup_elt, elt);
> >> >> >> +
> >> >> >> + dup_elt->private_data = elt->private_data;
> >> >> >> + memcpy(dup_elt->key, elt->key, elt->map->key_size);
> >> >> >> +
> >> >> >> + for (i = 0; i < elt->map->n_fields; i++) {
> >> >> >> + atomic64_set(&dup_elt->fields[i].sum,
> >> >> >> + atomic64_read(&elt->fields[i].sum));
> >> >> >> + dup_elt->fields[i].cmp_fn = elt->fields[i].cmp_fn;
> >> >> >> + }
> >> >> >> +
> >> >> >> + return dup_elt;
> >> >> >> +}
> >> >> >> +
> >> >> >> +static int merge_dup(struct tracing_map_sort_entry **sort_entries,
> >> >> >> + unsigned int target, unsigned int dup)
> >> >> >> +{
> >> >> >> + struct tracing_map_elt *target_elt, *elt;
> >> >> >> + bool first_dup = (dup - target) == 1;
> >> >> >> + int i;
> >> >> >> +
> >> >> >> + if (first_dup) {
> >> >> >> + elt = sort_entries[target]->elt;
> >> >> >> + target_elt = copy_elt(elt);
> >> >> >> + if (!target_elt)
> >> >> >> + return -ENOMEM;
> >> >> >> + sort_entries[target]->elt = target_elt;
> >> >> >> + sort_entries[target]->elt_copied = true;
> >> >> >> + } else
> >> >> >> + target_elt = sort_entries[target]->elt;
> >> >> >> +
> >> >> >> + elt = sort_entries[dup]->elt;
> >> >> >> +
> >> >> >> + for (i = 0; i < elt->map->n_fields; i++)
> >> >> >> + atomic64_add(atomic64_read(&elt->fields[i].sum),
> >> >> >> + &target_elt->fields[i].sum);
> >> >> >> +
> >> >> >> + sort_entries[dup]->dup = true;
> >> >> >> +
> >> >> >> + return 0;
> >> >> >> +}
> >> >> >> +
> >> >> >> +static int merge_dups(struct tracing_map_sort_entry **sort_entries,
> >> >> >> + int n_entries, unsigned int key_size)
> >> >> >> +{
> >> >> >> + unsigned int dups = 0, total_dups = 0;
> >> >> >> + int err, i, j;
> >> >> >> + void *key;
> >> >> >> +
> >> >> >> + if (n_entries < 2)
> >> >> >> + return total_dups;
> >> >> >> +
> >> >> >> + sort(sort_entries, n_entries, sizeof(struct tracing_map_sort_entry *),
> >> >> >> + (int (*)(const void *, const void *))cmp_entries_dup, NULL);
> >> >> >> +
> >> >> >> + key = sort_entries[0]->key;
> >> >> >> + for (i = 1; i < n_entries; i++) {
> >> >> >> + if (!memcmp(sort_entries[i]->key, key, key_size)) {
> >> >> >> + dups++; total_dups++;
> >> >> >> + err = merge_dup(sort_entries, i - dups, i);
> >> >> >> + if (err)
> >> >> >> + return err;
> >> >> >> + continue;
> >> >> >> + }
> >> >> >> + key = sort_entries[i]->key;
> >> >> >> + dups = 0;
> >> >> >> + }
> >> >> >> +
> >> >> >> + if (!total_dups)
> >> >> >> + return total_dups;
> >> >> >> +
> >> >> >> + for (i = 0, j = 0; i < n_entries; i++) {
> >> >> >> + if (!sort_entries[i]->dup) {
> >> >> >> + sort_entries[j] = sort_entries[i];
> >> >> >> + if (j++ != i)
> >> >> >> + sort_entries[i] = NULL;
> >> >> >> + } else {
> >> >> >> + destroy_sort_entry(sort_entries[i]);
> >> >> >> + sort_entries[i] = NULL;
> >> >> >> + }
> >> >> >> + }
> >> >> >> +
> >> >> >> + return total_dups;
> >> >> >> +}
> >> >> >> +
> >> >> >> +static bool is_key(struct tracing_map *map, unsigned int field_idx)
> >> >> >> +{
> >> >> >> + unsigned int i;
> >> >> >> +
> >> >> >> + for (i = 0; i < map->n_keys; i++)
> >> >> >> + if (map->key_idx[i] == field_idx)
> >> >> >> + return true;
> >> >> >> + return false;
> >> >> >> +}
> >> >> >> +
> >> >> >> +static void sort_secondary(struct tracing_map *map,
> >> >> >> + const struct tracing_map_sort_entry **entries,
> >> >> >> + unsigned int n_entries,
> >> >> >> + struct tracing_map_sort_key *primary_key,
> >> >> >> + struct tracing_map_sort_key *secondary_key)
> >> >> >> +{
> >> >> >> + int (*primary_fn)(const struct tracing_map_sort_entry **,
> >> >> >> + const struct tracing_map_sort_entry **);
> >> >> >> + int (*secondary_fn)(const struct tracing_map_sort_entry **,
> >> >> >> + const struct tracing_map_sort_entry **);
> >> >> >> + unsigned i, start = 0, n_sub = 1;
> >> >> >> +
> >> >> >> + if (is_key(map, primary_key->field_idx))
> >> >> >> + primary_fn = cmp_entries_key;
> >> >> >> + else
> >> >> >> + primary_fn = cmp_entries_sum;
> >> >> >> +
> >> >> >> + if (is_key(map, secondary_key->field_idx))
> >> >> >> + secondary_fn = cmp_entries_key;
> >> >> >> + else
> >> >> >> + secondary_fn = cmp_entries_sum;
> >> >> >> +
> >> >> >> + for (i = 0; i < n_entries - 1; i++) {
> >> >> >> + const struct tracing_map_sort_entry **a = &entries[i];
> >> >> >> + const struct tracing_map_sort_entry **b = &entries[i + 1];
> >> >> >> +
> >> >> >> + if (primary_fn(a, b) == 0) {
> >> >> >> + n_sub++;
> >> >> >> + if (i < n_entries - 2)
> >> >> >> + continue;
> >> >> >> + }
> >> >> >> +
> >> >> >> + if (n_sub < 2) {
> >> >> >> + start = i + 1;
> >> >> >> + n_sub = 1;
> >> >> >> + continue;
> >> >> >> + }
> >> >> >> +
> >> >> >> + set_sort_key(map, secondary_key);
> >> >> >> + sort(&entries[start], n_sub,
> >> >> >> + sizeof(struct tracing_map_sort_entry *),
> >> >> >> + (int (*)(const void *, const void *))secondary_fn, NULL);
> >> >> >> + set_sort_key(map, primary_key);
> >> >> >> +
> >> >> >> + start = i + 1;
> >> >> >> + n_sub = 1;
> >> >> >> + }
> >> >> >> +}
> >> >> >> +
> >> >> >> +/**
> >> >> >> + * tracing_map_sort_entries - Sort the current set of tracing_map_elts in a map
> >> >> >> + * @map: The tracing_map
> >> >> >> + * @sort_keys: The sort key(s) to use for sorting, primary key first
> >> >> >> + * @n_sort_keys: The number of sort keys (at most TRACING_MAP_SORT_KEYS_MAX)
> >> >> >> + * @sort_entries: outval: pointer to allocated and sorted array of entries
> >> >> >> + *
> >> >> >> + * tracing_map_sort_entries() sorts the current set of entries in the
> >> >> >> + * map and returns the list of tracing_map_sort_entries containing
> >> >> >> + * them to the client in the sort_entries param. The client can
> >> >> >> + * access the struct tracing_map_elt element of interest directly as
> >> >> >> + * the 'elt' field of a returned struct tracing_map_sort_entry object.
> >> >> >> + *
> >> >> >> + * Each sort key has only two fields: field_idx and descending.
> >> >> >> + * 'field_idx' refers to the index of the field added via
> >> >> >> + * tracing_map_add_sum_field() or tracing_map_add_key_field() when
> >> >> >> + * the tracing_map was initialized.
> >> >> >> + * 'descending' is a flag that if set reverses the sort order, which
> >> >> >> + * by default is ascending.
> >> >> >> + *
> >> >> >> + * The client should not hold on to the returned array but should use
> >> >> >> + * it and call tracing_map_destroy_sort_entries() when done.
> >> >> >> + *
> >> >> >> + * Return: the number of sort_entries in the struct tracing_map_sort_entry
> >> >> >> + * array, negative on error
> >> >> >> + */
> >> >> >> +int tracing_map_sort_entries(struct tracing_map *map,
> >> >> >> + struct tracing_map_sort_key *sort_keys,
> >> >> >> + unsigned int n_sort_keys,
> >> >> >> + struct tracing_map_sort_entry ***sort_entries)
> >> >> >> +{
> >> >> >> + int (*cmp_entries_fn)(const struct tracing_map_sort_entry **,
> >> >> >> + const struct tracing_map_sort_entry **);
> >> >> >> + struct tracing_map_sort_entry *sort_entry, **entries;
> >> >> >> + int i, n_entries, ret;
> >> >> >> +
> >> >> >> + entries = kcalloc(map->max_elts, sizeof(sort_entry), GFP_KERNEL);
> >> >> >> + if (!entries)
> >> >> >> + return -ENOMEM;
> >> >> >> +
> >> >> >> + for (i = 0, n_entries = 0; i < map->map_size; i++) {
> >> >> >> + if (!map->map[i].key || !map->map[i].val)
> >> >> >> + continue;
> >> >> >> +
> >> >> >> + entries[n_entries] = create_sort_entry(map->map[i].val->key,
> >> >> >> + map->map[i].val);
> >> >> >> + if (!entries[n_entries++]) {
> >> >> >> + ret = -ENOMEM;
> >> >> >> + goto free;
> >> >> >> + }
> >> >> >> + }
> >> >> >> +
> >> >> >> + if (n_entries == 0) {
> >> >> >> + ret = 0;
> >> >> >> + goto free;
> >> >> >> + }
> >> >> >> +
> >> >> >> + if (n_entries == 1) {
> >> >> >> + *sort_entries = entries;
> >> >> >> + return 1;
> >> >> >> + }
> >> >> >> +
> >> >> >> + ret = merge_dups(entries, n_entries, map->key_size);
> >> >> >> + if (ret < 0)
> >> >> >> + goto free;
> >> >> >> + n_entries -= ret;
> >> >> >> +
> >> >> >> + if (is_key(map, sort_keys[0].field_idx))
> >> >> >> + cmp_entries_fn = cmp_entries_key;
> >> >> >> + else
> >> >> >> + cmp_entries_fn = cmp_entries_sum;
> >> >> >> +
> >> >> >> + set_sort_key(map, &sort_keys[0]);
> >> >> >> +
> >> >> >> + sort(entries, n_entries, sizeof(struct tracing_map_sort_entry *),
> >> >> >> + (int (*)(const void *, const void *))cmp_entries_fn, NULL);
> >> >> >> +
> >> >> >> + if (n_sort_keys > 1)
> >> >> >> + sort_secondary(map,
> >> >> >> + (const struct tracing_map_sort_entry **)entries,
> >> >> >> + n_entries,
> >> >> >> + &sort_keys[0],
> >> >> >> + &sort_keys[1]);
> >> >> >> +
> >> >> >> + *sort_entries = entries;
> >> >> >> +
> >> >> >> + return n_entries;
> >> >> >> + free:
> >> >> >> + tracing_map_destroy_sort_entries(entries, n_entries);
> >> >> >> +
> >> >> >> + return ret;
> >> >> >> +}
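
Putting the sorting API together, a hypothetical read-out using a
primary and a secondary sort key might look like this (sketch;
my_hits_idx, my_pid_idx and my_print_elt() are invented):

	struct tracing_map_sort_entry **entries;
	struct tracing_map_sort_key sort_keys[2];
	int i, n;

	sort_keys[0].field_idx = my_hits_idx;	/* primary: a sum field */
	sort_keys[0].descending = true;
	sort_keys[1].field_idx = my_pid_idx;	/* secondary: a key field */
	sort_keys[1].descending = false;

	n = tracing_map_sort_entries(map, sort_keys, 2, &entries);
	if (n <= 0)		/* error, or nothing in the map yet */
		return n;

	for (i = 0; i < n; i++)
		my_print_elt(entries[i]->elt);

	tracing_map_destroy_sort_entries(entries, n);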
> >> >> >> diff --git a/kernel/trace/tracing_map.h b/kernel/trace/tracing_map.h
> >> >> >> new file mode 100644
> >> >> >> index 0000000..2e63c5c
> >> >> >> --- /dev/null
> >> >> >> +++ b/kernel/trace/tracing_map.h
> >> >> >> @@ -0,0 +1,258 @@
> >> >> >> +#ifndef __TRACING_MAP_H
> >> >> >> +#define __TRACING_MAP_H
> >> >> >> +
> >> >> >> +#define TRACING_MAP_BITS_DEFAULT 11
> >> >> >> +#define TRACING_MAP_BITS_MAX 17
> >> >> >> +#define TRACING_MAP_BITS_MIN 7
> >> >> >> +
> >> >> >> +#define TRACING_MAP_FIELDS_MAX 4
> >> >> >> +#define TRACING_MAP_KEYS_MAX 2
> >> >> >> +
> >> >> >> +#define TRACING_MAP_SORT_KEYS_MAX 2
> >> >> >> +
> >> >> >> +typedef int (*tracing_map_cmp_fn_t) (void *val_a, void *val_b);
> >> >> >> +
> >> >> >> +/*
> >> >> >> + * This is an overview of the tracing_map data structures and how they
> >> >> >> + * relate to the tracing_map API. The details of the algorithms
> >> >> >> + * aren't discussed here - this is just a general overview of the data
> >> >> >> + * structures and how they interact with the API.
> >> >> >> + *
> >> >> >> + * The central data structure of the tracing_map is an initially
> >> >> >> + * zeroed array of struct tracing_map_entry (stored in the map field
> >> >> >> + * of struct tracing_map). tracing_map_entry is a very simple data
> >> >> >> + * structure containing only two fields: a 32-bit unsigned 'key'
> >> >> >> + * variable and a pointer named 'val'. This array of struct
> >> >> >> + * tracing_map_entry is essentially a hash table which will be
> >> >> >> + * modified by a single function, tracing_map_insert(), but which can
> >> >> >> + * be traversed and read by a user at any time (though the user does
> >> >> >> + * this indirectly via an array of tracing_map_sort_entry - see the
> >> >> >> + * explanation of that data structure in the discussion of the
> >> >> >> + * sorting-related data structures below).
> >> >> >> + *
> >> >> >> + * The central function of the tracing_map API is
> >> >> >> + * tracing_map_insert(). tracing_map_insert() hashes the
> >> >> >> + * arbitrarily-sized key passed into it into a 32-bit unsigned key.
> >> >> >> + * It then uses this key, truncated to the array size, as an index
> >> >> >> + * into the array of tracing_map_entries. If the value of the 'key'
> >> >> >> + * field of the tracing_map_entry found at that location is 0, then
> >> >> >> + * that entry is considered to be free and can be claimed, by
> >> >> >> + * replacing the 0 in the 'key' field of the tracing_map_entry with
> >> >> >> + * the new 32-bit hashed key. Once claimed, that tracing_map_entry's
> >> >> >> + * 'val' field is then used to store a unique element which will be
> >> >> >> + * forever associated with that 32-bit hashed key in the
> >> >> >> + * tracing_map_entry.
> >> >> >> + *
> >> >> >> + * That unique element now in the tracing_map_entry's 'val' field is
> >> >> >> + * an instance of tracing_map_elt, where 'elt' in the latter part of
> >> >> >> + * that variable name is short for 'element'. The purpose of a
> >> >> >> + * tracing_map_elt is to hold values specific to the particular
> >> >> >> + * 32-bit hashed key it's associated with. Things such as the unique
> >> >> >> + * set of aggregated sums associated with the 32-bit hashed key, along
> >> >> >> + * with a copy of the full key associated with the entry, and which
> >> >> >> + * was used to produce the 32-bit hashed key.
> >> >> >> + *
> >> >> >> + * When tracing_map_create() is called to create the tracing map, the
> >> >> >> + * user specifies (indirectly via the map_bits param, the details are
> >> >> >> + * unimportant for this discussion) the maximum number of elements
> >> >> >> + * that the map can hold (stored in the max_elts field of struct
> >> >> >> + * tracing_map). This is the maximum possible number of
> >> >> >> + * tracing_map_entries in the tracing_map_entry array which can be
> >> >> >> + * 'claimed' as described in the above discussion, and therefore is
> >> >> >> + * also the maximum number of tracing_map_elts that can be associated
> >> >> >> + * with the tracing_map_entry array in the tracing_map. Because of
> >> >> >> + * the way the insertion algorithm works, the size of the allocated
> >> >> >> + * tracing_map_entry array is always twice the maximum number of
> >> >> >> + * elements (2 * max_elts). This value is stored in the map_size
> >> >> >> + * field of struct tracing_map.
> >> >> >> + *
> >> >> >> + * Because tracing_map_insert() needs to work from any context,
> >> >> >> + * including from within the memory allocation functions themselves,
> >> >> >> + * both the tracing_map_entry array and a pool of max_elts
> >> >> >> + * tracing_map_elts are pre-allocated before any call is made to
> >> >> >> + * tracing_map_insert().
> >> >> >> + *
> >> >> >> + * The tracing_map_entry array is allocated as a single block by
> >> >> >> + * tracing_map_create().
> >> >> >> + *
> >> >> >> + * Because the tracing_map_elts are much larger objects and can't
> >> >> >> + * generally be allocated together as a single large array without
> >> >> >> + * failure, they're allocated individually, by tracing_map_init().
> >> >> >> + *
> >> >> >> + * The pool of tracing_map_elts are allocated by tracing_map_init()
> >> >> >> + * rather than by tracing_map_create() because at the time
> >> >> >> + * tracing_map_create() is called, there isn't enough information to
> >> >> >> + * create the tracing_map_elts. Specifically, the user first needs to
> >> >> >> + * tell the tracing_map implementation how many fields the
> >> >> >> + * tracing_map_elts contain, and which types of fields they are (key
> >> >> >> + * or sum). The user does this via the tracing_map_add_sum_field()
> >> >> >> + * and tracing_map_add_key_field() functions, following which the user
> >> >> >> + * calls tracing_map_init() to finish up the tracing map setup. The
> >> >> >> + * array holding the pointers which make up the pre-allocated pool of
> >> >> >> + * tracing_map_elts is allocated as a single block and is stored in
> >> >> >> + * the elts field of struct tracing_map.
> >> >> >> + *
> >> >> >> + * There is also a set of structures used for sorting that might
> >> >> >> + * benefit from some minimal explanation.
> >> >> >> + *
> >> >> >> + * struct tracing_map_sort_key is used to drive the sort at any given
> >> >> >> + * time. By 'any given time' we mean that a different
> >> >> >> + * tracing_map_sort_key will be used at different times depending on
> >> >> >> + * whether the sort currently being performed is a primary or a
> >> >> >> + * secondary sort.
> >> >> >> + *
> >> >> >> + * The sort key is very simple, consisting of the field index of the
> >> >> >> + * tracing_map_elt field to sort on (which the user saved when adding
> >> >> >> + * the field), and whether the sort should be done in an ascending or
> >> >> >> + * descending order.
> >> >> >> + *
> >> >> >> + * For the convenience of the sorting code, a tracing_map_sort_entry
> >> >> >> + * is created for each tracing_map_elt, again individually allocated
> >> >> >> + * to avoid failures that might be expected if allocated as a single
> >> >> >> + * large array of struct tracing_map_sort_entry.
> >> >> >> + * tracing_map_sort_entry instances are the objects expected by the
> >> >> >> + * various internal sorting functions, and are also what the user
> >> >> >> + * ultimately receives after calling tracing_map_sort_entries().
> >> >> >> + * Because it doesn't make sense for users to access an unordered and
> >> >> >> + * sparsely populated tracing_map directly, the
> >> >> >> + * tracing_map_sort_entries() function is provided so that users can
> >> >> >> + * retrieve a sorted list of all existing elements. In addition to
> >> >> >> + * the associated tracing_map_elt 'elt' field contained within the
> >> >> >> + * tracing_map_sort_entry, which is the object of interest to the
> >> >> >> + * user, tracing_map_sort_entry objects contain a number of additional
> >> >> >> + * fields which are used for caching and internal purposes and can
> >> >> >> + * safely be ignored.
> >> >> >> + */
> >> >> >> +
> >> >> >> +struct tracing_map_field {
> >> >> >> + tracing_map_cmp_fn_t cmp_fn;
> >> >> >> + union {
> >> >> >> + atomic64_t sum;
> >> >> >> + unsigned int offset;
> >> >> >> + };
> >> >> >> +};
> >> >> >> +
> >> >> >> +struct tracing_map_elt {
> >> >> >> + struct tracing_map *map;
> >> >> >> + struct tracing_map_field *fields;
> >> >> >> + void *key;
> >> >> >> + void *private_data;
> >> >> >> +};
> >> >> >> +
> >> >> >> +struct tracing_map_entry {
> >> >> >> + u32 key;
> >> >> >> + struct tracing_map_elt *val;
> >> >> >> +};
> >> >> >> +
> >> >> >> +struct tracing_map_sort_key {
> >> >> >> + unsigned int field_idx;
> >> >> >> + bool descending;
> >> >> >> +};
> >> >> >> +
> >> >> >> +struct tracing_map_sort_entry {
> >> >> >> + void *key;
> >> >> >> + struct tracing_map_elt *elt;
> >> >> >> + bool elt_copied;
> >> >> >> + bool dup;
> >> >> >> +};
> >> >> >> +
> >> >> >> +struct tracing_map {
> >> >> >> + unsigned int key_size;
> >> >> >> + unsigned int map_bits;
> >> >> >> + unsigned int map_size;
> >> >> >> + unsigned int max_elts;
> >> >> >> + atomic_t next_elt;
> >> >> >> + struct tracing_map_elt **elts;
> >> >> >> + struct tracing_map_entry *map;
> >> >> >> + struct tracing_map_ops *ops;
> >> >> >> + void *private_data;
> >> >> >> + struct tracing_map_field fields[TRACING_MAP_FIELDS_MAX];
> >> >> >> + unsigned int n_fields;
> >> >> >> + int key_idx[TRACING_MAP_KEYS_MAX];
> >> >> >> + unsigned int n_keys;
> >> >> >> + struct tracing_map_sort_key sort_key;
> >> >> >> +};
> >> >> >> +
> >> >> >> +/**
> >> >> >> + * struct tracing_map_ops - callbacks for tracing_map
> >> >> >> + *
> >> >> >> + * The methods in this structure define callback functions for various
> >> >> >> + * operations on a tracing_map or objects related to a tracing_map.
> >> >> >> + *
> >> >> >> + * For a detailed description of tracing_map_elt objects please see
> >> >> >> + * the overview of tracing_map data structures at the beginning of
> >> >> >> + * this file.
> >> >> >> + *
> >> >> >> + * All the methods below are optional.
> >> >> >> + *
> >> >> >> + * @elt_alloc: When a tracing_map_elt is allocated, this function, if
> >> >> >> + * defined, will be called and gives clients the opportunity to
> >> >> >> + * allocate additional data and attach it to the element
> >> >> >> + * (tracing_map_elt->private_data is meant for that purpose).
> >> >> >> + * Element allocation occurs before tracing begins, when the
> >> >> >> + * tracing_map_init() call is made by client code.
> >> >> >> + *
> >> >> >> + * @elt_copy: At certain points in the lifetime of an element, it may
> >> >> >> + * need to be copied. The copy should include a copy of the
> >> >> >> + * client-allocated data, which can be copied into the 'to'
> >> >> >> + * element from the 'from' element.
> >> >> >> + *
> >> >> >> + * @elt_free: When a tracing_map_elt is freed, this function is called
> >> >> >> + * and allows client-allocated per-element data to be freed.
> >> >> >> + *
> >> >> >> + * @elt_clear: This callback allows per-element client-defined data to
> >> >> >> + * be cleared, if applicable.
> >> >> >> + *
> >> >> >> + * @elt_init: This callback allows per-element client-defined data to
> >> >> >> + * be initialized when used i.e. when the element is actually
> >> >> >> + * claimed by tracing_map_insert() in the context of the map
> >> >> >> + * insertion.
> >> >> >> + */
> >> >> >> +struct tracing_map_ops {
> >> >> >> + int (*elt_alloc)(struct tracing_map_elt *elt);
> >> >> >> + void (*elt_copy)(struct tracing_map_elt *to,
> >> >> >> + struct tracing_map_elt *from);
> >> >> >> + void (*elt_free)(struct tracing_map_elt *elt);
> >> >> >> + void (*elt_clear)(struct tracing_map_elt *elt);
> >> >> >> + void (*elt_init)(struct tracing_map_elt *elt);
> >> >> >> +};
> >> >> >> +
> >> >> >> +extern struct tracing_map *tracing_map_create(unsigned int map_bits,
> >> >> >> + unsigned int key_size,
> >> >> >> + struct tracing_map_ops *ops,
> >> >> >> + void *private_data);
> >> >> >> +extern int tracing_map_init(struct tracing_map *map);
> >> >> >> +
> >> >> >> +extern int tracing_map_add_sum_field(struct tracing_map *map);
> >> >> >> +extern int tracing_map_add_key_field(struct tracing_map *map,
> >> >> >> + unsigned int offset,
> >> >> >> + tracing_map_cmp_fn_t cmp_fn);
> >> >> >> +
> >> >> >> +extern void tracing_map_destroy(struct tracing_map *map);
> >> >> >> +extern void tracing_map_clear(struct tracing_map *map);
> >> >> >> +
> >> >> >> +extern struct tracing_map_elt *
> >> >> >> +tracing_map_insert(struct tracing_map *map, void *key);
> >> >> >> +
> >> >> >> +extern tracing_map_cmp_fn_t tracing_map_cmp_num(int field_size,
> >> >> >> + int field_is_signed);
> >> >> >> +extern int tracing_map_cmp_string(void *val_a, void *val_b);
> >> >> >> +extern int tracing_map_cmp_none(void *val_a, void *val_b);
> >> >> >> +
> >> >> >> +extern void tracing_map_update_sum(struct tracing_map_elt *elt,
> >> >> >> + unsigned int i, u64 n);
> >> >> >> +extern u64 tracing_map_read_sum(struct tracing_map_elt *elt, unsigned int i);
> >> >> >> +extern void tracing_map_set_field_descr(struct tracing_map *map,
> >> >> >> + unsigned int i,
> >> >> >> + unsigned int key_offset,
> >> >> >> + tracing_map_cmp_fn_t cmp_fn);
> >> >> >> +extern int
> >> >> >> +tracing_map_sort_entries(struct tracing_map *map,
> >> >> >> + struct tracing_map_sort_key *sort_keys,
> >> >> >> + unsigned int n_sort_keys,
> >> >> >> + struct tracing_map_sort_entry ***sort_entries);
> >> >> >> +
> >> >> >> +extern void
> >> >> >> +tracing_map_destroy_sort_entries(struct tracing_map_sort_entry **entries,
> >> >> >> + unsigned int n_entries);
> >> >> >> +#endif /* __TRACING_MAP_H */
> >> >> >> --
> >> >> >> 1.9.3
> >> >> >>
> >> >> >
> >> >> > --
> >> >> > Mathieu Desnoyers
> >> >> > EfficiOS Inc.
> >> >> > http://www.efficios.com
>
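
To make the flow described in the comments above concrete, here is a
minimal, hypothetical sketch of the client-side call sequence these
declarations imply: setup, hot-path insert, and sorted read-out. The
example_* names are invented, NULL is passed for the optional
tracing_map_ops, and the return conventions assumed here (an ERR_PTR
from tracing_map_create(), an entry count from
tracing_map_sort_entries()) are inferred from the comments rather than
guaranteed by the patch:

	/* setup: 2^11 = 2048 slots, keyed on a single unsigned 64-bit value */
	static struct tracing_map *example_map_setup(int *sum_idx)
	{
		struct tracing_map *map;

		map = tracing_map_create(11, sizeof(u64), NULL, NULL);
		if (IS_ERR(map))
			return map;

		*sum_idx = tracing_map_add_sum_field(map);	/* e.g. a hitcount */
		tracing_map_add_key_field(map, 0,
					  tracing_map_cmp_num(sizeof(u64), 0));

		/* only now is enough known to pre-allocate the tracing_map_elts */
		if (tracing_map_init(map) < 0) {
			tracing_map_destroy(map);
			return ERR_PTR(-ENOMEM);
		}
		return map;
	}

	/* hot path: find or claim the element for this key, bump its sum */
	static void example_map_hit(struct tracing_map *map, u64 key, int sum_idx)
	{
		struct tracing_map_elt *elt = tracing_map_insert(map, &key);

		if (elt)	/* NULL here would count as a 'drop' */
			tracing_map_update_sum(elt, sum_idx, 1);
	}

	/* read-out: sorted snapshot of all existing elements, then release it */
	static void example_map_dump(struct tracing_map *map, int sum_idx)
	{
		struct tracing_map_sort_entry **entries;
		struct tracing_map_sort_key sort_key = {
			.field_idx	= sum_idx,
			.descending	= true,
		};
		int i, n_entries;

		n_entries = tracing_map_sort_entries(map, &sort_key, 1, &entries);
		if (n_entries < 0)
			return;

		for (i = 0; i < n_entries; i++)
			pr_info("key: %llu sum: %llu\n",
				*(u64 *)entries[i]->key,
				tracing_map_read_sum(entries[i]->elt, sum_idx));

		tracing_map_destroy_sort_entries(entries, n_entries);
	}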

2015-07-19 13:24:54

by Namhyung Kim

[permalink] [raw]
Subject: Re: [PATCH v9 14/22] tracing: Add hist trigger 'hex' modifier for displaying numeric fields

Hi Tom,

On Thu, Jul 16, 2015 at 12:22:47PM -0500, Tom Zanussi wrote:
> Allow users to have numeric fields displayed as hex values in the
> output by appending '.hex' to field names:
>
> # echo hist:keys=aaa,bbb.hex:vals=ccc.hex ... \
> [ if filter] > event/trigger
>
> Signed-off-by: Tom Zanussi <[email protected]>
> ---
> kernel/trace/trace.c | 5 +++-
> kernel/trace/trace_events_hist.c | 49 +++++++++++++++++++++++++++++++++++++---
> 2 files changed, 50 insertions(+), 4 deletions(-)
>
> diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
> index 27daa28..14f9472 100644
> --- a/kernel/trace/trace.c
> +++ b/kernel/trace/trace.c
> @@ -3810,7 +3810,10 @@ static const char readme_msg[] =
> "\t entry is a simple list of the keys and values comprising the\n"
> "\t entry; keys are printed first and are delineated by curly\n"
> "\t braces, and are followed by the set of value fields for the\n"
> - "\t entry. Numeric fields are displayed as base-10 integers.\n"
> + "\t entry. By default, numeric fields are displayed as base-10\n"
> + "\t integers. This can be modified by appending any of the\n"
> + "\t following modifiers to the field name:\n\n"
> + "\t .hex display a number as a hex value\n\n"
> "\t By default, the size of the hash table is 2048 entries. The\n"
> "\t 'size' param can be used to specify more or fewer than that.\n"
> "\t The units are in terms of hashtable entries - if a run uses\n"
> diff --git a/kernel/trace/trace_events_hist.c b/kernel/trace/trace_events_hist.c
> index d8259fe..9cc38ee 100644
> --- a/kernel/trace/trace_events_hist.c
> +++ b/kernel/trace/trace_events_hist.c
> @@ -72,6 +72,7 @@ enum hist_field_flags {
> HIST_FIELD_HITCOUNT = 1,
> HIST_FIELD_KEY = 2,
> HIST_FIELD_STRING = 4,
> + HIST_FIELD_HEX = 8,
> };
>
> struct hist_trigger_attrs {
> @@ -284,9 +285,20 @@ static int create_val_field(struct hist_trigger_data *hist_data,
> {
> struct ftrace_event_field *field = NULL;
> unsigned long flags = 0;
> + char *field_name;
> int ret = 0;
>
> - field = trace_find_event_field(file->event_call, field_str);
> + field_name = strsep(&field_str, ".");
> + if (field_str) {
> + if (!strcmp(field_str, "hex"))
> + flags |= HIST_FIELD_HEX;
> + else {
> + ret = -EINVAL;
> + goto out;
> + }
> + }
> +
> + field = trace_find_event_field(file->event_call, field_name);
> if (!field) {
> ret = -EINVAL;
> goto out;
> @@ -349,11 +361,22 @@ static int create_key_field(struct hist_trigger_data *hist_data,
> struct ftrace_event_field *field = NULL;
> unsigned long flags = 0;
> unsigned int key_size;
> + char *field_name;
> int ret = 0;
>
> flags |= HIST_FIELD_KEY;
>
> - field = trace_find_event_field(file->event_call, field_str);
> + field_name = strsep(&field_str, ".");
> + if (field_str) {
> + if (!strcmp(field_str, "hex"))
> + flags |= HIST_FIELD_HEX;
> + else {
> + ret = -EINVAL;
> + goto out;
> + }
> + }
> +
> + field = trace_find_event_field(file->event_call, field_name);
> if (!field) {
> ret = -EINVAL;
> goto out;
> @@ -688,7 +711,11 @@ hist_trigger_entry_print(struct seq_file *m,
> if (i > hist_data->n_vals)
> seq_puts(m, ", ");
>
> - if (key_field->flags & HIST_FIELD_STRING) {
> + if (key_field->flags & HIST_FIELD_HEX) {
> + uval = *(u64 *)(key + key_field->offset);
> + seq_printf(m, "%s: %llx",
> + key_field->field->name, uval);
> + } else if (key_field->flags & HIST_FIELD_STRING) {
> seq_printf(m, "%s: %-35s", key_field->field->name,
> (char *)(key + key_field->offset));
> } else {

It seems the '.hex' modifier only affects key fields' output..

Thanks,
Namhyung


> @@ -791,9 +818,25 @@ const struct file_operations event_hist_fops = {
> .release = single_release,
> };
>
> +static const char *get_hist_field_flags(struct hist_field *hist_field)
> +{
> + const char *flags_str = NULL;
> +
> + if (hist_field->flags & HIST_FIELD_HEX)
> + flags_str = "hex";
> +
> + return flags_str;
> +}
> +
> static void hist_field_print(struct seq_file *m, struct hist_field *hist_field)
> {
> seq_printf(m, "%s", hist_field->field->name);
> + if (hist_field->flags) {
> + const char *flags_str = get_hist_field_flags(hist_field);
> +
> + if (flags_str)
> + seq_printf(m, ".%s", flags_str);
> + }
> }
>
> static int event_hist_trigger_print(struct seq_file *m,
> --
> 1.9.3
>

2015-07-19 13:34:13

by Namhyung Kim

[permalink] [raw]
Subject: Re: [PATCH v9 20/22] tracing: Remove restriction on string position in hist trigger keys

On Thu, Jul 16, 2015 at 12:22:53PM -0500, Tom Zanussi wrote:
> If we assume the maximum size for a string field, we don't have to
> worry about its position. Since we only allow two keys in a compound
> key and having more than one string key in a given compound key
> doesn't make much sense anyway, trading a bit of extra space instead
> of introducing an arbitrary restriction makes more sense.
>
> We also need to use the event field size for static strings when
> copying the contents, otherwise we get random garbage in the key.
>
> Finally, rearrange the code without changing any functionality by
> moving the compound key updating code into a separate function.
>
> Signed-off-by: Tom Zanussi <[email protected]>

Looks good to me. Just a nitpick below..


> ---
> kernel/trace/trace_events_hist.c | 65 +++++++++++++++++++++++-----------------
> 1 file changed, 37 insertions(+), 28 deletions(-)
>
> diff --git a/kernel/trace/trace_events_hist.c b/kernel/trace/trace_events_hist.c
> index 67fffee..4ba7645 100644
> --- a/kernel/trace/trace_events_hist.c
> +++ b/kernel/trace/trace_events_hist.c
> @@ -508,8 +508,8 @@ static int create_key_field(struct hist_trigger_data *hist_data,
> goto out;
> }
>
> - if (is_string_field(field)) /* should be last key field */
> - key_size = HIST_KEY_SIZE_MAX - key_offset;
> + if (is_string_field(field))
> + key_size = MAX_FILTER_STR_VAL;
> else
> key_size = field->size;
> }
> @@ -781,9 +781,36 @@ static void hist_trigger_elt_update(struct hist_trigger_data *hist_data,
> }
> }
>
> +static inline void add_to_key(char *compound_key, void *key,
> + struct hist_field *key_field, void *rec)
> +{
> + size_t size = key_field->size;
> +
> + if (key_field->flags & HIST_FIELD_STRING) {
> + struct ftrace_event_field *field;
> +
> + /* ensure NULL-termination */
> + size--;

This is unnecessary since the size value will be updated below anyway.
I think it's enough just to move the comment to ...


> +
> + field = key_field->field;
> + if (field->filter_type == FILTER_DYN_STRING)
> + size = *(u32 *)(rec + field->offset) >> 16;
> + else if (field->filter_type == FILTER_PTR_STRING)
> + size = strlen(key);
> + else if (field->filter_type == FILTER_STATIC_STRING)
> + size = field->size;
> +

... here. :)

> + if (size > key_field->size - 1)
> + size = key_field->size - 1;
> + }
> +
> + memcpy(compound_key + key_field->offset, key, size);
> +}
> +
> static void event_hist_trigger(struct event_trigger_data *data, void *rec)
> {
> struct hist_trigger_data *hist_data = data->private_data;
> + bool use_compound_key = (hist_data->n_keys > 1);
> unsigned long entries[HIST_STACKTRACE_DEPTH];
> char compound_key[HIST_KEY_SIZE_MAX];
> struct stack_trace stacktrace;
> @@ -798,8 +825,7 @@ static void event_hist_trigger(struct event_trigger_data *data, void *rec)
> return;
> }
>
> - if (hist_data->n_keys > 1)
> - memset(compound_key, 0, hist_data->key_size);
> + memset(compound_key, 0, hist_data->key_size);
>
> for (i = hist_data->n_vals; i < hist_data->n_fields; i++) {
> key_field = hist_data->fields[i];
> @@ -816,35 +842,18 @@ static void event_hist_trigger(struct event_trigger_data *data, void *rec)
> key = entries;
> } else {
> field_contents = key_field->fn(key_field, rec);
> - if (key_field->flags & HIST_FIELD_STRING)
> + if (key_field->flags & HIST_FIELD_STRING) {
> key = (void *)field_contents;
> - else
> + use_compound_key = true;
> + } else
> key = (void *)&field_contents;
> -
> - if (hist_data->n_keys > 1) {
> - /* ensure NULL-termination */
> - size_t size = key_field->size - 1;
> -
> - if (key_field->flags & HIST_FIELD_STRING) {
> - struct ftrace_event_field *field;
> -
> - field = key_field->field;
> - if (field->filter_type == FILTER_DYN_STRING)
> - size = *(u32 *)(rec + field->offset) >> 16;
> - else if (field->filter_type == FILTER_PTR_STRING)
> - size = strlen(key);
> -
> - if (size > key_field->size - 1)
> - size = key_field->size - 1;
> - }
> -
> - memcpy(compound_key + key_field->offset, key,
> - size);
> - }
> }
> +
> + if (use_compound_key)
> + add_to_key(compound_key, key, key_field, rec);
> }
>
> - if (hist_data->n_keys > 1)
> + if (use_compound_key)
> key = compound_key;
>
> elt = tracing_map_insert(hist_data->map, key);
> --
> 1.9.3
>

Subject: Re: [PATCH v9 08/22] tracing: Add 'hist' event trigger command

Hi Tom,

On 2015/07/17 2:22, Tom Zanussi wrote:

> @@ -3782,6 +3785,32 @@ static const char readme_msg[] =
> "\t To remove a trigger with a count:\n"
> "\t echo '!<trigger>:0 > <system>/<event>/trigger\n"
> "\t Filters can be ignored when removing a trigger.\n"
> +#ifdef CONFIG_HIST_TRIGGERS
> + " hist trigger\t- If set, event hits are aggregated into a hash table\n"
> + "\t Format: hist:keys=<field1>\n"
> + "\t [:size=#entries]\n"
> + "\t [if <filter>]\n\n"
> + "\t When a matching event is hit, an entry is added to a hash\n"
> + "\t table using the key named. Keys correspond to fields in the\n"
> + "\t event's format description. On an event hit, the value of a\n"
> + "\t sum called 'hitcount' is incremented, which is simply a count\n"
> + "\t of event hits. Keys can be any field.\n\n"
> + "\t Reading the 'hist' file for the event will dump the hash\n"
> + "\t table in its entirety to stdout. Each printed hash table\n"
> + "\t entry is a simple list of the keys and values comprising the\n"
> + "\t entry; keys are printed first and are delineated by curly\n"
> + "\t braces, and are followed by the set of value fields for the\n"
> + "\t entry. Numeric fields are displayed as base-10 integers.\n"
> + "\t By default, the size of the hash table is 2048 entries. The\n"
> + "\t 'size' param can be used to specify more or fewer than that.\n"
> + "\t The units are in terms of hashtable entries - if a run uses\n"
> + "\t more entries than specified, the results will show the number\n"
> + "\t of 'drops', the number of hits that were ignored. The size\n"
> + "\t should be a power of 2 between 128 and 131072 (any non-\n"
> + "\t power-of-2 number specified will be rounded up).\n\n"
> + "\t The entries are sorted by 'hitcount' and the sort order is\n"
> + "\t 'ascending'.\n\n"


Hmm, this says too much about the implementation of the histogram. Could you
shorten it to about half this length?
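
For illustration only, a condensed version of roughly half the length
might read as follows (the wording is just a sketch, not a proposal from
the patch):

    "  hist trigger\t- If set, event hits are aggregated into a hash table\n"
    "\t    Format: hist:keys=<field1>[:size=#entries] [if <filter>]\n\n"
    "\t    On each hit, the named key (any event field) selects or adds\n"
    "\t    a table entry and its 'hitcount' sum is incremented. Reading\n"
    "\t    the event's 'hist' file dumps the table; keys are printed in\n"
    "\t    curly braces, followed by the value fields. 'size' (default\n"
    "\t    2048; a power of 2 from 128 to 131072) bounds the table, and\n"
    "\t    excess hits are reported as 'drops'. Entries are sorted by\n"
    "\t    ascending 'hitcount'.\n\n"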


Thank you,


--
Masami HIRAMATSU
Linux Technology Research Center, System Productivity Research Dept.
Center for Technology Innovation - Systems Engineering
Hitachi, Ltd., Research & Development Group
E-mail: [email protected]

Subject: Re: [PATCH v9 21/22] tracing: Add enable_hist/disable_hist triggers

On 2015/07/17 2:22, Tom Zanussi wrote:
> Similar to enable_event/disable_event triggers, these triggers enable
> and disable the aggregation of events into maps rather than enabling
> and disabling their writing into the trace buffer.
>
> They can be used to automatically start and stop hist triggers based
> on a matching filter condition.
>
> If there's a paused hist trigger on system:event, the following would
> start it when the filter condition was hit:
>
> # echo enable_hist:system:event [ if filter] > event/trigger
>
> And the following would disable a running system:event hist trigger:
>
> # echo disable_hist:system:event [ if filter] > event/trigger
>
> See Documentation/trace/events.txt for real examples.

Hmm, do we really need this? Since we already have multiple instances,
someone who wants the histogram separated from the event logger can make
another instance for it and enable/disable the event itself.

I'm concerned that if we accept this method, we'll also need to accept
another set of enable/disable triggers for each new action in the future.
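
For reference, the instance-based workflow described above might look
roughly like this (paths and event names illustrative):

  # create a separate tracing instance dedicated to the histogram
  mkdir /sys/kernel/debug/tracing/instances/hist

  # attach the hist trigger only in that instance
  echo 'hist:keys=parent_pid' > \
      /sys/kernel/debug/tracing/instances/hist/events/sched/sched_process_fork/trigger

  # start/stop the aggregation by enabling/disabling the event there,
  # leaving event logging in the main instance untouched
  echo 1 > /sys/kernel/debug/tracing/instances/hist/events/sched/sched_process_fork/enable
  echo 0 > /sys/kernel/debug/tracing/instances/hist/events/sched/sched_process_fork/enable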

Thank you,

>
> Signed-off-by: Tom Zanussi <[email protected]>
> ---
> include/linux/trace_events.h | 1 +
> kernel/trace/trace.c | 11 ++++
> kernel/trace/trace.h | 32 ++++++++++
> kernel/trace/trace_events_hist.c | 115 ++++++++++++++++++++++++++++++++++++
> kernel/trace/trace_events_trigger.c | 71 ++++++++++++----------
> 5 files changed, 199 insertions(+), 31 deletions(-)
>
> diff --git a/include/linux/trace_events.h b/include/linux/trace_events.h
> index 0faf48b..0f3ffdd 100644
> --- a/include/linux/trace_events.h
> +++ b/include/linux/trace_events.h
> @@ -411,6 +411,7 @@ enum event_trigger_type {
> ETT_STACKTRACE = (1 << 2),
> ETT_EVENT_ENABLE = (1 << 3),
> ETT_EVENT_HIST = (1 << 4),
> + ETT_HIST_ENABLE = (1 << 5),
> };
>
> extern int filter_match_preds(struct event_filter *filter, void *rec);
> diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
> index 16c64a2..c581750 100644
> --- a/kernel/trace/trace.c
> +++ b/kernel/trace/trace.c
> @@ -3761,6 +3761,10 @@ static const char readme_msg[] =
> "\t trigger: traceon, traceoff\n"
> "\t enable_event:<system>:<event>\n"
> "\t disable_event:<system>:<event>\n"
> +#ifdef CONFIG_HIST_TRIGGERS
> + "\t enable_hist:<system>:<event>\n"
> + "\t disable_hist:<system>:<event>\n"
> +#endif
> #ifdef CONFIG_STACKTRACE
> "\t\t stacktrace\n"
> #endif
> @@ -3836,6 +3840,13 @@ static const char readme_msg[] =
> "\t restart a paused hist trigger.\n\n"
> "\t The 'clear' param will clear the contents of a running hist\n"
> "\t trigger and leave its current paused/active state.\n\n"
> + "\t The enable_hist and disable_hist triggers can be used to\n"
> + "\t have one event conditionally start and stop another event's\n"
> + "\t already-attached hist trigger. Any number of enable_hist\n"
> + "\t and disable_hist triggers can be attached to a given event,\n"
> + "\t allowing that event to kick off and stop aggregations on\n"
> + "\t a host of other events. See Documentation/trace/events.txt\n"
> + "\t for examples.\n"
> #endif
> ;
>
> diff --git a/kernel/trace/trace.h b/kernel/trace/trace.h
> index e6cb781..5e2e3b0 100644
> --- a/kernel/trace/trace.h
> +++ b/kernel/trace/trace.h
> @@ -1102,8 +1102,10 @@ extern const struct file_operations event_hist_fops;
>
> #ifdef CONFIG_HIST_TRIGGERS
> extern int register_trigger_hist_cmd(void);
> +extern int register_trigger_hist_enable_disable_cmds(void);
> #else
> static inline int register_trigger_hist_cmd(void) { return 0; }
> +static inline int register_trigger_hist_enable_disable_cmds(void) { return 0; }
> #endif
>
> extern int register_trigger_cmds(void);
> @@ -1121,6 +1123,34 @@ struct event_trigger_data {
> struct list_head list;
> };
>
> +/* Avoid typos */
> +#define ENABLE_EVENT_STR "enable_event"
> +#define DISABLE_EVENT_STR "disable_event"
> +#define ENABLE_HIST_STR "enable_hist"
> +#define DISABLE_HIST_STR "disable_hist"
> +
> +struct enable_trigger_data {
> + struct trace_event_file *file;
> + bool enable;
> + bool hist;
> +};
> +
> +extern int event_enable_trigger_print(struct seq_file *m,
> + struct event_trigger_ops *ops,
> + struct event_trigger_data *data);
> +extern void event_enable_trigger_free(struct event_trigger_ops *ops,
> + struct event_trigger_data *data);
> +extern int event_enable_trigger_func(struct event_command *cmd_ops,
> + struct trace_event_file *file,
> + char *glob, char *cmd, char *param);
> +extern int event_enable_register_trigger(char *glob,
> + struct event_trigger_ops *ops,
> + struct event_trigger_data *data,
> + struct trace_event_file *file);
> +extern void event_enable_unregister_trigger(char *glob,
> + struct event_trigger_ops *ops,
> + struct event_trigger_data *test,
> + struct trace_event_file *file);
> extern void trigger_data_free(struct event_trigger_data *data);
> extern int event_trigger_init(struct event_trigger_ops *ops,
> struct event_trigger_data *data);
> @@ -1134,6 +1164,8 @@ extern int set_trigger_filter(char *filter_str,
> struct event_trigger_data *trigger_data,
> struct trace_event_file *file);
> extern int register_event_command(struct event_command *cmd);
> +extern int unregister_event_command(struct event_command *cmd);
> +extern int register_trigger_hist_enable_disable_cmds(void);
>
> /**
> * struct event_trigger_ops - callbacks for trace event triggers
> diff --git a/kernel/trace/trace_events_hist.c b/kernel/trace/trace_events_hist.c
> index 4ba7645..6a43611 100644
> --- a/kernel/trace/trace_events_hist.c
> +++ b/kernel/trace/trace_events_hist.c
> @@ -1345,3 +1345,118 @@ __init int register_trigger_hist_cmd(void)
>
> return ret;
> }
> +
> +static void
> +hist_enable_trigger(struct event_trigger_data *data, void *rec)
> +{
> + struct enable_trigger_data *enable_data = data->private_data;
> + struct event_trigger_data *test;
> +
> + list_for_each_entry_rcu(test, &enable_data->file->triggers, list) {
> + if (test->cmd_ops->trigger_type == ETT_EVENT_HIST) {
> + if (enable_data->enable)
> + test->paused = false;
> + else
> + test->paused = true;
> + break;
> + }
> + }
> +}
> +
> +static void
> +hist_enable_count_trigger(struct event_trigger_data *data, void *rec)
> +{
> + if (!data->count)
> + return;
> +
> + if (data->count != -1)
> + (data->count)--;
> +
> + hist_enable_trigger(data, rec);
> +}
> +
> +static struct event_trigger_ops hist_enable_trigger_ops = {
> + .func = hist_enable_trigger,
> + .print = event_enable_trigger_print,
> + .init = event_trigger_init,
> + .free = event_enable_trigger_free,
> +};
> +
> +static struct event_trigger_ops hist_enable_count_trigger_ops = {
> + .func = hist_enable_count_trigger,
> + .print = event_enable_trigger_print,
> + .init = event_trigger_init,
> + .free = event_enable_trigger_free,
> +};
> +
> +static struct event_trigger_ops hist_disable_trigger_ops = {
> + .func = hist_enable_trigger,
> + .print = event_enable_trigger_print,
> + .init = event_trigger_init,
> + .free = event_enable_trigger_free,
> +};
> +
> +static struct event_trigger_ops hist_disable_count_trigger_ops = {
> + .func = hist_enable_count_trigger,
> + .print = event_enable_trigger_print,
> + .init = event_trigger_init,
> + .free = event_enable_trigger_free,
> +};
> +
> +static struct event_trigger_ops *
> +hist_enable_get_trigger_ops(char *cmd, char *param)
> +{
> + struct event_trigger_ops *ops;
> + bool enable;
> +
> + enable = (strcmp(cmd, ENABLE_HIST_STR) == 0);
> +
> + if (enable)
> + ops = param ? &hist_enable_count_trigger_ops :
> + &hist_enable_trigger_ops;
> + else
> + ops = param ? &hist_disable_count_trigger_ops :
> + &hist_disable_trigger_ops;
> +
> + return ops;
> +}
> +
> +static struct event_command trigger_hist_enable_cmd = {
> + .name = ENABLE_HIST_STR,
> + .trigger_type = ETT_HIST_ENABLE,
> + .func = event_enable_trigger_func,
> + .reg = event_enable_register_trigger,
> + .unreg = event_enable_unregister_trigger,
> + .get_trigger_ops = hist_enable_get_trigger_ops,
> + .set_filter = set_trigger_filter,
> +};
> +
> +static struct event_command trigger_hist_disable_cmd = {
> + .name = DISABLE_HIST_STR,
> + .trigger_type = ETT_HIST_ENABLE,
> + .func = event_enable_trigger_func,
> + .reg = event_enable_register_trigger,
> + .unreg = event_enable_unregister_trigger,
> + .get_trigger_ops = hist_enable_get_trigger_ops,
> + .set_filter = set_trigger_filter,
> +};
> +
> +static __init void unregister_trigger_hist_enable_disable_cmds(void)
> +{
> + unregister_event_command(&trigger_hist_enable_cmd);
> + unregister_event_command(&trigger_hist_disable_cmd);
> +}
> +
> +__init int register_trigger_hist_enable_disable_cmds(void)
> +{
> + int ret;
> +
> + ret = register_event_command(&trigger_hist_enable_cmd);
> + if (WARN_ON(ret < 0))
> + return ret;
> + ret = register_event_command(&trigger_hist_disable_cmd);
> + if (WARN_ON(ret < 0))
> + unregister_trigger_hist_enable_disable_cmds();
> +
> + return ret;
> +}
> diff --git a/kernel/trace/trace_events_trigger.c b/kernel/trace/trace_events_trigger.c
> index e80f30b..9490d8f 100644
> --- a/kernel/trace/trace_events_trigger.c
> +++ b/kernel/trace/trace_events_trigger.c
> @@ -338,7 +338,7 @@ __init int register_event_command(struct event_command *cmd)
> * Currently we only unregister event commands from __init, so mark
> * this __init too.
> */
> -static __init int unregister_event_command(struct event_command *cmd)
> +__init int unregister_event_command(struct event_command *cmd)
> {
> struct event_command *p, *n;
> int ret = -ENODEV;
> @@ -1052,15 +1052,6 @@ static __init void unregister_trigger_traceon_traceoff_cmds(void)
> unregister_event_command(&trigger_traceoff_cmd);
> }
>
> -/* Avoid typos */
> -#define ENABLE_EVENT_STR "enable_event"
> -#define DISABLE_EVENT_STR "disable_event"
> -
> -struct enable_trigger_data {
> - struct trace_event_file *file;
> - bool enable;
> -};
> -
> static void
> event_enable_trigger(struct event_trigger_data *data, void *rec)
> {
> @@ -1090,14 +1081,16 @@ event_enable_count_trigger(struct event_trigger_data *data, void *rec)
> event_enable_trigger(data, rec);
> }
>
> -static int
> -event_enable_trigger_print(struct seq_file *m, struct event_trigger_ops *ops,
> - struct event_trigger_data *data)
> +int event_enable_trigger_print(struct seq_file *m,
> + struct event_trigger_ops *ops,
> + struct event_trigger_data *data)
> {
> struct enable_trigger_data *enable_data = data->private_data;
>
> seq_printf(m, "%s:%s:%s",
> - enable_data->enable ? ENABLE_EVENT_STR : DISABLE_EVENT_STR,
> + enable_data->hist ?
> + (enable_data->enable ? ENABLE_HIST_STR : DISABLE_HIST_STR) :
> + (enable_data->enable ? ENABLE_EVENT_STR : DISABLE_EVENT_STR),
> enable_data->file->event_call->class->system,
> trace_event_name(enable_data->file->event_call));
>
> @@ -1114,9 +1107,8 @@ event_enable_trigger_print(struct seq_file *m, struct event_trigger_ops *ops,
> return 0;
> }
>
> -static void
> -event_enable_trigger_free(struct event_trigger_ops *ops,
> - struct event_trigger_data *data)
> +void event_enable_trigger_free(struct event_trigger_ops *ops,
> + struct event_trigger_data *data)
> {
> struct enable_trigger_data *enable_data = data->private_data;
>
> @@ -1161,10 +1153,9 @@ static struct event_trigger_ops event_disable_count_trigger_ops = {
> .free = event_enable_trigger_free,
> };
>
> -static int
> -event_enable_trigger_func(struct event_command *cmd_ops,
> - struct trace_event_file *file,
> - char *glob, char *cmd, char *param)
> +int event_enable_trigger_func(struct event_command *cmd_ops,
> + struct trace_event_file *file,
> + char *glob, char *cmd, char *param)
> {
> struct trace_event_file *event_enable_file;
> struct enable_trigger_data *enable_data;
> @@ -1173,6 +1164,7 @@ event_enable_trigger_func(struct event_command *cmd_ops,
> struct trace_array *tr = file->tr;
> const char *system;
> const char *event;
> + bool hist = false;
> char *trigger;
> char *number;
> bool enable;
> @@ -1197,8 +1189,15 @@ event_enable_trigger_func(struct event_command *cmd_ops,
> if (!event_enable_file)
> goto out;
>
> - enable = strcmp(cmd, ENABLE_EVENT_STR) == 0;
> +#ifdef CONFIG_HIST_TRIGGERS
> + hist = ((strcmp(cmd, ENABLE_HIST_STR) == 0) ||
> + (strcmp(cmd, DISABLE_HIST_STR) == 0));
>
> + enable = ((strcmp(cmd, ENABLE_EVENT_STR) == 0) ||
> + (strcmp(cmd, ENABLE_HIST_STR) == 0));
> +#else
> + enable = strcmp(cmd, ENABLE_EVENT_STR) == 0;
> +#endif
> trigger_ops = cmd_ops->get_trigger_ops(cmd, trigger);
>
> ret = -ENOMEM;
> @@ -1218,6 +1217,7 @@ event_enable_trigger_func(struct event_command *cmd_ops,
> INIT_LIST_HEAD(&trigger_data->list);
> RCU_INIT_POINTER(trigger_data->filter, NULL);
>
> + enable_data->hist = hist;
> enable_data->enable = enable;
> enable_data->file = event_enable_file;
> trigger_data->private_data = enable_data;
> @@ -1295,10 +1295,10 @@ event_enable_trigger_func(struct event_command *cmd_ops,
> goto out;
> }
>
> -static int event_enable_register_trigger(char *glob,
> - struct event_trigger_ops *ops,
> - struct event_trigger_data *data,
> - struct trace_event_file *file)
> +int event_enable_register_trigger(char *glob,
> + struct event_trigger_ops *ops,
> + struct event_trigger_data *data,
> + struct trace_event_file *file)
> {
> struct enable_trigger_data *enable_data = data->private_data;
> struct enable_trigger_data *test_enable_data;
> @@ -1308,6 +1308,8 @@ static int event_enable_register_trigger(char *glob,
> list_for_each_entry_rcu(test, &file->triggers, list) {
> test_enable_data = test->private_data;
> if (test_enable_data &&
> + (test->cmd_ops->trigger_type ==
> + data->cmd_ops->trigger_type) &&
> (test_enable_data->file == enable_data->file)) {
> ret = -EEXIST;
> goto out;
> @@ -1333,10 +1335,10 @@ out:
> return ret;
> }
>
> -static void event_enable_unregister_trigger(char *glob,
> - struct event_trigger_ops *ops,
> - struct event_trigger_data *test,
> - struct trace_event_file *file)
> +void event_enable_unregister_trigger(char *glob,
> + struct event_trigger_ops *ops,
> + struct event_trigger_data *test,
> + struct trace_event_file *file)
> {
> struct enable_trigger_data *test_enable_data = test->private_data;
> struct enable_trigger_data *enable_data;
> @@ -1346,6 +1348,8 @@ static void event_enable_unregister_trigger(char *glob,
> list_for_each_entry_rcu(data, &file->triggers, list) {
> enable_data = data->private_data;
> if (enable_data &&
> + (data->cmd_ops->trigger_type ==
> + test->cmd_ops->trigger_type) &&
> (enable_data->file == test_enable_data->file)) {
> unregistered = true;
> list_del_rcu(&data->list);
> @@ -1365,8 +1369,12 @@ event_enable_get_trigger_ops(char *cmd, char *param)
> struct event_trigger_ops *ops;
> bool enable;
>
> +#ifdef CONFIG_HIST_TRIGGERS
> + enable = ((strcmp(cmd, ENABLE_EVENT_STR) == 0) ||
> + (strcmp(cmd, ENABLE_HIST_STR) == 0));
> +#else
> enable = strcmp(cmd, ENABLE_EVENT_STR) == 0;
> -
> +#endif
> if (enable)
> ops = param ? &event_enable_count_trigger_ops :
> &event_enable_trigger_ops;
> @@ -1437,6 +1445,7 @@ __init int register_trigger_cmds(void)
> register_trigger_snapshot_cmd();
> register_trigger_stacktrace_cmd();
> register_trigger_enable_disable_cmds();
> + register_trigger_hist_enable_disable_cmds();
> register_trigger_hist_cmd();
>
> return 0;
>


--
Masami HIRAMATSU
Linux Technology Research Center, System Productivity Research Dept.
Center for Technology Innovation - Systems Engineering
Hitachi, Ltd., Research & Development Group
E-mail: [email protected]

Subject: Re: [PATCH v9 00/22] tracing: 'hist' triggers

Hi Tom,

Thank you for updating your patches; I'm testing them.

While testing the hist trigger, I've found that the '.hex' modifier on a
value field doesn't work, yet produces no semantic error.

[root@localhost tracing]# echo 'hist:keys=parent_pid:vals=common_pid.hex' > events/sched/sched_process_fork/trigger
[root@localhost tracing]#

[root@localhost tracing]# cat events/sched/sched_process_fork/hist
# trigger info: hist:keys=parent_pid:vals=hitcount,common_pid.hex:sort=hitcount:size=2048 [active]

{ parent_pid: 26582 } hitcount: 1 common_pid: 26582
{ parent_pid: 11968 } hitcount: 1 common_pid: 11968
{ parent_pid: 11956 } hitcount: 2 common_pid: 23912

Totals:
Hits: 4
Entries: 3
Dropped: 0

Other unsupported modifiers, by contrast, return -EINVAL:

[root@localhost tracing]# echo 'hist:keys=parent_pid:vals=common_pid.execname' > events/sched/sched_process_fork/trigger
-bash: echo: write error: Invalid argument

Thank you,

On 2015/07/17 2:22, Tom Zanussi wrote:
> This is v9 of the 'hist triggers' patchset.
>
> Changes from v8:
>
> Same as v8, but with the RFC patch [ftrace: Add function_hist tracer]
> removed, and rebased to latest trace/for-next.
>
> Changes from v7:
>
> This version refactors the commits as suggested by Masami. There are
> now more commits, but the result should be much more reviewable. The
> ending code is the same as before, modulo a couple minor bug fixes I
> discovered while refactoring and testing.
>
> I've also reviewed and fixed a number of shortcomings and errors in
> the comments, and have added a new discussion of the tracing_map data
> structures after Steve mentioned he found them confusing and/or
> insufficiently documented.
>
> Also, I kept Namhyung's string patch [tracing: Support string type key
> properly] as submitted, but added a follow-on patch that refactors it
> and fixes a problem I found with it that enabled static string keys to
> contain random chars and therefore incorrect map insertions.
>
> Changes from v6:
>
> This version adds a new 'sym-offset' modifier as requested by Masami.
> I implemented it as a modifier rather than using the trace option as
> suggested, in part because I wanted to keep it all self-contained and
> it seemed more consistent to just add it alongside the 'sym' modifier.
> Also, hist triggers aren't really a tracer and therefore don't
> directly tie into the option update/callback mechanism so making use
> of it isn't as simple as a normal tracer.
>
> I also changed the sort key specification to be stricter and signal an
> error if the specified sort key wasn't found (rather than defaulting
> to hitcount in those cases), also suggested by Masami. Thanks,
> Masami, for your input!
>
> Also updated the Documentation and tracing/README to reflect the
> changes.
>
> Changes from v5:
>
> This version adds support for compound keys, along with the related
> ability to sort using primary and secondary keys. This was mentioned
> in previous versions as the last important piece that remained
> unimplemented, and is now implemented. (I didn't have time to get to
> the couple of enhancements suggested by Masami, but I expect to be
> able to add those later on top of these.)
>
> Because we now support compound keys and it's not immediately clear in
> the output exactly which fields correspond to keys, the key(s),
> compound or not, are now enclosed by curly braces.
>
> The Documentation and README have been updated to reflect the changes,
> and several new examples have been added to illustrate how to use
> compound keys.
>
> Also, the code was updated to work with the new ftrace_event_file,
> etc, renaming in tracing/for-next.
>
> Changes from v4:
>
> This version addresses some problems and suggestions made by Daniel
> Wagner - a lot of the code was reworked to get rid of the distinction
> between keys and values, and as a result, both keys and values can be
> used as sort keys. As suggested, it also allows 'val=' to be absent
> in a trigger command - if no 'val' is specified, hitcount is assumed
> and automatically used as the only val.
>
> The map code was also separated out into a separate file,
> tracing_map.c, allowing it to be reused. It also adds a second tracer
> called function_hist that actually does reuse the code, as an RFC
> patch.
>
> Patch 01/10 [tracing: Update cond flag when enabling or disabling..]
> is a fix for a problem noticed by Daniel and that fixes a problem in
> existing trigger code and should be applied regardless of whether the
> rest of the patchset is merged.
>
> As mentioned, patch 10/10 is an RFC patch implementing a new tracer
> based on the function tracer code. It's a fun little tool and is
> useful for a specific problem I'm working on (and is also a nice test
> of the tracing_map code), but is an RFC because first, I'm not sure it
> would really be of general interest and secondly, it's POC-level
> quality and I'd need to spend more time fixing it up to make it
> upstreamable, but I don't want to waste my time if not.
>
> There are a couple of important bits of functionality that were
> present in v1 but not yet reimplemented in v5.
>
> The first is support for compound keys. Currently, maps can only be
> keyed on a single event field, whereas in v1 they could be keyed on
> multiple keys. With support for compound keys, you can create much
> more interesting output, such as for example per-pid lists of
> syscalls or read counts e.g.:
>
> # echo 'hist:keys=common_pid.execname,id.syscall:vals=hitcount' > \
> /sys/kernel/debug/tracing/events/raw_syscalls/sys_enter/trigger
>
> # cat /sys/kernel/debug/tracing/events/raw_syscalls/sys_enter/hist
>
> key: common_pid:bash[3112], id:sys_write vals: count:69
> key: common_pid:bash[3112], id:sys_rt_sigprocmask vals: count:218
>
> key: common_pid:update-notifier[3164], id:sys_poll vals: count:37
> key: common_pid:update-notifier[3164], id:sys_recvfrom vals: count:118
>
> key: common_pid:deja-dup-monito[3194], id:sys_sendto vals: count:1
> key: common_pid:deja-dup-monito[3194], id:sys_read vals: count:4
> key: common_pid:deja-dup-monito[3194], id:sys_poll vals: count:8
> key: common_pid:deja-dup-monito[3194], id:sys_recvmsg vals: count:8
> key: common_pid:deja-dup-monito[3194], id:sys_getegid vals: count:8
>
> key: common_pid:emacs[3275], id:sys_fsync vals: count:1
> key: common_pid:emacs[3275], id:sys_open vals: count:1
> key: common_pid:emacs[3275], id:sys_symlink vals: count:2
> key: common_pid:emacs[3275], id:sys_poll vals: count:23
> key: common_pid:emacs[3275], id:sys_select vals: count:23
> key: common_pid:emacs[3275], id:unknown_syscall vals: count:34
> key: common_pid:emacs[3275], id:sys_ioctl vals: count:60
> key: common_pid:emacs[3275], id:sys_rt_sigprocmask vals: count:116
>
> key: common_pid:cat[3323], id:sys_munmap vals: count:1
> key: common_pid:cat[3323], id:sys_fadvise64 vals: count:1
>
> Related to that is support for sorting on multiple fields. Currently,
> you can sort using only a primary key. Being able to sort on multiple
> or at least a secondary key is indispensable for seeing trends when
> displaying multiple values.
>
> Changes from v3:
>
> v4 fixes the race in tracing_map_insert() noted in v3, where
> map.val.key could be checked even if map.val wasn't yet set. The
> simple fix for that in tracing_map_insert() introduces the possibility
> of duplicates in the map, which though rare, need to be accounted for
> in the output. To address that, duplicate-merging code was added to
> the map-printing code.
>
> It was also pointed out that it didn't seem correct to include
> module.h, but the fix for that has deeper roots and is being addressed
> by a separate patchset; for now we need to continue including
> module.h, though prompted by that I did some other header include
> cleanup.
>
> The functionality remains the same as v2, but this version no longer
> tries to export and use bpf_maps, and more importantly removes the
> associated GFP_NOTRACE/trace event hacks and kmem macros required to
> work around the bpf_map implementation.
>
> The tracing_map functionality is instead built on top of a simple
> lock-free map algorithm originated by Dr. Cliff Click (see references
> in the code for more details), which though too restrictive to be
> general-purpose in its current form, functions nicely as a
> special-purpose tracing map.
>
> v3 also moves the hist triggers code into a separate file and puts it
> all behind a new config option, CONFIG_HIST_TRIGGERS. It also merges
> in the sorting code rather than keeping it as a separate patch.
>
> This patchset also includes a couple other new and related triggers,
> enable_hist and disable_hist, very similar to the existing
> enable_event/disable_event triggers used to automatically enable and
> disable events based on a triggering condition, but in this case
> allowing hist triggers to be enabled and disabled in the same way.
>
> - Added an insert check for val before checking the key associated with val
> - Added code to merge possible duplicates in the map
>
> Changes from v2:
> - reimplemented tracing_map, replacing bpf_map with nmi-safe/lock-free map
> - removed GFP_NOTRACE, kmalloc/free macros and event hacks needed by bpf_maps
> - moved hist triggers from trace_events_trigger.c to trace_events_hist.c
> - added CONFIG_HIST_TRIGGERS config option
> - consolidated sorting code with main patch
>
> Changes from v1:
> - completely rewritten on top of tracing_map (renamed and exported bpf_map)
> - added map clearing and client ops to tracing_map
> - changed the name from 'hash' triggers to 'hist' triggers
> - added new trigger 'pause' feature
> - added new enable_hist and disable_hist triggers
> - added usage for hist/enable_hist/disable_hist to tracing/README
> - moved examples into Documentation/trace/events.txt
> - added ___GFP_NOTRACE, kmalloc/kfree macros, and conditional kmem tracepoints
>
> The following changes since commit b44754d8262d3aab842998cf747f44fe6090be9f:
>
> ring_buffer: Allow to exit the ring buffer benchmark immediately (2015-06-15 12:03:12 -0400)
>
> are available in the git repository at:
>
> git://git.yoctoproject.org/linux-yocto-contrib.git tzanussi/hist-triggers-v9
> http://git.yoctoproject.org/cgit/cgit.cgi/linux-yocto-contrib/log/?h=tzanussi/hist-triggers-v9
>
> Namhyung Kim (1):
> tracing: Support string type key properly
>
> Tom Zanussi (21):
> tracing: Update cond flag when enabling or disabling a trigger
> tracing: Make ftrace_event_field checking functions available
> tracing: Make event trigger functions available
> tracing: Add event record param to trigger_ops.func()
> tracing: Add get_syscall_name()
> tracing: Add a per-event-trigger 'paused' field
> tracing: Add lock-free tracing_map
> tracing: Add 'hist' event trigger command
> tracing: Add hist trigger support for multiple values ('vals=' param)
> tracing: Add hist trigger support for compound keys
> tracing: Add hist trigger support for user-defined sorting ('sort='
> param)
> tracing: Add hist trigger support for pausing and continuing a trace
> tracing: Add hist trigger support for clearing a trace
> tracing: Add hist trigger 'hex' modifier for displaying numeric fields
> tracing: Add hist trigger 'sym' and 'sym-offset' modifiers
> tracing: Add hist trigger 'execname' modifier
> tracing: Add hist trigger 'syscall' modifier
> tracing: Add hist trigger support for stacktraces as keys
> tracing: Remove restriction on string position in hist trigger keys
> tracing: Add enable_hist/disable_hist triggers
> tracing: Add 'hist' trigger Documentation
>
> Documentation/trace/events.txt | 1131 +++++++++++++++++++++++++++
> include/linux/trace_events.h | 9 +-
> kernel/trace/Kconfig | 14 +
> kernel/trace/Makefile | 2 +
> kernel/trace/trace.c | 66 ++
> kernel/trace/trace.h | 77 +-
> kernel/trace/trace_events.c | 4 +
> kernel/trace/trace_events_filter.c | 12 -
> kernel/trace/trace_events_hist.c | 1462 +++++++++++++++++++++++++++++++++++
> kernel/trace/trace_events_trigger.c | 149 ++--
> kernel/trace/trace_syscalls.c | 11 +
> kernel/trace/tracing_map.c | 935 ++++++++++++++++++++++
> kernel/trace/tracing_map.h | 258 +++++++
> 13 files changed, 4046 insertions(+), 84 deletions(-)
> create mode 100644 kernel/trace/trace_events_hist.c
> create mode 100644 kernel/trace/tracing_map.c
> create mode 100644 kernel/trace/tracing_map.h
>


--
Masami HIRAMATSU
Linux Technology Research Center, System Productivity Research Dept.
Center for Technology Innovation - Systems Engineering
Hitachi, Ltd., Research & Development Group
E-mail: [email protected]
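
As an aside for readers of the quoted changelog above: here is a
hypothetical sketch of the lock-free insert and of the duplicate-producing
race it mentions (a Cliff Click-style open-addressed probe that claims
slots with cmpxchg). Details are simplified, get_free_elt() stands in for
handing out an element from the pre-allocated pool, and this is not the
actual tracing_map_insert():

	static struct tracing_map_elt *sketch_insert(struct tracing_map *map,
						     void *key)
	{
		u32 key_hash = jhash(key, map->key_size, 0);
		u32 idx = key_hash;

		while (1) {
			struct tracing_map_entry *entry;

			idx &= (map->map_size - 1);
			entry = &map->map[idx];

			/* hit: slot claimed by this key and fully initialized */
			if (entry->key == key_hash && entry->val &&
			    !memcmp(entry->val->key, key, map->key_size))
				return entry->val;

			/* miss: try to claim an empty slot; only one CPU wins */
			if (!entry->key && !cmpxchg(&entry->key, 0, key_hash)) {
				struct tracing_map_elt *elt = get_free_elt(map);

				if (!elt)
					return NULL;	/* pool exhausted: a 'drop' */
				memcpy(elt->key, key, map->key_size);
				entry->val = elt;	/* becomes visible last */
				return entry->val;
			}

			/*
			 * Either the slot belongs to another key, or another
			 * CPU claimed it but hasn't set ->val yet. Rather
			 * than spin on ->val, probe onward; this is what can
			 * produce the rare duplicates that the map-printing
			 * code merges at output time.
			 */
			idx++;
		}
	}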

Subject: Re: [PATCH v9 02/22] tracing: Make ftrace_event_field checking functions available

On 2015/07/17 2:22, Tom Zanussi wrote:
> Make is_string_field() and is_function_field() accessible outside of
> trace_event_filters.c for other users of ftrace_event_fields.
>
> Signed-off-by: Tom Zanussi <[email protected]>

Reviewed-by: Masami Hiramatsu <[email protected]>

BTW, is there any reason why this was split from the caller-side change?
This short change could be merged into the patch that actually requires it.

Thanks,

> ---
> kernel/trace/trace.h | 12 ++++++++++++
> kernel/trace/trace_events_filter.c | 12 ------------
> 2 files changed, 12 insertions(+), 12 deletions(-)
>
> diff --git a/kernel/trace/trace.h b/kernel/trace/trace.h
> index 4c41fcd..891c5b0 100644
> --- a/kernel/trace/trace.h
> +++ b/kernel/trace/trace.h
> @@ -1050,6 +1050,18 @@ struct filter_pred {
> unsigned short right;
> };
>
> +static inline bool is_string_field(struct ftrace_event_field *field)
> +{
> + return field->filter_type == FILTER_DYN_STRING ||
> + field->filter_type == FILTER_STATIC_STRING ||
> + field->filter_type == FILTER_PTR_STRING;
> +}
> +
> +static inline bool is_function_field(struct ftrace_event_field *field)
> +{
> + return field->filter_type == FILTER_TRACE_FN;
> +}
> +
> extern enum regex_type
> filter_parse_regex(char *buff, int len, char **search, int *not);
> extern void print_event_filter(struct trace_event_file *file,
> diff --git a/kernel/trace/trace_events_filter.c b/kernel/trace/trace_events_filter.c
> index 71511eb..245ee5d 100644
> --- a/kernel/trace/trace_events_filter.c
> +++ b/kernel/trace/trace_events_filter.c
> @@ -917,18 +917,6 @@ int filter_assign_type(const char *type)
> return FILTER_OTHER;
> }
>
> -static bool is_function_field(struct ftrace_event_field *field)
> -{
> - return field->filter_type == FILTER_TRACE_FN;
> -}
> -
> -static bool is_string_field(struct ftrace_event_field *field)
> -{
> - return field->filter_type == FILTER_DYN_STRING ||
> - field->filter_type == FILTER_STATIC_STRING ||
> - field->filter_type == FILTER_PTR_STRING;
> -}
> -
> static int is_legal_op(struct ftrace_event_field *field, int op)
> {
> if (is_string_field(field) &&
>


--
Masami HIRAMATSU
Linux Technology Research Center, System Productivity Research Dept.
Center for Technology Innovation - Systems Engineering
Hitachi, Ltd., Research & Development Group
E-mail: [email protected]

Subject: Re: [PATCH v9 01/22] tracing: Update cond flag when enabling or disabling a trigger

On 2015/07/17 2:22, Tom Zanussi wrote:
> When a trigger is enabled, the cond flag should be set beforehand,
> otherwise a trigger that's expecting to process a trace record
> (e.g. one with post_trigger set) could be invoked without one.
>
> Likewise a trigger's cond flag should be reset after it's disabled,
> not before.
>
> Signed-off-by: Tom Zanussi <[email protected]>
> Signed-off-by: Daniel Wagner <[email protected]>

Looks good to me :)

Reviewed-by: Masami Hiramatsu <[email protected]>

Thanks,

> ---
> kernel/trace/trace_events_trigger.c | 10 ++++++----
> 1 file changed, 6 insertions(+), 4 deletions(-)
>
> diff --git a/kernel/trace/trace_events_trigger.c b/kernel/trace/trace_events_trigger.c
> index 42a4009..4d2f3cc 100644
> --- a/kernel/trace/trace_events_trigger.c
> +++ b/kernel/trace/trace_events_trigger.c
> @@ -543,11 +543,12 @@ static int register_trigger(char *glob, struct event_trigger_ops *ops,
> list_add_rcu(&data->list, &file->triggers);
> ret++;
>
> + update_cond_flag(file);
> if (trace_event_trigger_enable_disable(file, 1) < 0) {
> list_del_rcu(&data->list);
> + update_cond_flag(file);
> ret--;
> }
> - update_cond_flag(file);
> out:
> return ret;
> }
> @@ -575,8 +576,8 @@ static void unregister_trigger(char *glob, struct event_trigger_ops *ops,
> if (data->cmd_ops->trigger_type == test->cmd_ops->trigger_type) {
> unregistered = true;
> list_del_rcu(&data->list);
> - update_cond_flag(file);
> trace_event_trigger_enable_disable(file, 0);
> + update_cond_flag(file);
> break;
> }
> }
> @@ -1319,11 +1320,12 @@ static int event_enable_register_trigger(char *glob,
> list_add_rcu(&data->list, &file->triggers);
> ret++;
>
> + update_cond_flag(file);
> if (trace_event_trigger_enable_disable(file, 1) < 0) {
> list_del_rcu(&data->list);
> + update_cond_flag(file);
> ret--;
> }
> - update_cond_flag(file);
> out:
> return ret;
> }
> @@ -1344,8 +1346,8 @@ static void event_enable_unregister_trigger(char *glob,
> (enable_data->file == test_enable_data->file)) {
> unregistered = true;
> list_del_rcu(&data->list);
> - update_cond_flag(file);
> trace_event_trigger_enable_disable(file, 0);
> + update_cond_flag(file);
> break;
> }
> }
>


--
Masami HIRAMATSU
Linux Technology Research Center, System Productivity Research Dept.
Center for Technology Innovation - Systems Engineering
Hitachi, Ltd., Research & Development Group
E-mail: [email protected]

2015-07-21 14:10:58

by Tom Zanussi

[permalink] [raw]
Subject: Re: [PATCH v9 14/22] tracing: Add hist trigger 'hex' modifier for displaying numeric fields

Hi Namhyung,

On Sun, 2015-07-19 at 22:22 +0900, Namhyung Kim wrote:
> Hi Tom,
>
> On Thu, Jul 16, 2015 at 12:22:47PM -0500, Tom Zanussi wrote:
> > Allow users to have numeric fields displayed as hex values in the
> > output by appending '.hex' to field names:
> >
> > # echo hist:keys=aaa,bbb.hex:vals=ccc.hex ... \
> > [ if filter] > event/trigger
> >
> > Signed-off-by: Tom Zanussi <[email protected]>
> > ---
> > kernel/trace/trace.c | 5 +++-
> > kernel/trace/trace_events_hist.c | 49 +++++++++++++++++++++++++++++++++++++---
> > 2 files changed, 50 insertions(+), 4 deletions(-)
> >
> > diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
> > index 27daa28..14f9472 100644
> > --- a/kernel/trace/trace.c
> > +++ b/kernel/trace/trace.c
> > @@ -3810,7 +3810,10 @@ static const char readme_msg[] =
> > "\t entry is a simple list of the keys and values comprising the\n"
> > "\t entry; keys are printed first and are delineated by curly\n"
> > "\t braces, and are followed by the set of value fields for the\n"
> > - "\t entry. Numeric fields are displayed as base-10 integers.\n"
> > + "\t entry. By default, numeric fields are displayed as base-10\n"
> > + "\t integers. This can be modified by appending any of the\n"
> > + "\t following modifiers to the field name:\n\n"
> > + "\t .hex display a number as a hex value\n\n"
> > "\t By default, the size of the hash table is 2048 entries. The\n"
> > "\t 'size' param can be used to specify more or fewer than that.\n"
> > "\t The units are in terms of hashtable entries - if a run uses\n"
> > diff --git a/kernel/trace/trace_events_hist.c b/kernel/trace/trace_events_hist.c
> > index d8259fe..9cc38ee 100644
> > --- a/kernel/trace/trace_events_hist.c
> > +++ b/kernel/trace/trace_events_hist.c
> > @@ -72,6 +72,7 @@ enum hist_field_flags {
> > HIST_FIELD_HITCOUNT = 1,
> > HIST_FIELD_KEY = 2,
> > HIST_FIELD_STRING = 4,
> > + HIST_FIELD_HEX = 8,
> > };
> >
> > struct hist_trigger_attrs {
> > @@ -284,9 +285,20 @@ static int create_val_field(struct hist_trigger_data *hist_data,
> > {
> > struct ftrace_event_field *field = NULL;
> > unsigned long flags = 0;
> > + char *field_name;
> > int ret = 0;
> >
> > - field = trace_find_event_field(file->event_call, field_str);
> > + field_name = strsep(&field_str, ".");
> > + if (field_str) {
> > + if (!strcmp(field_str, "hex"))
> > + flags |= HIST_FIELD_HEX;
> > + else {
> > + ret = -EINVAL;
> > + goto out;
> > + }
> > + }
> > +
> > + field = trace_find_event_field(file->event_call, field_name);
> > if (!field) {
> > ret = -EINVAL;
> > goto out;
> > @@ -349,11 +361,22 @@ static int create_key_field(struct hist_trigger_data *hist_data,
> > struct ftrace_event_field *field = NULL;
> > unsigned long flags = 0;
> > unsigned int key_size;
> > + char *field_name;
> > int ret = 0;
> >
> > flags |= HIST_FIELD_KEY;
> >
> > - field = trace_find_event_field(file->event_call, field_str);
> > + field_name = strsep(&field_str, ".");
> > + if (field_str) {
> > + if (!strcmp(field_str, "hex"))
> > + flags |= HIST_FIELD_HEX;
> > + else {
> > + ret = -EINVAL;
> > + goto out;
> > + }
> > + }
> > +
> > + field = trace_find_event_field(file->event_call, field_name);
> > if (!field) {
> > ret = -EINVAL;
> > goto out;
> > @@ -688,7 +711,11 @@ hist_trigger_entry_print(struct seq_file *m,
> > if (i > hist_data->n_vals)
> > seq_puts(m, ", ");
> >
> > - if (key_field->flags & HIST_FIELD_STRING) {
> > + if (key_field->flags & HIST_FIELD_HEX) {
> > + uval = *(u64 *)(key + key_field->offset);
> > + seq_printf(m, "%s: %llx",
> > + key_field->field->name, uval);
> > + } else if (key_field->flags & HIST_FIELD_STRING) {
> > seq_printf(m, "%s: %-35s", key_field->field->name,
> > (char *)(key + key_field->offset));
> > } else {
>
> It seems the '.hex' modifier only affects key fields' output..
>

Yeah, the .hex modifier seems to have gotten dropped for values in the
patch splitup. Will fix, thanks for pointing it out.

Tom
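
(For reference, a minimal sketch of what the missing value-side handling
might look like in hist_trigger_entry_print(), assuming the value loop
reads its sums via tracing_map_read_sum() as elsewhere in the series;
this is not the actual fix:)

	for (i = 1; i < hist_data->n_vals; i++) {
		if (hist_data->fields[i]->flags & HIST_FIELD_HEX)
			seq_printf(m, "  %s: %10llx",
				   hist_data->fields[i]->field->name,
				   tracing_map_read_sum(elt, i));
		else
			seq_printf(m, "  %s: %10llu",
				   hist_data->fields[i]->field->name,
				   tracing_map_read_sum(elt, i));
	}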

> Thanks,
> Namhyung
>
>
> > @@ -791,9 +818,25 @@ const struct file_operations event_hist_fops = {
> > .release = single_release,
> > };
> >
> > +static const char *get_hist_field_flags(struct hist_field *hist_field)
> > +{
> > + const char *flags_str = NULL;
> > +
> > + if (hist_field->flags & HIST_FIELD_HEX)
> > + flags_str = "hex";
> > +
> > + return flags_str;
> > +}
> > +
> > static void hist_field_print(struct seq_file *m, struct hist_field *hist_field)
> > {
> > seq_printf(m, "%s", hist_field->field->name);
> > + if (hist_field->flags) {
> > + const char *flags_str = get_hist_field_flags(hist_field);
> > +
> > + if (flags_str)
> > + seq_printf(m, ".%s", flags_str);
> > + }
> > }
> >
> > static int event_hist_trigger_print(struct seq_file *m,
> > --
> > 1.9.3
> >

2015-07-21 14:15:54

by Tom Zanussi

[permalink] [raw]
Subject: Re: [PATCH v9 20/22] tracing: Remove restriction on string position in hist trigger keys

On Sun, 2015-07-19 at 22:31 +0900, Namhyung Kim wrote:
> On Thu, Jul 16, 2015 at 12:22:53PM -0500, Tom Zanussi wrote:
> > If we assume the maximum size for a string field, we don't have to
> > worry about its position. Since we only allow two keys in a compound
> > key and having more than one string key in a given compound key
> > doesn't make much sense anyway, trading a bit of extra space instead
> > of introducing an arbitrary restriction makes more sense.
> >
> > We also need to use the event field size for static strings when
> > copying the contents, otherwise we get random garbage in the key.
> >
> > Finally, rearrange the code without changing any functionality by
> > moving the compound key updating code into a separate function.
> >
> > Signed-off-by: Tom Zanussi <[email protected]>
>
> Looks good to me. Just a nitpick below..
>
>
> > ---
> > kernel/trace/trace_events_hist.c | 65 +++++++++++++++++++++++-----------------
> > 1 file changed, 37 insertions(+), 28 deletions(-)
> >
> > diff --git a/kernel/trace/trace_events_hist.c b/kernel/trace/trace_events_hist.c
> > index 67fffee..4ba7645 100644
> > --- a/kernel/trace/trace_events_hist.c
> > +++ b/kernel/trace/trace_events_hist.c
> > @@ -508,8 +508,8 @@ static int create_key_field(struct hist_trigger_data *hist_data,
> > goto out;
> > }
> >
> > - if (is_string_field(field)) /* should be last key field */
> > - key_size = HIST_KEY_SIZE_MAX - key_offset;
> > + if (is_string_field(field))
> > + key_size = MAX_FILTER_STR_VAL;
> > else
> > key_size = field->size;
> > }
> > @@ -781,9 +781,36 @@ static void hist_trigger_elt_update(struct hist_trigger_data *hist_data,
> > }
> > }
> >
> > +static inline void add_to_key(char *compound_key, void *key,
> > + struct hist_field *key_field, void *rec)
> > +{
> > + size_t size = key_field->size;
> > +
> > + if (key_field->flags & HIST_FIELD_STRING) {
> > + struct ftrace_event_field *field;
> > +
> > + /* ensure NULL-termination */
> > + size--;
>
> This is unnecessary since the size value will be updated below anyway.
> I think it's enough just to move the comment to ...
>
>
> > +
> > + field = key_field->field;
> > + if (field->filter_type == FILTER_DYN_STRING)
> > + size = *(u32 *)(rec + field->offset) >> 16;
> > + else if (field->filter_type == FILTER_PTR_STRING)
> > + size = strlen(key);
> > + else if (field->filter_type == FILTER_STATIC_STRING)
> > + size = field->size;
> > +
>
> ... here. :)
>

Yep, makes sense, will do.
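
So the string branch of add_to_key() would end up looking something
like this (just a sketch with the nit applied, i.e. the redundant
size-- dropped and the comment moved down to the clamp):

	if (key_field->flags & HIST_FIELD_STRING) {
		struct ftrace_event_field *field = key_field->field;

		if (field->filter_type == FILTER_DYN_STRING)
			size = *(u32 *)(rec + field->offset) >> 16;
		else if (field->filter_type == FILTER_PTR_STRING)
			size = strlen(key);
		else if (field->filter_type == FILTER_STATIC_STRING)
			size = field->size;

		/* ensure NULL-termination */
		if (size > key_field->size - 1)
			size = key_field->size - 1;
	}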

Tom

> > + if (size > key_field->size - 1)
> > + size = key_field->size - 1;
> > + }
> > +
> > + memcpy(compound_key + key_field->offset, key, size);
> > +}
> > +
> > static void event_hist_trigger(struct event_trigger_data *data, void *rec)
> > {
> > struct hist_trigger_data *hist_data = data->private_data;
> > + bool use_compound_key = (hist_data->n_keys > 1);
> > unsigned long entries[HIST_STACKTRACE_DEPTH];
> > char compound_key[HIST_KEY_SIZE_MAX];
> > struct stack_trace stacktrace;
> > @@ -798,8 +825,7 @@ static void event_hist_trigger(struct event_trigger_data *data, void *rec)
> > return;
> > }
> >
> > - if (hist_data->n_keys > 1)
> > - memset(compound_key, 0, hist_data->key_size);
> > + memset(compound_key, 0, hist_data->key_size);
> >
> > for (i = hist_data->n_vals; i < hist_data->n_fields; i++) {
> > key_field = hist_data->fields[i];
> > @@ -816,35 +842,18 @@ static void event_hist_trigger(struct event_trigger_data *data, void *rec)
> > key = entries;
> > } else {
> > field_contents = key_field->fn(key_field, rec);
> > - if (key_field->flags & HIST_FIELD_STRING)
> > + if (key_field->flags & HIST_FIELD_STRING) {
> > key = (void *)field_contents;
> > - else
> > + use_compound_key = true;
> > + } else
> > key = (void *)&field_contents;
> > -
> > - if (hist_data->n_keys > 1) {
> > - /* ensure NULL-termination */
> > - size_t size = key_field->size - 1;
> > -
> > - if (key_field->flags & HIST_FIELD_STRING) {
> > - struct ftrace_event_field *field;
> > -
> > - field = key_field->field;
> > - if (field->filter_type == FILTER_DYN_STRING)
> > - size = *(u32 *)(rec + field->offset) >> 16;
> > - else if (field->filter_type == FILTER_PTR_STRING)
> > - size = strlen(key);
> > -
> > - if (size > key_field->size - 1)
> > - size = key_field->size - 1;
> > - }
> > -
> > - memcpy(compound_key + key_field->offset, key,
> > - size);
> > - }
> > }
> > +
> > + if (use_compound_key)
> > + add_to_key(compound_key, key, key_field, rec);
> > }
> >
> > - if (hist_data->n_keys > 1)
> > + if (use_compound_key)
> > key = compound_key;
> >
> > elt = tracing_map_insert(hist_data->map, key);
> > --
> > 1.9.3
> >

2015-07-21 14:17:10

by Tom Zanussi

[permalink] [raw]
Subject: Re: [PATCH v9 08/22] tracing: Add 'hist' event trigger command

Hi Masami,

On Mon, 2015-07-20 at 22:37 +0900, Masami Hiramatsu wrote:
> Hi Tom,
>
> On 2015/07/17 2:22, Tom Zanussi wrote:
>
> > @@ -3782,6 +3785,32 @@ static const char readme_msg[] =
> > "\t To remove a trigger with a count:\n"
> > "\t echo '!<trigger>:0 > <system>/<event>/trigger\n"
> > "\t Filters can be ignored when removing a trigger.\n"
> > +#ifdef CONFIG_HIST_TRIGGERS
> > + " hist trigger\t- If set, event hits are aggregated into a hash table\n"
> > + "\t Format: hist:keys=<field1>\n"
> > + "\t [:size=#entries]\n"
> > + "\t [if <filter>]\n\n"
> > + "\t When a matching event is hit, an entry is added to a hash\n"
> > + "\t table using the key named. Keys correspond to fields in the\n"
> > + "\t event's format description. On an event hit, the value of a\n"
> > + "\t sum called 'hitcount' is incremented, which is simply a count\n"
> > + "\t of event hits. Keys can be any field.\n\n"
> > + "\t Reading the 'hist' file for the event will dump the hash\n"
> > + "\t table in its entirety to stdout. Each printed hash table\n"
> > + "\t entry is a simple list of the keys and values comprising the\n"
> > + "\t entry; keys are printed first and are delineated by curly\n"
> > + "\t braces, and are followed by the set of value fields for the\n"
> > + "\t entry. Numeric fields are displayed as base-10 integers.\n"
> > + "\t By default, the size of the hash table is 2048 entries. The\n"
> > + "\t 'size' param can be used to specify more or fewer than that.\n"
> > + "\t The units are in terms of hashtable entries - if a run uses\n"
> > + "\t more entries than specified, the results will show the number\n"
> > + "\t of 'drops', the number of hits that were ignored. The size\n"
> > + "\t should be a power of 2 between 128 and 131072 (any non-\n"
> > + "\t power-of-2 number specified will be rounded up).\n\n"
> > + "\t The entries are sorted by 'hitcount' and the sort order is\n"
> > + "\t 'ascending'.\n\n"
>
>
> Hmm, this seems too much about implementation of histogram. Could you shorten this
> to be a half ?
>

OK, I'll try to cut it down to just usage essentials.

Tom

>
> Thank you,
>
>

2015-07-21 14:19:51

by Tom Zanussi

[permalink] [raw]
Subject: Re: [PATCH v9 00/22] tracing: 'hist' triggers

On Tue, 2015-07-21 at 18:40 +0900, Masami Hiramatsu wrote:
> Hi Tom,
>
> Thank you for updating your patches, I'm testing it.
>
> And when I'm testing hist trigger, I've found that the .hex modifiers on value
> doesn't work, but no semantic error.
>

Yeah, the hex modifier on values got dropped when refactoring the
patchset; will fix. Thanks for testing and pointing it out.

Tom

> [root@localhost tracing]# echo 'hist:keys=parent_pid:vals=common_pid.hex' > events/sched/sched_process_fork/trigger
> [root@localhost tracing]#
>
> [root@localhost tracing]# cat events/sched/sched_process_fork/hist
> # trigger info: hist:keys=parent_pid:vals=hitcount,common_pid.hex:sort=hitcount:size=2048 [active]
>
> { parent_pid: 26582 } hitcount: 1 common_pid: 26582
> { parent_pid: 11968 } hitcount: 1 common_pid: 11968
> { parent_pid: 11956 } hitcount: 2 common_pid: 23912
>
> Totals:
> Hits: 4
> Entries: 3
> Dropped: 0
>
> while other modifiers return -EINVAL.
>
> [root@localhost tracing]# echo 'hist:keys=parent_pid:vals=common_pid.execname' > events/sched/sched_process_fork/trigger
> -bash: echo: write error: Invalid argument
>
> Thank you,
>
> On 2015/07/17 2:22, Tom Zanussi wrote:
> > This is v9 of the 'hist triggers' patchset.
> >
> > Changes from v8:
> >
> > Same as v8, but with the RFC patch [ftrace: Add function_hist tracer]
> > removed, and rebased to latest trace/for-next.
> >
> > Changes from v7:
> >
> > This version refactors the commits as suggested by Masami. There are
> > now more commits, but the result should be much more reviewable. The
> > ending code is the same as before, modulo a couple minor bug fixes I
> > discovered while refactoring and testing.
> >
> > I've also reviewed and fixed a number of shortcomings and errors in
> > the comments, and have added a new discussion of the tracing_map data
> > structures after Steve mentioned he found them confusing and/or
> > insufficiently documented.
> >
> > Also, I kept Namhyung's string patch [tracing: Support string type key
> > properly] as submitted, but added a follow-on patch that refactors it
> > and fixes a problem I found with it that enabled static string keys to
> > contain random chars and therefore incorrect map insertions.
> >
> > Changes from v6:
> >
> > This version adds a new 'sym-offset' modifier as requested by Masami.
> > I implemented it as a modifier rather than using the trace option as
> > suggested, in part because I wanted to keep it all self-contained and
> > it seemed more consistent to just add it alongside the 'sym' modifier.
> > Also, hist triggers arent't really a tracer and therefore don't
> > directly tie into the option update/callback mechanism so making use
> > of it isn't as simple as a normal tracer.
> >
> > I also changed the sort key specification to be stricter and signal an
> > error if the specified sort key wasn't found (rather than defaulting
> > to hitcount in those cases), also suggested by Masami. Thanks,
> > Masami, for your input!
> >
> > Also updated the Documentation and tracing/README to reflect the
> > changes.
> >
> > Changes from v5:
> >
> > This version adds support for compound keys, along with the related
> > ability to sort using primary and secondary keys. This was mentioned
> > in previous versions as the last important piece that remained
> > unimplemented, and is now implemented. (I didn't have time to get to
> > the couple of enhancements suggested by Masami, but I expect to be
> > able to add those later on top of these.)
> >
> > Because we now support compound keys and it's not immediately clear in
> > the output exactly which fields correspond to keys, the key(s),
> > compound or not, are now enclosed by curly braces.
> >
> > The Documentation and README have been updated to reflect the changes,
> > and several new examples have been added to illustrate how to use
> > compound keys.
> >
> > Also, the code was updated to work with the new ftrace_event_file,
> > etc, renaming in tracing/for-next.
> >
> > Changes from v4:
> >
> > This version addresses some problems and suggestions made by Daniel
> > Wagner - a lot of the code was reworked to get rid of the distinction
> > between keys and values, and as a result, both keys and values can be
> > used as sort keys. As suggested, it also allows 'val=' to be absent
> > in a trigger command - if no 'val' is specified, hitcount is assumed
> > and automatically used as the only val.
> >
> > The map code was also separated out into a separate file,
> > tracing_map.c, allowing it to be reused. It also adds a second tracer
> > called function_hist that actually does reuse the code, as an RFC
> > patch.
> >
> > Patch 01/10 [tracing: Update cond flag when enabling or disabling..]
> > is a fix for a problem noticed by Daniel and that fixes a problem in
> > existing trigger code and should be applied regardless of whether the
> > rest of the patchset is merged.
> >
> > As mentioned, patch 10/10 is an RFC patch implementing a new tracer
> > based on the function tracer code. It's a fun little tool and is
> > useful for a specific problem I'm working on (and is also a nice test
> > of the tracing_map code), but is an RFC because first, I'm not sure it
> > would really be of general interest and secondly, it's POC-level
> > quality and I'd need to spend more time fixing it up to make it
> > upstreamable, but I don't want to waste my time if not.
> >
> > There are a couple of important bits of functionality that were
> > present in v1 but not yet reimplemented in v5.
> >
> > The first is support for compound keys. Currently, maps can only be
> > keyed on a single event field, whereas in v1 they could be keyed on
> > multiple keys. With support for compound keys, you can create much
> > more interesting output, such as for example per-pid lists of
> > syscalls or read counts e.g.:
> >
> > # echo 'hist:keys=common_pid.execname,id.syscall:vals=hitcount' > \
> > /sys/kernel/debug/tracing/events/raw_syscalls/sys_enter/trigger
> >
> > # cat /sys/kernel/debug/tracing/events/raw_syscalls/sys_enter/hist
> >
> > key: common_pid:bash[3112], id:sys_write vals: count:69
> > key: common_pid:bash[3112], id:sys_rt_sigprocmask vals: count:218
> >
> > key: common_pid:update-notifier[3164], id:sys_poll vals: count:37
> > key: common_pid:update-notifier[3164], id:sys_recvfrom vals: count:118
> >
> > key: common_pid:deja-dup-monito[3194], id:sys_sendto vals: count:1
> > key: common_pid:deja-dup-monito[3194], id:sys_read vals: count:4
> > key: common_pid:deja-dup-monito[3194], id:sys_poll vals: count:8
> > key: common_pid:deja-dup-monito[3194], id:sys_recvmsg vals: count:8
> > key: common_pid:deja-dup-monito[3194], id:sys_getegid vals: count:8
> >
> > key: common_pid:emacs[3275], id:sys_fsync vals: count:1
> > key: common_pid:emacs[3275], id:sys_open vals: count:1
> > key: common_pid:emacs[3275], id:sys_symlink vals: count:2
> > key: common_pid:emacs[3275], id:sys_poll vals: count:23
> > key: common_pid:emacs[3275], id:sys_select vals: count:23
> > key: common_pid:emacs[3275], id:unknown_syscall vals: count:34
> > key: common_pid:emacs[3275], id:sys_ioctl vals: count:60
> > key: common_pid:emacs[3275], id:sys_rt_sigprocmask vals: count:116
> >
> > key: common_pid:cat[3323], id:sys_munmap vals: count:1
> > key: common_pid:cat[3323], id:sys_fadvise64 vals: count:1
> >
> > Related to that is support for sorting on multiple fields. Currently,
> > you can sort using only a primary key. Being able to sort on multiple
> > or at least a secondary key is indispensable for seeing trends when
> > displaying multiple values.
> >
> > Changes from v3:
> >
> > v4 fixes the race in tracing_map_insert() noted in v3, where
> > map.val.key could be checked even if map.val wasn't yet set. The
> > simple fix for that in tracing_map_insert() introduces the possibility
> > of duplicates in the map, which though rare, need to be accounted for
> > in the output. To address that, duplicate-merging code was added to
> > the map-printing code.
> >
> > It was also pointed out that it didn't seem correct to include
> > module.h, but the fix for that has deeper roots and is being addressed
> > by a separate patchset; for now we need to continue including
> > module.h, though prompted by that I did some other header include
> > cleanup.
> >
> > The functionality remains the same as v2, but this version no longer
> > tries to export and use bpf_maps, and more importantly removes the
> > associated GFP_NOTRACE/trace event hacks and kmem macros required to
> > work around the bpf_map implementation.
> >
> > The tracing_map functionality is instead built on top of a simple
> > lock-free map algorithm originated by Dr. Cliff Click (see references
> > in the code for more details), which though too restrictive to be
> > general-purpose in its current form, functions nicely as a
> > special-purpose tracing map.
> >
> > v3 also moves the hist triggers code into a separate file and puts it
> > all behind a new config option, CONFIG_HIST_TRIGGERS. It also merges
> > in the sorting code rather than keeping it as a separate patch.
> >
> > This patchset also includes a couple other new and related triggers,
> > enable_hist and disable_hist, very similar to the existing
> > enable_event/disable_event triggers used to automatically enable and
> > disable events based on a triggering condition, but in this case
> > allowing hist triggers to be enabled and disabled in the same way.
> >
> > - Added an insert check for val before checking the key associated with val
> > - Added code to merge possible duplicates in the map
> >
> > Changes from v2:
> > - reimplemented tracing_map, replacing bpf_map with nmi-safe/lock-free map
> > - removed GPF_NOTRACE, kmalloc/free macros and event hacks needed by bpf_maps
> > - moved hist triggers from trace_events_trigger.c to trace_events_hist.c
> > - added CONFIG_HIST_TRIGGERS config option
> > - consolidated sorting code with main patch
> >
> > Changes from v1:
> > - completely rewritten on top of tracing_map (renamed and exported bpf_map)
> > - added map clearing and client ops to tracing_map
> > - changed the name from 'hash' triggers to 'hist' triggers
> > - added new trigger 'pause' feature
> > - added new enable_hist and disable_hist triggers
> > - added usage for hist/enable_hist/disable hist to tracing/README
> > - moved examples into Documentation/trace/event.txt
> > - added ___GFP_NOTRACE, kmalloc/kfree macros, and conditional kmem tracepoints
> >
> > The following changes since commit b44754d8262d3aab842998cf747f44fe6090be9f:
> >
> > ring_buffer: Allow to exit the ring buffer benchmark immediately (2015-06-15 12:03:12 -0400)
> >
> > are available in the git repository at:
> >
> > git://git.yoctoproject.org/linux-yocto-contrib.git tzanussi/hist-triggers-v9
> > http://git.yoctoproject.org/cgit/cgit.cgi/linux-yocto-contrib/log/?h=tzanussi/hist-triggers-v9
> >
> > Namhyung Kim (1):
> > tracing: Support string type key properly
> >
> > Tom Zanussi (21):
> > tracing: Update cond flag when enabling or disabling a trigger
> > tracing: Make ftrace_event_field checking functions available
> > tracing: Make event trigger functions available
> > tracing: Add event record param to trigger_ops.func()
> > tracing: Add get_syscall_name()
> > tracing: Add a per-event-trigger 'paused' field
> > tracing: Add lock-free tracing_map
> > tracing: Add 'hist' event trigger command
> > tracing: Add hist trigger support for multiple values ('vals=' param)
> > tracing: Add hist trigger support for compound keys
> > tracing: Add hist trigger support for user-defined sorting ('sort='
> > param)
> > tracing: Add hist trigger support for pausing and continuing a trace
> > tracing: Add hist trigger support for clearing a trace
> > tracing: Add hist trigger 'hex' modifier for displaying numeric fields
> > tracing: Add hist trigger 'sym' and 'sym-offset' modifiers
> > tracing: Add hist trigger 'execname' modifier
> > tracing: Add hist trigger 'syscall' modifier
> > tracing: Add hist trigger support for stacktraces as keys
> > tracing: Remove restriction on string position in hist trigger keys
> > tracing: Add enable_hist/disable_hist triggers
> > tracing: Add 'hist' trigger Documentation
> >
> > Documentation/trace/events.txt | 1131 +++++++++++++++++++++++++++
> > include/linux/trace_events.h | 9 +-
> > kernel/trace/Kconfig | 14 +
> > kernel/trace/Makefile | 2 +
> > kernel/trace/trace.c | 66 ++
> > kernel/trace/trace.h | 77 +-
> > kernel/trace/trace_events.c | 4 +
> > kernel/trace/trace_events_filter.c | 12 -
> > kernel/trace/trace_events_hist.c | 1462 +++++++++++++++++++++++++++++++++++
> > kernel/trace/trace_events_trigger.c | 149 ++--
> > kernel/trace/trace_syscalls.c | 11 +
> > kernel/trace/tracing_map.c | 935 ++++++++++++++++++++++
> > kernel/trace/tracing_map.h | 258 +++++++
> > 13 files changed, 4046 insertions(+), 84 deletions(-)
> > create mode 100644 kernel/trace/trace_events_hist.c
> > create mode 100644 kernel/trace/tracing_map.c
> > create mode 100644 kernel/trace/tracing_map.h
> >
>
>

2015-07-21 14:26:06

by Tom Zanussi

[permalink] [raw]
Subject: Re: [PATCH v9 02/22] tracing: Make ftrace_event_field checking functions available

On Tue, 2015-07-21 at 19:04 +0900, Masami Hiramatsu wrote:
> On 2015/07/17 2:22, Tom Zanussi wrote:
> > Make is_string_field() and is_function_field() accessible outside of
> > trace_event_filters.c for other users of ftrace_event_fields.
> >
> > Signed-off-by: Tom Zanussi <[email protected]
>
> Reviewed-by: Masami Hiramatsu <[email protected]>
>
> BTW, is there any reason why we split this from caller-side change?
> this short change can be merged into the patch which actual requires this.
>

I kept it separate because I thought it would be useful regardless of
whether the rest of the hist triggers patchset gets merged, so I'd
rather keep it that way.
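
For context, the hist trigger code uses these helpers in a few places
outside the filter code, e.g. when sizing key fields (excerpted from
patch 20 earlier in this thread):

	if (is_string_field(field))
		key_size = MAX_FILTER_STR_VAL;
	else
		key_size = field->size;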

Tom

> Thanks,
>
> > ---
> > kernel/trace/trace.h | 12 ++++++++++++
> > kernel/trace/trace_events_filter.c | 12 ------------
> > 2 files changed, 12 insertions(+), 12 deletions(-)
> >
> > diff --git a/kernel/trace/trace.h b/kernel/trace/trace.h
> > index 4c41fcd..891c5b0 100644
> > --- a/kernel/trace/trace.h
> > +++ b/kernel/trace/trace.h
> > @@ -1050,6 +1050,18 @@ struct filter_pred {
> > unsigned short right;
> > };
> >
> > +static inline bool is_string_field(struct ftrace_event_field *field)
> > +{
> > + return field->filter_type == FILTER_DYN_STRING ||
> > + field->filter_type == FILTER_STATIC_STRING ||
> > + field->filter_type == FILTER_PTR_STRING;
> > +}
> > +
> > +static inline bool is_function_field(struct ftrace_event_field *field)
> > +{
> > + return field->filter_type == FILTER_TRACE_FN;
> > +}
> > +
> > extern enum regex_type
> > filter_parse_regex(char *buff, int len, char **search, int *not);
> > extern void print_event_filter(struct trace_event_file *file,
> > diff --git a/kernel/trace/trace_events_filter.c b/kernel/trace/trace_events_filter.c
> > index 71511eb..245ee5d 100644
> > --- a/kernel/trace/trace_events_filter.c
> > +++ b/kernel/trace/trace_events_filter.c
> > @@ -917,18 +917,6 @@ int filter_assign_type(const char *type)
> > return FILTER_OTHER;
> > }
> >
> > -static bool is_function_field(struct ftrace_event_field *field)
> > -{
> > - return field->filter_type == FILTER_TRACE_FN;
> > -}
> > -
> > -static bool is_string_field(struct ftrace_event_field *field)
> > -{
> > - return field->filter_type == FILTER_DYN_STRING ||
> > - field->filter_type == FILTER_STATIC_STRING ||
> > - field->filter_type == FILTER_PTR_STRING;
> > -}
> > -
> > static int is_legal_op(struct ftrace_event_field *field, int op)
> > {
> > if (is_string_field(field) &&
> >
>
>

2015-07-21 16:10:53

by Tom Zanussi

[permalink] [raw]
Subject: Re: [PATCH v9 21/22] tracing: Add enable_hist/disable_hist triggers

Hi Masami,

On Mon, 2015-07-20 at 23:57 +0900, Masami Hiramatsu wrote:
> On 2015/07/17 2:22, Tom Zanussi wrote:
> > Similar to enable_event/disable_event triggers, these triggers enable
> > and disable the aggregation of events into maps rather than enabling
> > and disabling their writing into the trace buffer.
> >
> > They can be used to automatically start and stop hist triggers based
> > on a matching filter condition.
> >
> > If there's a paused hist trigger on system:event, the following would
> > start it when the filter condition was hit:
> >
> > # echo enable_hist:system:event [ if filter] > event/trigger
> >
> > And the following would disable a running system:event hist trigger:
> >
> > # echo disable_hist:system:event [ if filter] > event/trigger
> >
> > See Documentation/trace/events.txt for real examples.
>
> Hmm, do we really need this? Since we've already had multiple instances,
> if someone wants to make histogram separated from event logger, he/she can
> make another instance for that, and disable/enable event itself.
>
> I'm considering if we accept this methods, we'll need to accept another
> enable/disable triggers for each action too in the future.
>

OK, I haven't implemented multiple instances yet, but if I understand
you correctly, what you're suggesting is that we can accomplish the
same thing by setting up a disabled histogram on system:event and then
simply using the existing enable_event:system:event trigger to turn it
on, and likewise to disable it.

I guess we'd need to add the histogram instance name to the syntax of
enable_event:system:event to distinguish between enabling a histogram
and the current behavior of enabling logging.

So here's what we currently have. This sets up a histogram and starts
it running, and the user cats event/hist to get the results:

# echo hist:keys=xxx > event1/trigger
# cat event1/hist

And separately the existing enable_event trigger, which enables event1
(starts it logging to the event logger, and has nothing to do with
histograms) when event2 is hit:

# echo enable_event:system:event1 > event2/trigger

So to extend enable_event to support histograms, we need to be able
to do the following. First, set up a paused histogram:

# echo hist:keys=xxx:pause > event1/trigger
# cat event1/hist

Which would be enabled via enable_event like this:

# echo enable_event:system:event1:hist > event2/trigger

Of course 'hist' refers to the initial single-instance histogram - if we
had multiple instances, 'hist' would be replaced by the instance name,
e.g. to set up two different histograms, each with a different filter:

# echo hist:keys=xxx:pause if filter1 > event1/trigger
# cat event1/hist

# echo hist:keys=xxx:pause if filter2 > event1/trigger
# cat event1/hist2

To enable the first histogram when event2 is hit:

# echo enable_event:system:event1:hist > event2/trigger

And to enable the second histogram when event2 is hit:

# echo enable_event:system:event1:hist2 > event2/trigger

Does that align with what you were thinking regarding both instances and
the enable/disable_event triggers? If not, some more explanation and
examples would help ;-)

Thanks,

Tom

> Thank you,
>
> >
> > Signed-off-by: Tom Zanussi <[email protected]>
> > ---
> > include/linux/trace_events.h | 1 +
> > kernel/trace/trace.c | 11 ++++
> > kernel/trace/trace.h | 32 ++++++++++
> > kernel/trace/trace_events_hist.c | 115 ++++++++++++++++++++++++++++++++++++
> > kernel/trace/trace_events_trigger.c | 71 ++++++++++++----------
> > 5 files changed, 199 insertions(+), 31 deletions(-)
> >
> > diff --git a/include/linux/trace_events.h b/include/linux/trace_events.h
> > index 0faf48b..0f3ffdd 100644
> > --- a/include/linux/trace_events.h
> > +++ b/include/linux/trace_events.h
> > @@ -411,6 +411,7 @@ enum event_trigger_type {
> > ETT_STACKTRACE = (1 << 2),
> > ETT_EVENT_ENABLE = (1 << 3),
> > ETT_EVENT_HIST = (1 << 4),
> > + ETT_HIST_ENABLE = (1 << 5),
> > };
> >
> > extern int filter_match_preds(struct event_filter *filter, void *rec);
> > diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
> > index 16c64a2..c581750 100644
> > --- a/kernel/trace/trace.c
> > +++ b/kernel/trace/trace.c
> > @@ -3761,6 +3761,10 @@ static const char readme_msg[] =
> > "\t trigger: traceon, traceoff\n"
> > "\t enable_event:<system>:<event>\n"
> > "\t disable_event:<system>:<event>\n"
> > +#ifdef CONFIG_HIST_TRIGGERS
> > + "\t enable_hist:<system>:<event>\n"
> > + "\t disable_hist:<system>:<event>\n"
> > +#endif
> > #ifdef CONFIG_STACKTRACE
> > "\t\t stacktrace\n"
> > #endif
> > @@ -3836,6 +3840,13 @@ static const char readme_msg[] =
> > "\t restart a paused hist trigger.\n\n"
> > "\t The 'clear' param will clear the contents of a running hist\n"
> > "\t trigger and leave its current paused/active state.\n\n"
> > + "\t The enable_hist and disable_hist triggers can be used to\n"
> > + "\t have one event conditionally start and stop another event's\n"
> > + "\t already-attached hist trigger. Any number of enable_hist\n"
> > + "\t and disable_hist triggers can be attached to a given event,\n"
> > + "\t allowing that event to kick off and stop aggregations on\n"
> > + "\t a host of other events. See Documentation/trace/events.txt\n"
> > + "\t for examples.\n"
> > #endif
> > ;
> >
> > diff --git a/kernel/trace/trace.h b/kernel/trace/trace.h
> > index e6cb781..5e2e3b0 100644
> > --- a/kernel/trace/trace.h
> > +++ b/kernel/trace/trace.h
> > @@ -1102,8 +1102,10 @@ extern const struct file_operations event_hist_fops;
> >
> > #ifdef CONFIG_HIST_TRIGGERS
> > extern int register_trigger_hist_cmd(void);
> > +extern int register_trigger_hist_enable_disable_cmds(void);
> > #else
> > static inline int register_trigger_hist_cmd(void) { return 0; }
> > +static inline int register_trigger_hist_enable_disable_cmds(void) { return 0; }
> > #endif
> >
> > extern int register_trigger_cmds(void);
> > @@ -1121,6 +1123,34 @@ struct event_trigger_data {
> > struct list_head list;
> > };
> >
> > +/* Avoid typos */
> > +#define ENABLE_EVENT_STR "enable_event"
> > +#define DISABLE_EVENT_STR "disable_event"
> > +#define ENABLE_HIST_STR "enable_hist"
> > +#define DISABLE_HIST_STR "disable_hist"
> > +
> > +struct enable_trigger_data {
> > + struct trace_event_file *file;
> > + bool enable;
> > + bool hist;
> > +};
> > +
> > +extern int event_enable_trigger_print(struct seq_file *m,
> > + struct event_trigger_ops *ops,
> > + struct event_trigger_data *data);
> > +extern void event_enable_trigger_free(struct event_trigger_ops *ops,
> > + struct event_trigger_data *data);
> > +extern int event_enable_trigger_func(struct event_command *cmd_ops,
> > + struct trace_event_file *file,
> > + char *glob, char *cmd, char *param);
> > +extern int event_enable_register_trigger(char *glob,
> > + struct event_trigger_ops *ops,
> > + struct event_trigger_data *data,
> > + struct trace_event_file *file);
> > +extern void event_enable_unregister_trigger(char *glob,
> > + struct event_trigger_ops *ops,
> > + struct event_trigger_data *test,
> > + struct trace_event_file *file);
> > extern void trigger_data_free(struct event_trigger_data *data);
> > extern int event_trigger_init(struct event_trigger_ops *ops,
> > struct event_trigger_data *data);
> > @@ -1134,6 +1164,8 @@ extern int set_trigger_filter(char *filter_str,
> > struct event_trigger_data *trigger_data,
> > struct trace_event_file *file);
> > extern int register_event_command(struct event_command *cmd);
> > +extern int unregister_event_command(struct event_command *cmd);
> > +extern int register_trigger_hist_enable_disable_cmds(void);
> >
> > /**
> > * struct event_trigger_ops - callbacks for trace event triggers
> > diff --git a/kernel/trace/trace_events_hist.c b/kernel/trace/trace_events_hist.c
> > index 4ba7645..6a43611 100644
> > --- a/kernel/trace/trace_events_hist.c
> > +++ b/kernel/trace/trace_events_hist.c
> > @@ -1345,3 +1345,118 @@ __init int register_trigger_hist_cmd(void)
> >
> > return ret;
> > }
> > +
> > +static void
> > +hist_enable_trigger(struct event_trigger_data *data, void *rec)
> > +{
> > + struct enable_trigger_data *enable_data = data->private_data;
> > + struct event_trigger_data *test;
> > +
> > + list_for_each_entry_rcu(test, &enable_data->file->triggers, list) {
> > + if (test->cmd_ops->trigger_type == ETT_EVENT_HIST) {
> > + if (enable_data->enable)
> > + test->paused = false;
> > + else
> > + test->paused = true;
> > + break;
> > + }
> > + }
> > +}
> > +
> > +static void
> > +hist_enable_count_trigger(struct event_trigger_data *data, void *rec)
> > +{
> > + if (!data->count)
> > + return;
> > +
> > + if (data->count != -1)
> > + (data->count)--;
> > +
> > + hist_enable_trigger(data, rec);
> > +}
> > +
> > +static struct event_trigger_ops hist_enable_trigger_ops = {
> > + .func = hist_enable_trigger,
> > + .print = event_enable_trigger_print,
> > + .init = event_trigger_init,
> > + .free = event_enable_trigger_free,
> > +};
> > +
> > +static struct event_trigger_ops hist_enable_count_trigger_ops = {
> > + .func = hist_enable_count_trigger,
> > + .print = event_enable_trigger_print,
> > + .init = event_trigger_init,
> > + .free = event_enable_trigger_free,
> > +};
> > +
> > +static struct event_trigger_ops hist_disable_trigger_ops = {
> > + .func = hist_enable_trigger,
> > + .print = event_enable_trigger_print,
> > + .init = event_trigger_init,
> > + .free = event_enable_trigger_free,
> > +};
> > +
> > +static struct event_trigger_ops hist_disable_count_trigger_ops = {
> > + .func = hist_enable_count_trigger,
> > + .print = event_enable_trigger_print,
> > + .init = event_trigger_init,
> > + .free = event_enable_trigger_free,
> > +};
> > +
> > +static struct event_trigger_ops *
> > +hist_enable_get_trigger_ops(char *cmd, char *param)
> > +{
> > + struct event_trigger_ops *ops;
> > + bool enable;
> > +
> > + enable = (strcmp(cmd, ENABLE_HIST_STR) == 0);
> > +
> > + if (enable)
> > + ops = param ? &hist_enable_count_trigger_ops :
> > + &hist_enable_trigger_ops;
> > + else
> > + ops = param ? &hist_disable_count_trigger_ops :
> > + &hist_disable_trigger_ops;
> > +
> > + return ops;
> > +}
> > +
> > +static struct event_command trigger_hist_enable_cmd = {
> > + .name = ENABLE_HIST_STR,
> > + .trigger_type = ETT_HIST_ENABLE,
> > + .func = event_enable_trigger_func,
> > + .reg = event_enable_register_trigger,
> > + .unreg = event_enable_unregister_trigger,
> > + .get_trigger_ops = hist_enable_get_trigger_ops,
> > + .set_filter = set_trigger_filter,
> > +};
> > +
> > +static struct event_command trigger_hist_disable_cmd = {
> > + .name = DISABLE_HIST_STR,
> > + .trigger_type = ETT_HIST_ENABLE,
> > + .func = event_enable_trigger_func,
> > + .reg = event_enable_register_trigger,
> > + .unreg = event_enable_unregister_trigger,
> > + .get_trigger_ops = hist_enable_get_trigger_ops,
> > + .set_filter = set_trigger_filter,
> > +};
> > +
> > +static __init void unregister_trigger_hist_enable_disable_cmds(void)
> > +{
> > + unregister_event_command(&trigger_hist_enable_cmd);
> > + unregister_event_command(&trigger_hist_disable_cmd);
> > +}
> > +
> > +__init int register_trigger_hist_enable_disable_cmds(void)
> > +{
> > + int ret;
> > +
> > + ret = register_event_command(&trigger_hist_enable_cmd);
> > + if (WARN_ON(ret < 0))
> > + return ret;
> > + ret = register_event_command(&trigger_hist_disable_cmd);
> > + if (WARN_ON(ret < 0))
> > + unregister_trigger_hist_enable_disable_cmds();
> > +
> > + return ret;
> > +}
> > diff --git a/kernel/trace/trace_events_trigger.c b/kernel/trace/trace_events_trigger.c
> > index e80f30b..9490d8f 100644
> > --- a/kernel/trace/trace_events_trigger.c
> > +++ b/kernel/trace/trace_events_trigger.c
> > @@ -338,7 +338,7 @@ __init int register_event_command(struct event_command *cmd)
> > * Currently we only unregister event commands from __init, so mark
> > * this __init too.
> > */
> > -static __init int unregister_event_command(struct event_command *cmd)
> > +__init int unregister_event_command(struct event_command *cmd)
> > {
> > struct event_command *p, *n;
> > int ret = -ENODEV;
> > @@ -1052,15 +1052,6 @@ static __init void unregister_trigger_traceon_traceoff_cmds(void)
> > unregister_event_command(&trigger_traceoff_cmd);
> > }
> >
> > -/* Avoid typos */
> > -#define ENABLE_EVENT_STR "enable_event"
> > -#define DISABLE_EVENT_STR "disable_event"
> > -
> > -struct enable_trigger_data {
> > - struct trace_event_file *file;
> > - bool enable;
> > -};
> > -
> > static void
> > event_enable_trigger(struct event_trigger_data *data, void *rec)
> > {
> > @@ -1090,14 +1081,16 @@ event_enable_count_trigger(struct event_trigger_data *data, void *rec)
> > event_enable_trigger(data, rec);
> > }
> >
> > -static int
> > -event_enable_trigger_print(struct seq_file *m, struct event_trigger_ops *ops,
> > - struct event_trigger_data *data)
> > +int event_enable_trigger_print(struct seq_file *m,
> > + struct event_trigger_ops *ops,
> > + struct event_trigger_data *data)
> > {
> > struct enable_trigger_data *enable_data = data->private_data;
> >
> > seq_printf(m, "%s:%s:%s",
> > - enable_data->enable ? ENABLE_EVENT_STR : DISABLE_EVENT_STR,
> > + enable_data->hist ?
> > + (enable_data->enable ? ENABLE_HIST_STR : DISABLE_HIST_STR) :
> > + (enable_data->enable ? ENABLE_EVENT_STR : DISABLE_EVENT_STR),
> > enable_data->file->event_call->class->system,
> > trace_event_name(enable_data->file->event_call));
> >
> > @@ -1114,9 +1107,8 @@ event_enable_trigger_print(struct seq_file *m, struct event_trigger_ops *ops,
> > return 0;
> > }
> >
> > -static void
> > -event_enable_trigger_free(struct event_trigger_ops *ops,
> > - struct event_trigger_data *data)
> > +void event_enable_trigger_free(struct event_trigger_ops *ops,
> > + struct event_trigger_data *data)
> > {
> > struct enable_trigger_data *enable_data = data->private_data;
> >
> > @@ -1161,10 +1153,9 @@ static struct event_trigger_ops event_disable_count_trigger_ops = {
> > .free = event_enable_trigger_free,
> > };
> >
> > -static int
> > -event_enable_trigger_func(struct event_command *cmd_ops,
> > - struct trace_event_file *file,
> > - char *glob, char *cmd, char *param)
> > +int event_enable_trigger_func(struct event_command *cmd_ops,
> > + struct trace_event_file *file,
> > + char *glob, char *cmd, char *param)
> > {
> > struct trace_event_file *event_enable_file;
> > struct enable_trigger_data *enable_data;
> > @@ -1173,6 +1164,7 @@ event_enable_trigger_func(struct event_command *cmd_ops,
> > struct trace_array *tr = file->tr;
> > const char *system;
> > const char *event;
> > + bool hist = false;
> > char *trigger;
> > char *number;
> > bool enable;
> > @@ -1197,8 +1189,15 @@ event_enable_trigger_func(struct event_command *cmd_ops,
> > if (!event_enable_file)
> > goto out;
> >
> > - enable = strcmp(cmd, ENABLE_EVENT_STR) == 0;
> > +#ifdef CONFIG_HIST_TRIGGERS
> > + hist = ((strcmp(cmd, ENABLE_HIST_STR) == 0) ||
> > + (strcmp(cmd, DISABLE_HIST_STR) == 0));
> >
> > + enable = ((strcmp(cmd, ENABLE_EVENT_STR) == 0) ||
> > + (strcmp(cmd, ENABLE_HIST_STR) == 0));
> > +#else
> > + enable = strcmp(cmd, ENABLE_EVENT_STR) == 0;
> > +#endif
> > trigger_ops = cmd_ops->get_trigger_ops(cmd, trigger);
> >
> > ret = -ENOMEM;
> > @@ -1218,6 +1217,7 @@ event_enable_trigger_func(struct event_command *cmd_ops,
> > INIT_LIST_HEAD(&trigger_data->list);
> > RCU_INIT_POINTER(trigger_data->filter, NULL);
> >
> > + enable_data->hist = hist;
> > enable_data->enable = enable;
> > enable_data->file = event_enable_file;
> > trigger_data->private_data = enable_data;
> > @@ -1295,10 +1295,10 @@ event_enable_trigger_func(struct event_command *cmd_ops,
> > goto out;
> > }
> >
> > -static int event_enable_register_trigger(char *glob,
> > - struct event_trigger_ops *ops,
> > - struct event_trigger_data *data,
> > - struct trace_event_file *file)
> > +int event_enable_register_trigger(char *glob,
> > + struct event_trigger_ops *ops,
> > + struct event_trigger_data *data,
> > + struct trace_event_file *file)
> > {
> > struct enable_trigger_data *enable_data = data->private_data;
> > struct enable_trigger_data *test_enable_data;
> > @@ -1308,6 +1308,8 @@ static int event_enable_register_trigger(char *glob,
> > list_for_each_entry_rcu(test, &file->triggers, list) {
> > test_enable_data = test->private_data;
> > if (test_enable_data &&
> > + (test->cmd_ops->trigger_type ==
> > + data->cmd_ops->trigger_type) &&
> > (test_enable_data->file == enable_data->file)) {
> > ret = -EEXIST;
> > goto out;
> > @@ -1333,10 +1335,10 @@ out:
> > return ret;
> > }
> >
> > -static void event_enable_unregister_trigger(char *glob,
> > - struct event_trigger_ops *ops,
> > - struct event_trigger_data *test,
> > - struct trace_event_file *file)
> > +void event_enable_unregister_trigger(char *glob,
> > + struct event_trigger_ops *ops,
> > + struct event_trigger_data *test,
> > + struct trace_event_file *file)
> > {
> > struct enable_trigger_data *test_enable_data = test->private_data;
> > struct enable_trigger_data *enable_data;
> > @@ -1346,6 +1348,8 @@ static void event_enable_unregister_trigger(char *glob,
> > list_for_each_entry_rcu(data, &file->triggers, list) {
> > enable_data = data->private_data;
> > if (enable_data &&
> > + (data->cmd_ops->trigger_type ==
> > + test->cmd_ops->trigger_type) &&
> > (enable_data->file == test_enable_data->file)) {
> > unregistered = true;
> > list_del_rcu(&data->list);
> > @@ -1365,8 +1369,12 @@ event_enable_get_trigger_ops(char *cmd, char *param)
> > struct event_trigger_ops *ops;
> > bool enable;
> >
> > +#ifdef CONFIG_HIST_TRIGGERS
> > + enable = ((strcmp(cmd, ENABLE_EVENT_STR) == 0) ||
> > + (strcmp(cmd, ENABLE_HIST_STR) == 0));
> > +#else
> > enable = strcmp(cmd, ENABLE_EVENT_STR) == 0;
> > -
> > +#endif
> > if (enable)
> > ops = param ? &event_enable_count_trigger_ops :
> > &event_enable_trigger_ops;
> > @@ -1437,6 +1445,7 @@ __init int register_trigger_cmds(void)
> > register_trigger_snapshot_cmd();
> > register_trigger_stacktrace_cmd();
> > register_trigger_enable_disable_cmds();
> > + register_trigger_hist_enable_disable_cmds();
> > register_trigger_hist_cmd();
> >
> > return 0;
> >
>
>

Subject: Re: [PATCH v9 12/22] tracing: Add hist trigger support for pausing and continuing a trace

Hi Tom,

On 2015/07/17 2:22, Tom Zanussi wrote:
> Allow users to append 'pause' or 'continue' to an existing trigger in
> order to have it paused or to have a paused trace continue.
>
> This expands the hist trigger syntax from this:
> # echo hist:keys=xxx:vals=yyy:sort=zzz.descending \
> [ if filter] > event/trigger
>
> to this:
>
> # echo hist:keys=xxx:vals=yyy:sort=zzz.descending:pause or cont \
> [ if filter] > event/trigger

Since only one hist trigger can be set on an event, it seems
that we don't need keys for pause/cont/clear (e.g. hist:pause is enough).
Anyway, I've found some odd behavior.

[root@localhost tracing]# echo 'hist:keys=parent_pid' > events/sched/sched_process_fork/trigger
[root@localhost tracing]# echo 'hist:keys=common_pid:pause' > events/sched/sched_process_fork/trigger
[root@localhost tracing]# cat events/sched/sched_process_fork/trigger
hist:keys=parent_pid:vals=hitcount:sort=hitcount:size=2048 [paused]

So, the second "pause" command works even with different keys.
Moreover, I can remove the trigger with different keys.

[root@localhost tracing]# echo '!hist:keys=child_pid' > events/sched/sched_process_fork/trigger
[root@localhost tracing]# cat events/sched/sched_process_fork/trigger
# Available triggers:
# traceon traceoff snapshot stacktrace enable_event disable_event enable_hist disable_hist hist

Thank you,

>
> Signed-off-by: Tom Zanussi <[email protected]>
> ---
> kernel/trace/trace.c | 5 +++++
> kernel/trace/trace_events_hist.c | 26 +++++++++++++++++++++++---
> 2 files changed, 28 insertions(+), 3 deletions(-)
>
> diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
> index 5dd1fc4..547bbc8 100644
> --- a/kernel/trace/trace.c
> +++ b/kernel/trace/trace.c
> @@ -3791,6 +3791,7 @@ static const char readme_msg[] =
> "\t [:values=<field1[,field2,...]]\n"
> "\t [:sort=field1,field2,...]\n"
> "\t [:size=#entries]\n"
> + "\t [:pause][:continue]\n"
> "\t [if <filter>]\n\n"
> "\t When a matching event is hit, an entry is added to a hash\n"
> "\t table using the key(s) and value(s) named. Keys and values\n"
> @@ -3821,6 +3822,10 @@ static const char readme_msg[] =
> "\t on. The default if unspecified is 'hitcount' and the.\n"
> "\t default sort order is 'ascending'. To sort in the opposite\n"
> "\t direction, append .descending' to the sort key.\n\n"
> + "\t The 'pause' param can be used to pause an existing hist\n"
> + "\t trigger or to start a hist trigger but not log any events\n"
> + "\t until told to do so. 'continue' can be used to start or\n"
> + "\t restart a paused hist trigger.\n\n"
> #endif
> ;
>
> diff --git a/kernel/trace/trace_events_hist.c b/kernel/trace/trace_events_hist.c
> index 6bf224f..3ae58e7 100644
> --- a/kernel/trace/trace_events_hist.c
> +++ b/kernel/trace/trace_events_hist.c
> @@ -78,6 +78,8 @@ struct hist_trigger_attrs {
> char *keys_str;
> char *vals_str;
> char *sort_key_str;
> + bool pause;
> + bool cont;
> unsigned int map_bits;
> };
>
> @@ -184,6 +186,11 @@ static struct hist_trigger_attrs *parse_hist_trigger_attrs(char *trigger_str)
> attrs->vals_str = kstrdup(str, GFP_KERNEL);
> else if (!strncmp(str, "sort", strlen("sort")))
> attrs->sort_key_str = kstrdup(str, GFP_KERNEL);
> + else if (!strncmp(str, "pause", strlen("pause")))
> + attrs->pause = true;
> + else if (!strncmp(str, "continue", strlen("continue")) ||
> + !strncmp(str, "cont", strlen("cont")))
> + attrs->cont = true;
> else if (!strncmp(str, "size", strlen("size"))) {
> int map_bits = parse_map_size(str);
>
> @@ -843,7 +850,10 @@ static int event_hist_trigger_print(struct seq_file *m,
> if (data->filter_str)
> seq_printf(m, " if %s", data->filter_str);
>
> - seq_puts(m, " [active]");
> + if (data->paused)
> + seq_puts(m, " [paused]");
> + else
> + seq_puts(m, " [active]");
>
> seq_putc(m, '\n');
>
> @@ -882,16 +892,25 @@ static int hist_register_trigger(char *glob, struct event_trigger_ops *ops,
> struct event_trigger_data *data,
> struct trace_event_file *file)
> {
> + struct hist_trigger_data *hist_data = data->private_data;
> struct event_trigger_data *test;
> int ret = 0;
>
> list_for_each_entry_rcu(test, &file->triggers, list) {
> if (test->cmd_ops->trigger_type == ETT_EVENT_HIST) {
> - ret = -EEXIST;
> + if (hist_data->attrs->pause)
> + test->paused = true;
> + else if (hist_data->attrs->cont)
> + test->paused = false;
> + else
> + ret = -EEXIST;
> goto out;
> }
> }
>
> + if (hist_data->attrs->pause)
> + data->paused = true;
> +
> if (data->ops->init) {
> ret = data->ops->init(data->ops, data);
> if (ret < 0)
> @@ -984,7 +1003,8 @@ static int event_hist_trigger_func(struct event_command *cmd_ops,
> * triggers registered a failure too.
> */
> if (!ret) {
> - ret = -ENOENT;
> + if (!(attrs->pause || attrs->cont))
> + ret = -ENOENT;
> goto out_free;
> } else if (ret < 0)
> goto out_free;
>


--
Masami HIRAMATSU
Linux Technology Research Center, System Productivity Research Dept.
Center for Technology Innovation - Systems Engineering
Hitachi, Ltd., Research & Development Group
E-mail: [email protected]

Subject: Re: [PATCH v9 13/22] tracing: Add hist trigger support for clearing a trace

On 2015/07/17 2:22, Tom Zanussi wrote:
> Allow users to append 'clear' to an existing trigger in order to have
> the hash table cleared.
>
> This expands the hist trigger syntax from this:
> # echo hist:keys=xxx:vals=yyy:sort=zzz.descending:pause/cont \
> [ if filter] > event/trigger
>
> to this:
>
> # echo hist:keys=xxx:vals=yyy:sort=zzz.descending:pause/cont/clear \
> [ if filter] > event/trigger

By the way, since pause/cont/clear are not triggers but commands
(which are executed immediately), I think they should be handled by
the write fops of the "hist" special file.

e.g. to clear the histogram, write 0 to hist

# echo 0 > event/hist

And pause/cont would be 1 and 2.

# echo 1 > event/hist <- pause
and
# echo 2 > event/hist <- continue

What would you think?
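
Just to illustrate the idea, such a write handler might look roughly
like this (hypothetical sketch only, not part of the posted patches;
the trigger lookup and the hist_clear/pause plumbing are elided):

	static ssize_t event_hist_write(struct file *filp,
					const char __user *ubuf,
					size_t cnt, loff_t *ppos)
	{
		unsigned long val;
		int ret;

		ret = kstrtoul_from_user(ubuf, cnt, 10, &val);
		if (ret)
			return ret;

		switch (val) {
		case 0:	/* clear the histogram */
		case 1:	/* pause */
		case 2:	/* continue */
			/* find the hist trigger on this event and act on it */
			break;
		default:
			return -EINVAL;
		}

		*ppos += cnt;
		return cnt;
	}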

Thanks,


>
> Signed-off-by: Tom Zanussi <[email protected]>
> ---
> kernel/trace/trace.c | 4 +++-
> kernel/trace/trace_events_hist.c | 25 ++++++++++++++++++++++++-
> 2 files changed, 27 insertions(+), 2 deletions(-)
>
> diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
> index 547bbc8..27daa28 100644
> --- a/kernel/trace/trace.c
> +++ b/kernel/trace/trace.c
> @@ -3791,7 +3791,7 @@ static const char readme_msg[] =
> "\t [:values=<field1[,field2,...]]\n"
> "\t [:sort=field1,field2,...]\n"
> "\t [:size=#entries]\n"
> - "\t [:pause][:continue]\n"
> + "\t [:pause][:continue][:clear]\n"
> "\t [if <filter>]\n\n"
> "\t When a matching event is hit, an entry is added to a hash\n"
> "\t table using the key(s) and value(s) named. Keys and values\n"
> @@ -3826,6 +3826,8 @@ static const char readme_msg[] =
> "\t trigger or to start a hist trigger but not log any events\n"
> "\t until told to do so. 'continue' can be used to start or\n"
> "\t restart a paused hist trigger.\n\n"
> + "\t The 'clear' param will clear the contents of a running hist\n"
> + "\t trigger and leave its current paused/active state.\n\n"
> #endif
> ;
>
> diff --git a/kernel/trace/trace_events_hist.c b/kernel/trace/trace_events_hist.c
> index 3ae58e7..d8259fe 100644
> --- a/kernel/trace/trace_events_hist.c
> +++ b/kernel/trace/trace_events_hist.c
> @@ -80,6 +80,7 @@ struct hist_trigger_attrs {
> char *sort_key_str;
> bool pause;
> bool cont;
> + bool clear;
> unsigned int map_bits;
> };
>
> @@ -188,6 +189,8 @@ static struct hist_trigger_attrs *parse_hist_trigger_attrs(char *trigger_str)
> attrs->sort_key_str = kstrdup(str, GFP_KERNEL);
> else if (!strncmp(str, "pause", strlen("pause")))
> attrs->pause = true;
> + else if (!strncmp(str, "clear", strlen("clear")))
> + attrs->clear = true;
> else if (!strncmp(str, "continue", strlen("continue")) ||
> !strncmp(str, "cont", strlen("cont")))
> attrs->cont = true;
> @@ -888,6 +891,24 @@ static struct event_trigger_ops *event_hist_get_trigger_ops(char *cmd,
> return &event_hist_trigger_ops;
> }
>
> +static void hist_clear(struct event_trigger_data *data)
> +{
> + struct hist_trigger_data *hist_data = data->private_data;
> + bool paused;
> +
> + paused = data->paused;
> + data->paused = true;
> +
> + synchronize_sched();
> +
> + tracing_map_clear(hist_data->map);
> +
> + atomic64_set(&hist_data->total_hits, 0);
> + atomic64_set(&hist_data->drops, 0);
> +
> + data->paused = paused;
> +}
> +
> static int hist_register_trigger(char *glob, struct event_trigger_ops *ops,
> struct event_trigger_data *data,
> struct trace_event_file *file)
> @@ -902,6 +923,8 @@ static int hist_register_trigger(char *glob, struct event_trigger_ops *ops,
> test->paused = true;
> else if (hist_data->attrs->cont)
> test->paused = false;
> + else if (hist_data->attrs->clear)
> + hist_clear(test);
> else
> ret = -EEXIST;
> goto out;
> @@ -1003,7 +1026,7 @@ static int event_hist_trigger_func(struct event_command *cmd_ops,
> * triggers registered a failure too.
> */
> if (!ret) {
> - if (!(attrs->pause || attrs->cont))
> + if (!(attrs->pause || attrs->cont || attrs->clear))
> ret = -ENOENT;
> goto out_free;
> } else if (ret < 0)
>


--
Masami HIRAMATSU
Linux Technology Research Center, System Productivity Research Dept.
Center for Technology Innovation - Systems Engineering
Hitachi, Ltd., Research & Development Group
E-mail: [email protected]

Subject: Re: [PATCH v9 21/22] tracing: Add enable_hist/disable_hist triggers

Hi Tom,

On 2015/07/22 1:10, Tom Zanussi wrote:
> Hi Masami,
>
> On Mon, 2015-07-20 at 23:57 +0900, Masami Hiramatsu wrote:
>> On 2015/07/17 2:22, Tom Zanussi wrote:
>>> Similar to enable_event/disable_event triggers, these triggers enable
>>> and disable the aggregation of events into maps rather than enabling
>>> and disabling their writing into the trace buffer.
>>>
>>> They can be used to automatically start and stop hist triggers based
>>> on a matching filter condition.
>>>
>>> If there's a paused hist trigger on system:event, the following would
>>> start it when the filter condition was hit:
>>>
>>> # echo enable_hist:system:event [ if filter] > event/trigger
>>>
>>> And the following would disable a running system:event hist trigger:
>>>
>>> # echo disable_hist:system:event [ if filter] > event/trigger
>>>
>>> See Documentation/trace/events.txt for real examples.
>>
>> Hmm, do we really need this? Since we've already had multiple instances,
>> if someone wants to make histogram separated from event logger, he/she can
>> make another instance for that, and disable/enable event itself.
>>
>> I'm considering if we accept this methods, we'll need to accept another
>> enable/disable triggers for each action too in the future.
>>
>
> OK, I haven't implemented multiple instances yet, but if I understand
> you correctly, what you're suggesting is that we can accomplish the same
> thing, by setting up a disabled histogram on system:event and then
> simply using the existing enable_event:system:event trigger to turn it
> on. Likewise the opposite to disable it.

At first I must apologize for the confusion: I forgot that the event
trigger is always enabled (activated) even if the event itself is
disabled. Also, by 'instance' I meant an ftrace instance, i.e. its
ring buffers and events, so the hist trigger already supports
instances.

e.g.
# mkdir instances/foo
# echo hist:key=common_pid > instances/foo/events/sched/sched_process_fork/trigger
# cat events/sched/sched_process_fork/trigger
# Available triggers:
# traceon traceoff snapshot stacktrace enable_event disable_event enable_hist disable_hist hist

So I was thinking that if the hist trigger were activated *only when*
the event is enabled, we would just need the enable/disable_event
triggers.

> I guess we need to add the histogram instance name to the syntax of
> enable_event:system:event to be able to make the distinction between a
> histogram and the current behavior of enabling logging.
>
> So here's what we currently have. This sets up a histogram and starts
> it running, and the user cats event/hist to get the results:
>
> # echo hist:keys=xxx > event1/trigger
> # cat event1/hist
>
> And separately the existing enable_event trigger, which enables event1
> (starts it logging to the event logger, and has nothing to do with
> histograms) when event2 is hit:
>
> # echo enable_event:system:event1 > event2/trigger
>
> So to extend enable_event to support histograms, we need to be able to
> do this, first set up a paused histogram:
>
> # echo hist:keys=xxx:pause > event1/trigger
> # cat event1/hist
>
> Which would be enabled via enable_event like this:
>
> # echo enable_event:system:event1:hist > event2/trigger

Thanks, but I agree that your current approach, enable_hist, is better
than this, since the hist trigger is a special case.

> Of course 'hist' refers to the initial single-instance histogram - if we
> had multiple instances, 'hist' would be replaced by the instance name
> e.g. to set up two different histograms, each with a different filter:
>
> # echo hist:keys=xxx:pause if filter1 > event1/trigger
> # cat event1/hist
>
> # echo hist:keys=xxx:pause if filter2 > event1/trigger
> # cat event1/hist2

As I said, by 'instance' I meant an ftrace instance. However, it seems
that multiple histograms should each have a unique name (within the
same ftrace instance :)), so I'd expect the following:

# echo hist:name=foo:keys=xxx if filter1 > event1/trigger
# echo hist:name=bar:keys=yyy if filter2 > event1/trigger

And hist file shows all histograms on the event.

# cat event1/hist
# trigger info: hist:name=foo:keys=xxx:vals=hitcount:sort=hitcount:size=2048 [active]
....
# trigger info: hist:name=bar:keys=xxx:vals=hitcount:sort=hitcount:size=2048 [active]
....

And if we reuse an instance on another event,

# echo hist:name=foo:keys=xxx if filter3 > event2/trigger

Of course we cannot mix a single key and compound keys on the same
event, nor different modifiers or different types (e.g. string vs.
numeric).

Thank you,

>
> To enable the first histogram when event2 is hit:
>
> # echo enable_event:system:event1:hist > event2/trigger
>
> And to enable the second histogram when event2 is hit:
>
> # echo enable_event:system:event1:hist2 > event2/trigger
>
> Does that align with what you were thinking regarding both instances and
> the enable/disable_event triggers? If not, some more explanation and
> examples would help ;-)
>
> Thanks,
>
> Tom
>
>> Thank you,
>>
>>>
>>> Signed-off-by: Tom Zanussi <[email protected]>
>>> ---
>>> include/linux/trace_events.h | 1 +
>>> kernel/trace/trace.c | 11 ++++
>>> kernel/trace/trace.h | 32 ++++++++++
>>> kernel/trace/trace_events_hist.c | 115 ++++++++++++++++++++++++++++++++++++
>>> kernel/trace/trace_events_trigger.c | 71 ++++++++++++----------
>>> 5 files changed, 199 insertions(+), 31 deletions(-)
>>>
>>> diff --git a/include/linux/trace_events.h b/include/linux/trace_events.h
>>> index 0faf48b..0f3ffdd 100644
>>> --- a/include/linux/trace_events.h
>>> +++ b/include/linux/trace_events.h
>>> @@ -411,6 +411,7 @@ enum event_trigger_type {
>>> ETT_STACKTRACE = (1 << 2),
>>> ETT_EVENT_ENABLE = (1 << 3),
>>> ETT_EVENT_HIST = (1 << 4),
>>> + ETT_HIST_ENABLE = (1 << 5),
>>> };
>>>
>>> extern int filter_match_preds(struct event_filter *filter, void *rec);
>>> diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
>>> index 16c64a2..c581750 100644
>>> --- a/kernel/trace/trace.c
>>> +++ b/kernel/trace/trace.c
>>> @@ -3761,6 +3761,10 @@ static const char readme_msg[] =
>>> "\t trigger: traceon, traceoff\n"
>>> "\t enable_event:<system>:<event>\n"
>>> "\t disable_event:<system>:<event>\n"
>>> +#ifdef CONFIG_HIST_TRIGGERS
>>> + "\t enable_hist:<system>:<event>\n"
>>> + "\t disable_hist:<system>:<event>\n"
>>> +#endif
>>> #ifdef CONFIG_STACKTRACE
>>> "\t\t stacktrace\n"
>>> #endif
>>> @@ -3836,6 +3840,13 @@ static const char readme_msg[] =
>>> "\t restart a paused hist trigger.\n\n"
>>> "\t The 'clear' param will clear the contents of a running hist\n"
>>> "\t trigger and leave its current paused/active state.\n\n"
>>> + "\t The enable_hist and disable_hist triggers can be used to\n"
>>> + "\t have one event conditionally start and stop another event's\n"
>>> + "\t already-attached hist trigger. Any number of enable_hist\n"
>>> + "\t and disable_hist triggers can be attached to a given event,\n"
>>> + "\t allowing that event to kick off and stop aggregations on\n"
>>> + "\t a host of other events. See Documentation/trace/events.txt\n"
>>> + "\t for examples.\n"
>>> #endif
>>> ;
>>>
>>> diff --git a/kernel/trace/trace.h b/kernel/trace/trace.h
>>> index e6cb781..5e2e3b0 100644
>>> --- a/kernel/trace/trace.h
>>> +++ b/kernel/trace/trace.h
>>> @@ -1102,8 +1102,10 @@ extern const struct file_operations event_hist_fops;
>>>
>>> #ifdef CONFIG_HIST_TRIGGERS
>>> extern int register_trigger_hist_cmd(void);
>>> +extern int register_trigger_hist_enable_disable_cmds(void);
>>> #else
>>> static inline int register_trigger_hist_cmd(void) { return 0; }
>>> +static inline int register_trigger_hist_enable_disable_cmds(void) { return 0; }
>>> #endif
>>>
>>> extern int register_trigger_cmds(void);
>>> @@ -1121,6 +1123,34 @@ struct event_trigger_data {
>>> struct list_head list;
>>> };
>>>
>>> +/* Avoid typos */
>>> +#define ENABLE_EVENT_STR "enable_event"
>>> +#define DISABLE_EVENT_STR "disable_event"
>>> +#define ENABLE_HIST_STR "enable_hist"
>>> +#define DISABLE_HIST_STR "disable_hist"
>>> +
>>> +struct enable_trigger_data {
>>> + struct trace_event_file *file;
>>> + bool enable;
>>> + bool hist;
>>> +};
>>> +
>>> +extern int event_enable_trigger_print(struct seq_file *m,
>>> + struct event_trigger_ops *ops,
>>> + struct event_trigger_data *data);
>>> +extern void event_enable_trigger_free(struct event_trigger_ops *ops,
>>> + struct event_trigger_data *data);
>>> +extern int event_enable_trigger_func(struct event_command *cmd_ops,
>>> + struct trace_event_file *file,
>>> + char *glob, char *cmd, char *param);
>>> +extern int event_enable_register_trigger(char *glob,
>>> + struct event_trigger_ops *ops,
>>> + struct event_trigger_data *data,
>>> + struct trace_event_file *file);
>>> +extern void event_enable_unregister_trigger(char *glob,
>>> + struct event_trigger_ops *ops,
>>> + struct event_trigger_data *test,
>>> + struct trace_event_file *file);
>>> extern void trigger_data_free(struct event_trigger_data *data);
>>> extern int event_trigger_init(struct event_trigger_ops *ops,
>>> struct event_trigger_data *data);
>>> @@ -1134,6 +1164,8 @@ extern int set_trigger_filter(char *filter_str,
>>> struct event_trigger_data *trigger_data,
>>> struct trace_event_file *file);
>>> extern int register_event_command(struct event_command *cmd);
>>> +extern int unregister_event_command(struct event_command *cmd);
>>> +extern int register_trigger_hist_enable_disable_cmds(void);
>>>
>>> /**
>>> * struct event_trigger_ops - callbacks for trace event triggers
>>> diff --git a/kernel/trace/trace_events_hist.c b/kernel/trace/trace_events_hist.c
>>> index 4ba7645..6a43611 100644
>>> --- a/kernel/trace/trace_events_hist.c
>>> +++ b/kernel/trace/trace_events_hist.c
>>> @@ -1345,3 +1345,118 @@ __init int register_trigger_hist_cmd(void)
>>>
>>> return ret;
>>> }
>>> +
>>> +static void
>>> +hist_enable_trigger(struct event_trigger_data *data, void *rec)
>>> +{
>>> + struct enable_trigger_data *enable_data = data->private_data;
>>> + struct event_trigger_data *test;
>>> +
>>> + list_for_each_entry_rcu(test, &enable_data->file->triggers, list) {
>>> + if (test->cmd_ops->trigger_type == ETT_EVENT_HIST) {
>>> + if (enable_data->enable)
>>> + test->paused = false;
>>> + else
>>> + test->paused = true;
>>> + break;
>>> + }
>>> + }
>>> +}
>>> +
>>> +static void
>>> +hist_enable_count_trigger(struct event_trigger_data *data, void *rec)
>>> +{
>>> + if (!data->count)
>>> + return;
>>> +
>>> + if (data->count != -1)
>>> + (data->count)--;
>>> +
>>> + hist_enable_trigger(data, rec);
>>> +}
>>> +
>>> +static struct event_trigger_ops hist_enable_trigger_ops = {
>>> + .func = hist_enable_trigger,
>>> + .print = event_enable_trigger_print,
>>> + .init = event_trigger_init,
>>> + .free = event_enable_trigger_free,
>>> +};
>>> +
>>> +static struct event_trigger_ops hist_enable_count_trigger_ops = {
>>> + .func = hist_enable_count_trigger,
>>> + .print = event_enable_trigger_print,
>>> + .init = event_trigger_init,
>>> + .free = event_enable_trigger_free,
>>> +};
>>> +
>>> +static struct event_trigger_ops hist_disable_trigger_ops = {
>>> + .func = hist_enable_trigger,
>>> + .print = event_enable_trigger_print,
>>> + .init = event_trigger_init,
>>> + .free = event_enable_trigger_free,
>>> +};
>>> +
>>> +static struct event_trigger_ops hist_disable_count_trigger_ops = {
>>> + .func = hist_enable_count_trigger,
>>> + .print = event_enable_trigger_print,
>>> + .init = event_trigger_init,
>>> + .free = event_enable_trigger_free,
>>> +};
>>> +
>>> +static struct event_trigger_ops *
>>> +hist_enable_get_trigger_ops(char *cmd, char *param)
>>> +{
>>> + struct event_trigger_ops *ops;
>>> + bool enable;
>>> +
>>> + enable = (strcmp(cmd, ENABLE_HIST_STR) == 0);
>>> +
>>> + if (enable)
>>> + ops = param ? &hist_enable_count_trigger_ops :
>>> + &hist_enable_trigger_ops;
>>> + else
>>> + ops = param ? &hist_disable_count_trigger_ops :
>>> + &hist_disable_trigger_ops;
>>> +
>>> + return ops;
>>> +}
>>> +
>>> +static struct event_command trigger_hist_enable_cmd = {
>>> + .name = ENABLE_HIST_STR,
>>> + .trigger_type = ETT_HIST_ENABLE,
>>> + .func = event_enable_trigger_func,
>>> + .reg = event_enable_register_trigger,
>>> + .unreg = event_enable_unregister_trigger,
>>> + .get_trigger_ops = hist_enable_get_trigger_ops,
>>> + .set_filter = set_trigger_filter,
>>> +};
>>> +
>>> +static struct event_command trigger_hist_disable_cmd = {
>>> + .name = DISABLE_HIST_STR,
>>> + .trigger_type = ETT_HIST_ENABLE,
>>> + .func = event_enable_trigger_func,
>>> + .reg = event_enable_register_trigger,
>>> + .unreg = event_enable_unregister_trigger,
>>> + .get_trigger_ops = hist_enable_get_trigger_ops,
>>> + .set_filter = set_trigger_filter,
>>> +};
>>> +
>>> +static __init void unregister_trigger_hist_enable_disable_cmds(void)
>>> +{
>>> + unregister_event_command(&trigger_hist_enable_cmd);
>>> + unregister_event_command(&trigger_hist_disable_cmd);
>>> +}
>>> +
>>> +__init int register_trigger_hist_enable_disable_cmds(void)
>>> +{
>>> + int ret;
>>> +
>>> + ret = register_event_command(&trigger_hist_enable_cmd);
>>> + if (WARN_ON(ret < 0))
>>> + return ret;
>>> + ret = register_event_command(&trigger_hist_disable_cmd);
>>> + if (WARN_ON(ret < 0))
>>> + unregister_trigger_hist_enable_disable_cmds();
>>> +
>>> + return ret;
>>> +}
>>> diff --git a/kernel/trace/trace_events_trigger.c b/kernel/trace/trace_events_trigger.c
>>> index e80f30b..9490d8f 100644
>>> --- a/kernel/trace/trace_events_trigger.c
>>> +++ b/kernel/trace/trace_events_trigger.c
>>> @@ -338,7 +338,7 @@ __init int register_event_command(struct event_command *cmd)
>>> * Currently we only unregister event commands from __init, so mark
>>> * this __init too.
>>> */
>>> -static __init int unregister_event_command(struct event_command *cmd)
>>> +__init int unregister_event_command(struct event_command *cmd)
>>> {
>>> struct event_command *p, *n;
>>> int ret = -ENODEV;
>>> @@ -1052,15 +1052,6 @@ static __init void unregister_trigger_traceon_traceoff_cmds(void)
>>> unregister_event_command(&trigger_traceoff_cmd);
>>> }
>>>
>>> -/* Avoid typos */
>>> -#define ENABLE_EVENT_STR "enable_event"
>>> -#define DISABLE_EVENT_STR "disable_event"
>>> -
>>> -struct enable_trigger_data {
>>> - struct trace_event_file *file;
>>> - bool enable;
>>> -};
>>> -
>>> static void
>>> event_enable_trigger(struct event_trigger_data *data, void *rec)
>>> {
>>> @@ -1090,14 +1081,16 @@ event_enable_count_trigger(struct event_trigger_data *data, void *rec)
>>> event_enable_trigger(data, rec);
>>> }
>>>
>>> -static int
>>> -event_enable_trigger_print(struct seq_file *m, struct event_trigger_ops *ops,
>>> - struct event_trigger_data *data)
>>> +int event_enable_trigger_print(struct seq_file *m,
>>> + struct event_trigger_ops *ops,
>>> + struct event_trigger_data *data)
>>> {
>>> struct enable_trigger_data *enable_data = data->private_data;
>>>
>>> seq_printf(m, "%s:%s:%s",
>>> - enable_data->enable ? ENABLE_EVENT_STR : DISABLE_EVENT_STR,
>>> + enable_data->hist ?
>>> + (enable_data->enable ? ENABLE_HIST_STR : DISABLE_HIST_STR) :
>>> + (enable_data->enable ? ENABLE_EVENT_STR : DISABLE_EVENT_STR),
>>> enable_data->file->event_call->class->system,
>>> trace_event_name(enable_data->file->event_call));
>>>
>>> @@ -1114,9 +1107,8 @@ event_enable_trigger_print(struct seq_file *m, struct event_trigger_ops *ops,
>>> return 0;
>>> }
>>>
>>> -static void
>>> -event_enable_trigger_free(struct event_trigger_ops *ops,
>>> - struct event_trigger_data *data)
>>> +void event_enable_trigger_free(struct event_trigger_ops *ops,
>>> + struct event_trigger_data *data)
>>> {
>>> struct enable_trigger_data *enable_data = data->private_data;
>>>
>>> @@ -1161,10 +1153,9 @@ static struct event_trigger_ops event_disable_count_trigger_ops = {
>>> .free = event_enable_trigger_free,
>>> };
>>>
>>> -static int
>>> -event_enable_trigger_func(struct event_command *cmd_ops,
>>> - struct trace_event_file *file,
>>> - char *glob, char *cmd, char *param)
>>> +int event_enable_trigger_func(struct event_command *cmd_ops,
>>> + struct trace_event_file *file,
>>> + char *glob, char *cmd, char *param)
>>> {
>>> struct trace_event_file *event_enable_file;
>>> struct enable_trigger_data *enable_data;
>>> @@ -1173,6 +1164,7 @@ event_enable_trigger_func(struct event_command *cmd_ops,
>>> struct trace_array *tr = file->tr;
>>> const char *system;
>>> const char *event;
>>> + bool hist = false;
>>> char *trigger;
>>> char *number;
>>> bool enable;
>>> @@ -1197,8 +1189,15 @@ event_enable_trigger_func(struct event_command *cmd_ops,
>>> if (!event_enable_file)
>>> goto out;
>>>
>>> - enable = strcmp(cmd, ENABLE_EVENT_STR) == 0;
>>> +#ifdef CONFIG_HIST_TRIGGERS
>>> + hist = ((strcmp(cmd, ENABLE_HIST_STR) == 0) ||
>>> + (strcmp(cmd, DISABLE_HIST_STR) == 0));
>>>
>>> + enable = ((strcmp(cmd, ENABLE_EVENT_STR) == 0) ||
>>> + (strcmp(cmd, ENABLE_HIST_STR) == 0));
>>> +#else
>>> + enable = strcmp(cmd, ENABLE_EVENT_STR) == 0;
>>> +#endif
>>> trigger_ops = cmd_ops->get_trigger_ops(cmd, trigger);
>>>
>>> ret = -ENOMEM;
>>> @@ -1218,6 +1217,7 @@ event_enable_trigger_func(struct event_command *cmd_ops,
>>> INIT_LIST_HEAD(&trigger_data->list);
>>> RCU_INIT_POINTER(trigger_data->filter, NULL);
>>>
>>> + enable_data->hist = hist;
>>> enable_data->enable = enable;
>>> enable_data->file = event_enable_file;
>>> trigger_data->private_data = enable_data;
>>> @@ -1295,10 +1295,10 @@ event_enable_trigger_func(struct event_command *cmd_ops,
>>> goto out;
>>> }
>>>
>>> -static int event_enable_register_trigger(char *glob,
>>> - struct event_trigger_ops *ops,
>>> - struct event_trigger_data *data,
>>> - struct trace_event_file *file)
>>> +int event_enable_register_trigger(char *glob,
>>> + struct event_trigger_ops *ops,
>>> + struct event_trigger_data *data,
>>> + struct trace_event_file *file)
>>> {
>>> struct enable_trigger_data *enable_data = data->private_data;
>>> struct enable_trigger_data *test_enable_data;
>>> @@ -1308,6 +1308,8 @@ static int event_enable_register_trigger(char *glob,
>>> list_for_each_entry_rcu(test, &file->triggers, list) {
>>> test_enable_data = test->private_data;
>>> if (test_enable_data &&
>>> + (test->cmd_ops->trigger_type ==
>>> + data->cmd_ops->trigger_type) &&
>>> (test_enable_data->file == enable_data->file)) {
>>> ret = -EEXIST;
>>> goto out;
>>> @@ -1333,10 +1335,10 @@ out:
>>> return ret;
>>> }
>>>
>>> -static void event_enable_unregister_trigger(char *glob,
>>> - struct event_trigger_ops *ops,
>>> - struct event_trigger_data *test,
>>> - struct trace_event_file *file)
>>> +void event_enable_unregister_trigger(char *glob,
>>> + struct event_trigger_ops *ops,
>>> + struct event_trigger_data *test,
>>> + struct trace_event_file *file)
>>> {
>>> struct enable_trigger_data *test_enable_data = test->private_data;
>>> struct enable_trigger_data *enable_data;
>>> @@ -1346,6 +1348,8 @@ static void event_enable_unregister_trigger(char *glob,
>>> list_for_each_entry_rcu(data, &file->triggers, list) {
>>> enable_data = data->private_data;
>>> if (enable_data &&
>>> + (data->cmd_ops->trigger_type ==
>>> + test->cmd_ops->trigger_type) &&
>>> (enable_data->file == test_enable_data->file)) {
>>> unregistered = true;
>>> list_del_rcu(&data->list);
>>> @@ -1365,8 +1369,12 @@ event_enable_get_trigger_ops(char *cmd, char *param)
>>> struct event_trigger_ops *ops;
>>> bool enable;
>>>
>>> +#ifdef CONFIG_HIST_TRIGGERS
>>> + enable = ((strcmp(cmd, ENABLE_EVENT_STR) == 0) ||
>>> + (strcmp(cmd, ENABLE_HIST_STR) == 0));
>>> +#else
>>> enable = strcmp(cmd, ENABLE_EVENT_STR) == 0;
>>> -
>>> +#endif
>>> if (enable)
>>> ops = param ? &event_enable_count_trigger_ops :
>>> &event_enable_trigger_ops;
>>> @@ -1437,6 +1445,7 @@ __init int register_trigger_cmds(void)
>>> register_trigger_snapshot_cmd();
>>> register_trigger_stacktrace_cmd();
>>> register_trigger_enable_disable_cmds();
>>> + register_trigger_hist_enable_disable_cmds();
>>> register_trigger_hist_cmd();
>>>
>>> return 0;
>>>
>>
>>
>
>
>


--
Masami HIRAMATSU
Linux Technology Research Center, System Productivity Research Dept.
Center for Technology Innovation - Systems Engineering
Hitachi, Ltd., Research & Development Group
E-mail: [email protected]

2015-07-22 18:45:39

by Brendan Gregg

[permalink] [raw]
Subject: Re: [PATCH v9 00/22] tracing: 'hist' triggers

G'Day Tom,

On Thu, Jul 16, 2015 at 10:22 AM, Tom Zanussi
<[email protected]> wrote:
>
> This is v9 of the 'hist triggers' patchset.
>
[...]

I've browsed the functionality (sorry, catching up), and it looks like
this will solve a number of common problems. But it seems
tantalizingly close to solving a few more. These may already be on
your future todo list.

A) CPU stack profiling

Kernel stacktrace as a key will be hugely useful; is there a way to
enable this for a sampling profile? (e.g., what perf record -F 99 does).
I take CPU profiles daily, and would prefer to aggregate stacks
in-kernel. Also, I would like user stacktrace as a key (even if it's
just the hex).
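
For reference, the in-kernel aggregation half of this might look like
the following with the proposed syntax, assuming the series accepts
'stacktrace' as a hist key (a sketch, not a confirmed example; the
sampling part is the open question):

  # echo 'hist:keys=stacktrace:vals=hitcount:sort=hitcount' > \
      /sys/kernel/debug/tracing/events/sched/sched_switch/trigger
  # cat /sys/kernel/debug/tracing/events/sched/sched_switch/hist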

B) Key buckets

Eg, imagine:

echo 'hist:keys=common_pid.execname,count.log2:val=count' >
/sys/kernel/debug/tracing/events/syscalls/sys_enter_read/trigger

to get a log2 bucketized histogram of syscall read request size. Same
for any value where using the value as a key gets too verbose, and you
just want a rough look at the distribution. (Would make it more
readable if it could also be sorted by the log2 value.)

C) Latency as a bucket key

With kprobes, we could then have a log2 histogram of any function call
latency, collected efficiently. (There's already the function timers
in ftrace, which I'm using from function_graph with filters sets to
only match the target function.)

... Those are the other common use cases that the hist functionality
seemed suited for. Beyond that it gets more custom, and we can use eBPF.

Brendan

2015-07-22 20:18:40

by Tom Zanussi

[permalink] [raw]
Subject: Re: [PATCH v9 21/22] tracing: Add enable_hist/disable_hist triggers

Hi Masami,

On Wed, 2015-07-22 at 23:21 +0900, Masami Hiramatsu wrote:
> Hi Tom,
>
> On 2015/07/22 1:10, Tom Zanussi wrote:
> > Hi Masami,
> >
> > On Mon, 2015-07-20 at 23:57 +0900, Masami Hiramatsu wrote:
> >> On 2015/07/17 2:22, Tom Zanussi wrote:
> >>> Similar to enable_event/disable_event triggers, these triggers enable
> >>> and disable the aggregation of events into maps rather than enabling
> >>> and disabling their writing into the trace buffer.
> >>>
> >>> They can be used to automatically start and stop hist triggers based
> >>> on a matching filter condition.
> >>>
> >>> If there's a paused hist trigger on system:event, the following would
> >>> start it when the filter condition was hit:
> >>>
> >>> # echo enable_hist:system:event [ if filter] > event/trigger
> >>>
> >>> And the following would disable a running system:event hist trigger:
> >>>
> >>> # echo disable_hist:system:event [ if filter] > event/trigger
> >>>
> >>> See Documentation/trace/events.txt for real examples.
> >>
> >> Hmm, do we really need this? Since we already have multiple instances,
> >> if someone wants to keep a histogram separate from the event logger, he/she
> >> can make another instance for that, and disable/enable the event itself.
> >>
> >> I'm concerned that if we accept this method, we'll need to accept other
> >> enable/disable triggers for each action too in the future.
> >>
> >
> > OK, I haven't implemented multiple instances yet, but if I understand
> > you correctly, what you're suggesting is that we can accomplish the same
> > thing by setting up a disabled histogram on system:event and then
> > simply using the existing enable_event:system:event trigger to turn it
> > on. Likewise the opposite to disable it.
>
> At first I must apologize for being confused; I forgot that the event
> trigger is always enabled (activated) even if the event itself is disabled.
> And I meant an instance of ftrace, with its own ring buffers and events, so
> the hist trigger already supports instances.
>
> e.g.
> # mkdir instances/foo
> # echo hist:key=common_pid > instances/foo/events/sched/sched_process_fork/trigger
> # cat events/sched/sched_process_fork/trigger
> # Available triggers:
> # traceon traceoff snapshot stacktrace enable_event disable_event enable_hist disable_hist hist
>
> So I had thought that if the hist trigger were activated *only when* the
> event is enabled, we would just need the enable/disable_event trigger.
>

OK.

> > I guess we need to add the histogram instance name to the syntax of
> > enable_event:system:event to be able to make the distinction between a
> > histogram and the current behavior of enabling logging.
> >
> > So here's what we currently have. This sets up a histogram and starts
> > it running, and the user cats event/hist to get the results:
> >
> > # echo hist:keys=xxx > event1/trigger
> > # cat event1/hist
> >
> > And separately the existing enable_event trigger, which enables event1
> > (starts it logging to the event logger, and has nothing to do with
> > histograms) when event2 is hit:
> >
> > # echo enable_event:system:event1 > event2/trigger
> >
> > So to extend enable_event to support histograms, we need to be able to
> > do this, first set up a paused histogram:
> >
> > # echo hist:keys=xxx:pause > event1/trigger
> > # cat event1/hist
> >
> > Which would be enabled via enable_event like this:
> >
> > # echo enable_event:system:event1:hist > event2/trigger
>
> Thanks, but I agree that your current approach, enable_hist, is better than this,
> since the hist trigger is a special one.
>

OK, I'll keep it then.

> > Of course 'hist' refers to the initial single-instance histogram - if we
> > had multiple instances, 'hist' would be replaced by the instance name
> > e.g. to set up two different histograms, each with a different filter:
> >
> > # echo hist:keys=xxx:pause if filter1 > event1/trigger
> > # cat event1/hist
> >
> > # echo hist:keys=xxx:pause if filter2 > event1/trigger
> > # cat event1/hist2
>
> As I've said, I meant 'instance' as in an ftrace instance.
> However, it seems that multiple instances should each have a unique name
> (on the same ftrace instance :)), so I expect the following.
>
> # echo hist:name=foo:keys=xxx if filter1 > event1/trigger
> # echo hist:name=bar:keys=yyy if filter2 > event1/trigger
>

Right, and because we need to control each of these independently, e.g.
delete/pause/continue, we'll need to start paying attention to the key
and filter when performing those operations.

> And the hist file shows all histograms on the event.
>
> # cat event1/hist
> # trigger info: hist:name=foo:keys=xxx:vals=hitcount:sort=hitcount:size=2048 [active]
> ....
> # trigger info: hist:name=bar:keys=xxx:vals=hitcount:sort=hitcount:size=2048 [active]
> ....
>

That makes the implementation easier - just one hist file per event to
worry about.
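
A rough sketch of what that shared show path could look like (the
hist_trigger_show() helper is hypothetical; the iteration pattern is
the one the triggers above already use):

  static int event_hist_show(struct seq_file *m,
                             struct trace_event_file *event_file)
  {
          struct event_trigger_data *data;

          /* print each attached hist trigger: info line, then map contents */
          list_for_each_entry_rcu(data, &event_file->triggers, list) {
                  if (data->cmd_ops->trigger_type != ETT_EVENT_HIST)
                          continue;
                  hist_trigger_show(m, data); /* hypothetical per-trigger dump */
          }

          return 0;
  }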

> And if we reuse an instance on another event,
>
> # echo hist:name=foo:keys=xxx if filter3 > event2/trigger
>
> Of course we cannot mix a single key and compound keys on the same event,
> nor different modifiers or different types (e.g. string vs. numeric).
>

So in the case of reusing an instance of foo between multiple events, I
guess the hist file for each event shows the foo hist output?

Thanks,

Tom

> Thank you,
>
> >
> > To enable the first histogram when event2 is hit:
> >
> > # echo enable_event:system:event1:hist > event2/trigger
> >
> > And to enable the second histogram when event2 is hit:
> >
> > # echo enable_event:system:event1:hist2 > event2/trigger
> >
> > Does that align with what you were thinking regarding both instances and
> > the enable/disable_event triggers? If not, some more explanation and
> > examples would help ;-)
> >
> > Thanks,
> >
> > Tom
> >
> >> Thank you,
> >>
> >>>
> >>> Signed-off-by: Tom Zanussi <[email protected]>
> >>> ---
> >>> include/linux/trace_events.h | 1 +
> >>> kernel/trace/trace.c | 11 ++++
> >>> kernel/trace/trace.h | 32 ++++++++++
> >>> kernel/trace/trace_events_hist.c | 115 ++++++++++++++++++++++++++++++++++++
> >>> kernel/trace/trace_events_trigger.c | 71 ++++++++++++----------
> >>> 5 files changed, 199 insertions(+), 31 deletions(-)
> >>>
> >>> [...]
> >>
> >>
> >
> >
> >
>
>

2015-07-22 20:22:40

by Tom Zanussi

[permalink] [raw]
Subject: Re: [PATCH v9 12/22] tracing: Add hist trigger support for pausing and continuing a trace

Hi Masami,

On Wed, 2015-07-22 at 17:20 +0900, Masami Hiramatsu wrote:
> Hi Tom,
>
> On 2015/07/17 2:22, Tom Zanussi wrote:
> > Allow users to append 'pause' or 'continue' to an existing trigger in
> > order to have it paused or to have a paused trace continue.
> >
> > This expands the hist trigger syntax from this:
> > # echo hist:keys=xxx:vals=yyy:sort=zzz.descending \
> > [ if filter] > event/trigger
> >
> > to this:
> >
> > # echo hist:keys=xxx:vals=yyy:sort=zzz.descending:pause or cont \
> > [ if filter] > event/trigger
>
> Since only one hist trigger can be set on one event, it seems
> that we don't need keys for pause/cont/clear (e.g. hist:pause is enough).
> Anyway, I've found an odd behavior.

Right, because currently there is only one hist trigger per event, the
key is ignored, which also accounts for the 'odd' behavior.

But rather than saying it's expected and documenting it, I think the
conclusion from the other comments is that we'll be allowing multiple
hist triggers per event, and in that case, they need to be uniquely
identifiable by both key and filter.

>
> [root@localhost tracing]# echo 'hist:keys=parent_pid' > events/sched/sched_process_fork/trigger
> [root@localhost tracing]# echo 'hist:keys=common_pid:pause' > events/sched/sched_process_fork/trigger
> [root@localhost tracing]# cat events/sched/sched_process_fork/trigger
> hist:keys=parent_pid:vals=hitcount:sort=hitcount:size=2048 [paused]
>
> So, the second "pause" command can work with different keys.
> Moreover, I can remove it with different keys.
>

Right, this goes away once we have code that deals with multiple
histograms per event, which I'll go ahead and implement rather than
document the confusion...
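
As a rough sketch, matching a pause/cont/clear request against a
specific trigger might then compare key and filter strings (the helper
is hypothetical; attrs->keys_str and filter_str are the fields from the
patches quoted here):

  static bool hist_trigger_match(struct event_trigger_data *a,
                                 struct event_trigger_data *b)
  {
          struct hist_trigger_data *ha = a->private_data;
          struct hist_trigger_data *hb = b->private_data;

          /* same key specification... */
          if (strcmp(ha->attrs->keys_str, hb->attrs->keys_str) != 0)
                  return false;

          /* ...and the same filter, or no filter on either */
          if (!a->filter_str != !b->filter_str)
                  return false;
          if (a->filter_str && strcmp(a->filter_str, b->filter_str) != 0)
                  return false;

          return true;
  }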

Tom

> [root@localhost tracing]# echo '!hist:keys=child_pid' > events/sched/sched_process_fork/trigger
> [root@localhost tracing]# cat events/sched/sched_process_fork/trigger
> # Available triggers:
> # traceon traceoff snapshot stacktrace enable_event disable_event enable_hist disable_hist hist
>
> Thank you,
>
> >
> > Signed-off-by: Tom Zanussi <[email protected]>
> > ---
> > kernel/trace/trace.c | 5 +++++
> > kernel/trace/trace_events_hist.c | 26 +++++++++++++++++++++++---
> > 2 files changed, 28 insertions(+), 3 deletions(-)
> >
> > diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
> > index 5dd1fc4..547bbc8 100644
> > --- a/kernel/trace/trace.c
> > +++ b/kernel/trace/trace.c
> > @@ -3791,6 +3791,7 @@ static const char readme_msg[] =
> > "\t [:values=<field1[,field2,...]]\n"
> > "\t [:sort=field1,field2,...]\n"
> > "\t [:size=#entries]\n"
> > + "\t [:pause][:continue]\n"
> > "\t [if <filter>]\n\n"
> > "\t When a matching event is hit, an entry is added to a hash\n"
> > "\t table using the key(s) and value(s) named. Keys and values\n"
> > @@ -3821,6 +3822,10 @@ static const char readme_msg[] =
> > "\t on. The default if unspecified is 'hitcount' and the.\n"
> > "\t default sort order is 'ascending'. To sort in the opposite\n"
> > "\t direction, append .descending' to the sort key.\n\n"
> > + "\t The 'pause' param can be used to pause an existing hist\n"
> > + "\t trigger or to start a hist trigger but not log any events\n"
> > + "\t until told to do so. 'continue' can be used to start or\n"
> > + "\t restart a paused hist trigger.\n\n"
> > #endif
> > ;
> >
> > diff --git a/kernel/trace/trace_events_hist.c b/kernel/trace/trace_events_hist.c
> > index 6bf224f..3ae58e7 100644
> > --- a/kernel/trace/trace_events_hist.c
> > +++ b/kernel/trace/trace_events_hist.c
> > @@ -78,6 +78,8 @@ struct hist_trigger_attrs {
> > char *keys_str;
> > char *vals_str;
> > char *sort_key_str;
> > + bool pause;
> > + bool cont;
> > unsigned int map_bits;
> > };
> >
> > @@ -184,6 +186,11 @@ static struct hist_trigger_attrs *parse_hist_trigger_attrs(char *trigger_str)
> > attrs->vals_str = kstrdup(str, GFP_KERNEL);
> > else if (!strncmp(str, "sort", strlen("sort")))
> > attrs->sort_key_str = kstrdup(str, GFP_KERNEL);
> > + else if (!strncmp(str, "pause", strlen("pause")))
> > + attrs->pause = true;
> > + else if (!strncmp(str, "continue", strlen("continue")) ||
> > + !strncmp(str, "cont", strlen("cont")))
> > + attrs->cont = true;
> > else if (!strncmp(str, "size", strlen("size"))) {
> > int map_bits = parse_map_size(str);
> >
> > @@ -843,7 +850,10 @@ static int event_hist_trigger_print(struct seq_file *m,
> > if (data->filter_str)
> > seq_printf(m, " if %s", data->filter_str);
> >
> > - seq_puts(m, " [active]");
> > + if (data->paused)
> > + seq_puts(m, " [paused]");
> > + else
> > + seq_puts(m, " [active]");
> >
> > seq_putc(m, '\n');
> >
> > @@ -882,16 +892,25 @@ static int hist_register_trigger(char *glob, struct event_trigger_ops *ops,
> > struct event_trigger_data *data,
> > struct trace_event_file *file)
> > {
> > + struct hist_trigger_data *hist_data = data->private_data;
> > struct event_trigger_data *test;
> > int ret = 0;
> >
> > list_for_each_entry_rcu(test, &file->triggers, list) {
> > if (test->cmd_ops->trigger_type == ETT_EVENT_HIST) {
> > - ret = -EEXIST;
> > + if (hist_data->attrs->pause)
> > + test->paused = true;
> > + else if (hist_data->attrs->cont)
> > + test->paused = false;
> > + else
> > + ret = -EEXIST;
> > goto out;
> > }
> > }
> >
> > + if (hist_data->attrs->pause)
> > + data->paused = true;
> > +
> > if (data->ops->init) {
> > ret = data->ops->init(data->ops, data);
> > if (ret < 0)
> > @@ -984,7 +1003,8 @@ static int event_hist_trigger_func(struct event_command *cmd_ops,
> > * triggers registered a failure too.
> > */
> > if (!ret) {
> > - ret = -ENOENT;
> > + if (!(attrs->pause || attrs->cont))
> > + ret = -ENOENT;
> > goto out_free;
> > } else if (ret < 0)
> > goto out_free;
> >
>
>

2015-07-22 20:25:03

by Tom Zanussi

[permalink] [raw]
Subject: Re: [PATCH v9 13/22] tracing: Add hist trigger support for clearing a trace

On Wed, 2015-07-22 at 22:50 +0900, Masami Hiramatsu wrote:
> On 2015/07/17 2:22, Tom Zanussi wrote:
> > Allow users to append 'clear' to an existing trigger in order to have
> > the hash table cleared.
> >
> > This expands the hist trigger syntax from this:
> > # echo hist:keys=xxx:vals=yyy:sort=zzz.descending:pause/cont \
> > [ if filter] > event/trigger
> >
> > to this:
> >
> > # echo hist:keys=xxx:vals=yyy:sort=zzz.descending:pause/cont/clear \
> > [ if filter] > event/trigger
>
> By the way, since pause/cont/clear are not triggers but commands
> (which execute immediately), I think they should be handled by the write
> fops of the "hist" special file.
>
> e.g. to clear the histogram, write 0 to hist
>
> # echo 0 > event/hist
>
> And pause/cont will be 1 and 2.
>
> # echo 1 > event/hist <- pause
> and
> # echo 2 > event/hist <- continue
>
> What would you think?
>

I do think it makes sense, and I like the idea. But if we allow multiple
histograms per event, does it still work? (I mean in that case, we'd
like to pause/continue/clear them individually, right?)

Tom
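
For what it's worth, a minimal sketch of the suggested write handler,
under the assumption of a single trigger per event (the 0/1/2 encoding
is Masami's suggestion; the event_hist_write name and lookup step are
my own guesses, not code from the series):

  static ssize_t event_hist_write(struct file *filp, const char __user *ubuf,
                                  size_t cnt, loff_t *ppos)
  {
          unsigned long val;
          int ret;

          ret = kstrtoul_from_user(ubuf, cnt, 10, &val);
          if (ret)
                  return ret;

          switch (val) {
          case 0:         /* clear the hist map */
          case 1:         /* pause the hist trigger */
          case 2:         /* continue a paused hist trigger */
                  /* look up the event's hist trigger and apply the op */
                  break;
          default:
                  return -EINVAL;
          }

          return cnt;
  }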

> Thanks,
>
>
> >
> > Signed-off-by: Tom Zanussi <[email protected]>
> > ---
> > kernel/trace/trace.c | 4 +++-
> > kernel/trace/trace_events_hist.c | 25 ++++++++++++++++++++++++-
> > 2 files changed, 27 insertions(+), 2 deletions(-)
> >
> > diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
> > index 547bbc8..27daa28 100644
> > --- a/kernel/trace/trace.c
> > +++ b/kernel/trace/trace.c
> > @@ -3791,7 +3791,7 @@ static const char readme_msg[] =
> > "\t [:values=<field1[,field2,...]]\n"
> > "\t [:sort=field1,field2,...]\n"
> > "\t [:size=#entries]\n"
> > - "\t [:pause][:continue]\n"
> > + "\t [:pause][:continue][:clear]\n"
> > "\t [if <filter>]\n\n"
> > "\t When a matching event is hit, an entry is added to a hash\n"
> > "\t table using the key(s) and value(s) named. Keys and values\n"
> > @@ -3826,6 +3826,8 @@ static const char readme_msg[] =
> > "\t trigger or to start a hist trigger but not log any events\n"
> > "\t until told to do so. 'continue' can be used to start or\n"
> > "\t restart a paused hist trigger.\n\n"
> > + "\t The 'clear' param will clear the contents of a running hist\n"
> > + "\t trigger and leave its current paused/active state.\n\n"
> > #endif
> > ;
> >
> > diff --git a/kernel/trace/trace_events_hist.c b/kernel/trace/trace_events_hist.c
> > index 3ae58e7..d8259fe 100644
> > --- a/kernel/trace/trace_events_hist.c
> > +++ b/kernel/trace/trace_events_hist.c
> > @@ -80,6 +80,7 @@ struct hist_trigger_attrs {
> > char *sort_key_str;
> > bool pause;
> > bool cont;
> > + bool clear;
> > unsigned int map_bits;
> > };
> >
> > @@ -188,6 +189,8 @@ static struct hist_trigger_attrs *parse_hist_trigger_attrs(char *trigger_str)
> > attrs->sort_key_str = kstrdup(str, GFP_KERNEL);
> > else if (!strncmp(str, "pause", strlen("pause")))
> > attrs->pause = true;
> > + else if (!strncmp(str, "clear", strlen("clear")))
> > + attrs->clear = true;
> > else if (!strncmp(str, "continue", strlen("continue")) ||
> > !strncmp(str, "cont", strlen("cont")))
> > attrs->cont = true;
> > @@ -888,6 +891,24 @@ static struct event_trigger_ops *event_hist_get_trigger_ops(char *cmd,
> > return &event_hist_trigger_ops;
> > }
> >
> > +static void hist_clear(struct event_trigger_data *data)
> > +{
> > + struct hist_trigger_data *hist_data = data->private_data;
> > + bool paused;
> > +
> > + paused = data->paused;
> > + data->paused = true;
> > +
> > + synchronize_sched();
> > +
> > + tracing_map_clear(hist_data->map);
> > +
> > + atomic64_set(&hist_data->total_hits, 0);
> > + atomic64_set(&hist_data->drops, 0);
> > +
> > + data->paused = paused;
> > +}
> > +
> > static int hist_register_trigger(char *glob, struct event_trigger_ops *ops,
> > struct event_trigger_data *data,
> > struct trace_event_file *file)
> > @@ -902,6 +923,8 @@ static int hist_register_trigger(char *glob, struct event_trigger_ops *ops,
> > test->paused = true;
> > else if (hist_data->attrs->cont)
> > test->paused = false;
> > + else if (hist_data->attrs->clear)
> > + hist_clear(test);
> > else
> > ret = -EEXIST;
> > goto out;
> > @@ -1003,7 +1026,7 @@ static int event_hist_trigger_func(struct event_command *cmd_ops,
> > * triggers registered a failure too.
> > */
> > if (!ret) {
> > - if (!(attrs->pause || attrs->cont))
> > + if (!(attrs->pause || attrs->cont || attrs->clear))
> > ret = -ENOENT;
> > goto out_free;
> > } else if (ret < 0)
> >
>
>

2015-07-23 01:55:22

by Tom Zanussi

[permalink] [raw]
Subject: Re: [PATCH v9 00/22] tracing: 'hist' triggers

Hi Brendan,

On Wed, 2015-07-22 at 11:29 -0700, Brendan Gregg wrote:
> G'Day Tom,
>
> On Thu, Jul 16, 2015 at 10:22 AM, Tom Zanussi
> <[email protected]> wrote:
> >
> > This is v9 of the 'hist triggers' patchset.
> >
> [...]
>
> I've browsed the functionality (sorry, catching up), and it looks like
> this will solve a number of common problems. But it seems
> tantalizingly close to solving a few more. These may already be on
> your future todo list.
>
> A) CPU stack profiling
>
> Kernel stacktrace as a key will be hugely useful; is there a way to
> enable this for a sampling profile? (e.g., what perf record -F 99 does).
> I take CPU profiles daily, and would prefer to aggregate stacks
> in-kernel. Also, I would like user stacktrace as a key (even if it's
> just the hex).
>

This wasn't on my todo list but I can see how it would be useful. On
the list now. ;-)

> B) Key buckets
>
> Eg, imagine:
>
> echo 'hist:keys=common_pid.execname,count.log2:val=count' >
> /sys/kernel/debug/tracing/events/syscalls/sys_enter_read/trigger
>
> to get a log2 bucketized histogram of syscall read request size. Same
> for any value where using the value as a key gets too verbose, and you
> just want a rough look at the distribution. (Would make it more
> readable if it could also be sorted by the log2 value.)
>

This I actually had an early implementation of, which I plan on
reviving...
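
For illustration only, the core of such a modifier could be as small as
a log2 bucketing helper applied to the field value before it is used as
a map key (my sketch, not the early implementation mentioned above):

  #include <linux/log2.h>

  /* bucket a value by its integer log2; 0 stays in bucket 0 */
  static u64 hist_field_log2(u64 val)
  {
          return val ? (u64)ilog2(val) : 0;
  }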

> C) Latency as a bucket key
>
> With kprobes, we could then have a log2 histogram of any function call
> latency, collected efficiently. (There are already the function timers
> in ftrace, which I'm using from function_graph with filters set to
> only match the target function.)
>

My original thought for doing this kind of thing was to generalize the
concept of a 'difference'. I used it in v1 as a way to calculate the
difference between requested and allocated sizes for memory allocations,
which was kind of pointless, though convenient. The real value would of
course be in applying it to inter-event values rather than intra-event.
In this case it would be an inter-event difference between timestamps.

And in my previous patchset, I had a 'function_hist' tracer, similar to
and in fact based on the same code as function_graph, that would simply
aggregate hitcounts for every function call in the kernel, which indeed
was pretty efficient. So the pieces are or have been there to do
something like this, just a matter of putting them together.
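
Purely as an illustration of that inter-event difference (a toy sketch,
not code from any version of the series; a real implementation would
key the saved timestamp off a tracing_map rather than a flat per-pid
array):

  #include <linux/log2.h>
  #include <linux/trace_clock.h>

  #define TS_SLOTS 32768                  /* assumed pid space for the toy */
  static u64 start_ts[TS_SLOTS];

  static void on_start_event(pid_t pid)   /* e.g. function entry kprobe */
  {
          start_ts[pid % TS_SLOTS] = trace_clock_local();
  }

  static u64 on_end_event(pid_t pid)      /* e.g. function return kprobe */
  {
          u64 delta = trace_clock_local() - start_ts[pid % TS_SLOTS];

          return ilog2(delta ? delta : 1); /* log2 latency bucket */
  }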

> ... Those are the other common use cases that the hist functionality
> seemed suited for. Beyond that it gets more custom, and we can use eBPF.
>

Exactly.

Tom

> Brendan

by Masami Hiramatsu

[permalink] [raw]
Subject: Re: Re: [PATCH v9 12/22] tracing: Add hist trigger support for pausing and continuing a trace

On 2015/07/23 5:22, Tom Zanussi wrote:
> Hi Masami,
>
> On Wed, 2015-07-22 at 17:20 +0900, Masami Hiramatsu wrote:
>> Hi Tom,
>>
>> On 2015/07/17 2:22, Tom Zanussi wrote:
>>> Allow users to append 'pause' or 'continue' to an existing trigger in
>>> order to have it paused or to have a paused trace continue.
>>>
>>> This expands the hist trigger syntax from this:
>>> # echo hist:keys=xxx:vals=yyy:sort=zzz.descending \
>>> [ if filter] > event/trigger
>>>
>>> to this:
>>>
>>> # echo hist:keys=xxx:vals=yyy:sort=zzz.descending:pause or cont \
>>> [ if filter] > event/trigger
>>
>> Since only one hist trigger can be set on one event, it seems
>> that we don't need keys for pause/cont/clear (e.g. hist:pause is enough).
>> Anyway, I've found an odd behavior.
>
> Right, because currently there is only one hist trigger per event, the
> key is ignored, which also accounts for the 'odd' behavior.
>
> But rather than saying it's expected and documenting it, I think the
> conclusion from the other comments is that we'll be allowing multiple
> hist triggers per event, and in that case, they need to be uniquely
> identifiable by both key and filter.

Agreed.

BTW, as I wrote in another reply, the pause/cont/clear commands should be
for the "hist" file, not for the trigger. So, the commands below would be better.

# echo pause:name=foo > event/hist
# echo clear:keys=xxx > event/hist

Thank you,



>> [...]
>>
>> Thank you,
>>
>>>
>>> Signed-off-by: Tom Zanussi <[email protected]>
>>> ---
>>> kernel/trace/trace.c | 5 +++++
>>> kernel/trace/trace_events_hist.c | 26 +++++++++++++++++++++++---
>>> 2 files changed, 28 insertions(+), 3 deletions(-)
>>>
>>> [...]
>>
>>
>
>


--
Masami HIRAMATSU
Linux Technology Research Center, System Productivity Research Dept.
Center for Technology Innovation - Systems Engineering
Hitachi, Ltd., Research & Development Group
E-mail: [email protected]

2015-07-23 15:58:43

by Tom Zanussi

[permalink] [raw]
Subject: Re: Re: [PATCH v9 12/22] tracing: Add hist trigger support for pausing and continuing a trace

On Thu, 2015-07-23 at 23:06 +0900, Masami Hiramatsu wrote:
> On 2015/07/23 5:22, Tom Zanussi wrote:
> > Hi Masami,
> >
> > On Wed, 2015-07-22 at 17:20 +0900, Masami Hiramatsu wrote:
> >> Hi Tom,
> >>
> >> On 2015/07/17 2:22, Tom Zanussi wrote:
> >>> Allow users to append 'pause' or 'continue' to an existing trigger in
> >>> order to have it paused or to have a paused trace continue.
> >>>
> >>> This expands the hist trigger syntax from this:
> >>> # echo hist:keys=xxx:vals=yyy:sort=zzz.descending \
> >>> [ if filter] > event/trigger
> >>>
> >>> to this:
> >>>
> >>> # echo hist:keys=xxx:vals=yyy:sort=zzz.descending:pause or cont \
> >>> [ if filter] > event/trigger
> >>
> >> Since only one hist trigger can be set on one event, it seems
> >> that we don't need keys for pause/cont/clear (e.g. hist:pause is enough).
> >> Anyway, I've found an odd behavior.
> >
> > Right, because currently there is only one hist trigger per event, the
> > key is ignored, which also accounts for the 'odd' behavior.
> >
> > But rather than saying it's expected and documenting it, I think the
> > conclusion from the other comments is that we'll be allowing multiple
> > hist triggers per event, and in that case, they need to be uniquely
> > identifiable by both key and filter.
>
> Agreed.
>
> BTW, as I wrote in another reply, the pause/cont/clear commands should be
> for the "hist" file, not for the trigger. So, the commands below would be better.
>
> # echo pause:name=foo > event/hist
> # echo clear:keys=xxx > event/hist
>

Right, that all makes sense; I think we're on the same page with
everything now.

Thanks,

Tom

> Thank you,
>
>
>
> >> [...]
> >> Thank you,
> >>
> >>>
> >>> Signed-off-by: Tom Zanussi <[email protected]>
> >>> ---
> >>> kernel/trace/trace.c | 5 +++++
> >>> kernel/trace/trace_events_hist.c | 26 +++++++++++++++++++++++---
> >>> 2 files changed, 28 insertions(+), 3 deletions(-)
> >>>
> >>> diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
> >>> index 5dd1fc4..547bbc8 100644
> >>> --- a/kernel/trace/trace.c
> >>> +++ b/kernel/trace/trace.c
> >>> @@ -3791,6 +3791,7 @@ static const char readme_msg[] =
> >>> "\t [:values=<field1[,field2,...]]\n"
> >>> "\t [:sort=field1,field2,...]\n"
> >>> "\t [:size=#entries]\n"
> >>> + "\t [:pause][:continue]\n"
> >>> "\t [if <filter>]\n\n"
> >>> "\t When a matching event is hit, an entry is added to a hash\n"
> >>> "\t table using the key(s) and value(s) named. Keys and values\n"
> >>> @@ -3821,6 +3822,10 @@ static const char readme_msg[] =
> >>> "\t on. The default if unspecified is 'hitcount' and the.\n"
> >>> "\t default sort order is 'ascending'. To sort in the opposite\n"
> >>> "\t direction, append .descending' to the sort key.\n\n"
> >>> + "\t The 'pause' param can be used to pause an existing hist\n"
> >>> + "\t trigger or to start a hist trigger but not log any events\n"
> >>> + "\t until told to do so. 'continue' can be used to start or\n"
> >>> + "\t restart a paused hist trigger.\n\n"
> >>> #endif
> >>> ;
> >>>
> >>> diff --git a/kernel/trace/trace_events_hist.c b/kernel/trace/trace_events_hist.c
> >>> index 6bf224f..3ae58e7 100644
> >>> --- a/kernel/trace/trace_events_hist.c
> >>> +++ b/kernel/trace/trace_events_hist.c
> >>> @@ -78,6 +78,8 @@ struct hist_trigger_attrs {
> >>> char *keys_str;
> >>> char *vals_str;
> >>> char *sort_key_str;
> >>> + bool pause;
> >>> + bool cont;
> >>> unsigned int map_bits;
> >>> };
> >>>
> >>> @@ -184,6 +186,11 @@ static struct hist_trigger_attrs *parse_hist_trigger_attrs(char *trigger_str)
> >>> attrs->vals_str = kstrdup(str, GFP_KERNEL);
> >>> else if (!strncmp(str, "sort", strlen("sort")))
> >>> attrs->sort_key_str = kstrdup(str, GFP_KERNEL);
> >>> + else if (!strncmp(str, "pause", strlen("pause")))
> >>> + attrs->pause = true;
> >>> + else if (!strncmp(str, "continue", strlen("continue")) ||
> >>> + !strncmp(str, "cont", strlen("cont")))
> >>> + attrs->cont = true;
> >>> else if (!strncmp(str, "size", strlen("size"))) {
> >>> int map_bits = parse_map_size(str);
> >>>
> >>> @@ -843,7 +850,10 @@ static int event_hist_trigger_print(struct seq_file *m,
> >>> if (data->filter_str)
> >>> seq_printf(m, " if %s", data->filter_str);
> >>>
> >>> - seq_puts(m, " [active]");
> >>> + if (data->paused)
> >>> + seq_puts(m, " [paused]");
> >>> + else
> >>> + seq_puts(m, " [active]");
> >>>
> >>> seq_putc(m, '\n');
> >>>
> >>> @@ -882,16 +892,25 @@ static int hist_register_trigger(char *glob, struct event_trigger_ops *ops,
> >>> struct event_trigger_data *data,
> >>> struct trace_event_file *file)
> >>> {
> >>> + struct hist_trigger_data *hist_data = data->private_data;
> >>> struct event_trigger_data *test;
> >>> int ret = 0;
> >>>
> >>> list_for_each_entry_rcu(test, &file->triggers, list) {
> >>> if (test->cmd_ops->trigger_type == ETT_EVENT_HIST) {
> >>> - ret = -EEXIST;
> >>> + if (hist_data->attrs->pause)
> >>> + test->paused = true;
> >>> + else if (hist_data->attrs->cont)
> >>> + test->paused = false;
> >>> + else
> >>> + ret = -EEXIST;
> >>> goto out;
> >>> }
> >>> }
> >>>
> >>> + if (hist_data->attrs->pause)
> >>> + data->paused = true;
> >>> +
> >>> if (data->ops->init) {
> >>> ret = data->ops->init(data->ops, data);
> >>> if (ret < 0)
> >>> @@ -984,7 +1003,8 @@ static int event_hist_trigger_func(struct event_command *cmd_ops,
> >>> * triggers registered a failure too.
> >>> */
> >>> if (!ret) {
> >>> - ret = -ENOENT;
> >>> + if (!(attrs->pause || attrs->cont))
> >>> + ret = -ENOENT;
> >>> goto out_free;
> >>> } else if (ret < 0)
> >>> goto out_free;
> >>>
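
Taken together, the hunks above yield behavior along these lines (a
sketch; the sched_switch event is illustrative, and the cat output
follows the format shown earlier in the thread):

  # echo 'hist:keys=common_pid' > events/sched/sched_switch/trigger
  # echo 'hist:keys=common_pid:pause' > events/sched/sched_switch/trigger
  # cat events/sched/sched_switch/trigger
  hist:keys=common_pid:vals=hitcount:sort=hitcount:size=2048 [paused]
  # echo 'hist:keys=common_pid:cont' > events/sched/sched_switch/trigger
  # cat events/sched/sched_switch/trigger
  hist:keys=common_pid:vals=hitcount:sort=hitcount:size=2048 [active]

The second and third writes find the already-registered hist trigger in
hist_register_trigger() and simply flip its 'paused' flag instead of
returning -EEXIST; event_hist_trigger_func() then treats the resulting
zero registration count as success because pause/cont was specified.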
> >>
> >>
> >
> >
>
>