2008-07-09 15:03:19

by Mathieu Desnoyers

Subject: [patch 01/15] Kernel Tracepoints

Implementation of kernel tracepoints. Inspired by the Linux Kernel Markers.
Allows complete type verification. No format string required. See the
tracepoint Documentation and Samples patches for usage examples.
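
For illustration, here is a minimal usage sketch built from the macros this
patch introduces (the event name, prototype, probe and error handling below
are hypothetical, not taken from the patch):

/* In a kernel header: declare the tracepoint and its typed prototype. */
DEFINE_TRACE(subsys_eventname,
	TPPROTO(struct task_struct *p),
	TPARGS(p));

/* At the instrumentation site: a near no-op until a probe is connected. */
trace_subsys_eventname(p);

/* In a probe module: a callback matching the typed prototype exactly. */
static void probe_subsys_eventname(struct task_struct *p)
{
	/* record or print the event */
}

/* Type-checked (un)registration helpers generated by DEFINE_TRACE: */
ret = register_trace_subsys_eventname(probe_subsys_eventname);
unregister_trace_subsys_eventname(probe_subsys_eventname);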

Changelog:
- Use #name ":" #proto as the string identifying the tracepoint in the
tracepoint table. This makes sure no type mismatch happens due to
connection of a probe with the wrong type to a tracepoint declared with
the same name in a different header (see the example after this changelog).
- Add tracepoint_entry_free_old.
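
With the usage sketch above, the string stored in the tracepoint table is
"subsys_eventname:struct task_struct *p", so a probe registered against a
tracepoint of the same name but with a different prototype can never match
this entry.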

Masami Hiramatsu <[email protected]> :
Tested on x86-64.

Performance impact of a tracepoint: same as markers, except that it adds about
70 bytes of instructions in an unlikely branch of each instrumented function
(the for loop, the stack setup and the function call). It currently adds a
memory read, a test and a conditional branch at the instrumentation site (in
the hot path). Immediate values will eventually change this into a load
immediate, a test and a branch, which removes the memory read and thus makes
the i-cache impact smaller (replacing the memory read with a load immediate
saves 3-4 bytes per site on x86_32, depending on mov prefixes, or 7-8 bytes
on x86_64; it also saves the d-cache hit).
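
Concretely, the per-site hot path generated by DEFINE_TRACE boils down to the
following sketch (reusing the hypothetical event above; an approximation of
the macro expansion, not literal compiler output):

if (unlikely(__tracepoint_subsys_eventname.state)) {	/* read, test, branch */
	int i;
	void **funcs;

	/* unlikely branch: call every probe in the NULL-terminated array */
	preempt_disable();
	funcs = __tracepoint_subsys_eventname.funcs;
	smp_read_barrier_depends();
	if (funcs)
		for (i = 0; funcs[i]; i++)
			((void (*)(struct task_struct *))funcs[i])(p);
	preempt_enable();
}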

About the performance impact of tracepoints (which is comparable to markers),
even without immediate values optimizations, tests done by Hideo Aoki on ia64
show no regression. His test case used hackbench on a kernel where scheduler
instrumentation (about 5 events in core scheduler code) was added.


Quoting Hideo Aoki about Markers:

I evaluated overhead of kernel marker using linux-2.6-sched-fixes
git tree, which includes several markers for LTTng, using an ia64
server.

While the immediate trace mark feature isn't implemented on ia64,
there is no major performance regression. So, from the viewpoint of
performance impact, I think we have no issue with proposing that the
marker patches be merged into Linus's tree.

I prepared two kernels to evaluate. The first one was compiled
without CONFIG_MARKERS. The second one was compiled with
CONFIG_MARKERS enabled.

I downloaded the original hackbench from the following URL:
http://devresources.linux-foundation.org/craiger/hackbench/src/hackbench.c

I ran hackbench 5 times in each condition and calculated the
average and difference between the kernels.

The hackbench parameter: every multiple of 50 from 50 to 800
The number of CPUs of the server: 2, 4, and 8
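
(For reference, the diff columns below are computed as "with" minus "without"
for [Sec], and as (with - without) / without * 100 for [%]; e.g. for 50
processes on 2 CPUs: 4.872 - 4.811 = +0.061 sec, and +0.061 / 4.811 * 100 =
+1.27%.)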

Below are the results. As you can see, no major performance
regression was found in any case. Even as the number of processes
increases, the difference between the marker-enabled kernel and the
marker-disabled kernel does not grow. Moreover, as the number of CPUs
increases, the differences do not grow either.

Curiously, the marker-enabled kernel performs better than the
marker-disabled kernel in more than half of the cases, although I guess
this comes from differences in memory access patterns.


* 2 CPUs

Number of | without | with | diff | diff |
processes | Marker [Sec] | Marker [Sec] | [Sec] | [%] |
--------------------------------------------------------------
50 | 4.811 | 4.872 | +0.061 | +1.27 |
100 | 9.854 | 10.309 | +0.454 | +4.61 |
150 | 15.602 | 15.040 | -0.562 | -3.6 |
200 | 20.489 | 20.380 | -0.109 | -0.53 |
250 | 25.798 | 25.652 | -0.146 | -0.56 |
300 | 31.260 | 30.797 | -0.463 | -1.48 |
350 | 36.121 | 35.770 | -0.351 | -0.97 |
400 | 42.288 | 42.102 | -0.186 | -0.44 |
450 | 47.778 | 47.253 | -0.526 | -1.1 |
500 | 51.953 | 52.278 | +0.325 | +0.63 |
550 | 58.401 | 57.700 | -0.701 | -1.2 |
600 | 63.334 | 63.222 | -0.112 | -0.18 |
650 | 68.816 | 68.511 | -0.306 | -0.44 |
700 | 74.667 | 74.088 | -0.579 | -0.78 |
750 | 78.612 | 79.582 | +0.970 | +1.23 |
800 | 85.431 | 85.263 | -0.168 | -0.2 |
--------------------------------------------------------------

* 4 CPUs

Number of | without | with | diff | diff |
processes | Marker [Sec] | Marker [Sec] | [Sec] | [%] |
--------------------------------------------------------------
50 | 2.586 | 2.584 | -0.003 | -0.1 |
100 | 5.254 | 5.283 | +0.030 | +0.56 |
150 | 8.012 | 8.074 | +0.061 | +0.76 |
200 | 11.172 | 11.000 | -0.172 | -1.54 |
250 | 13.917 | 14.036 | +0.119 | +0.86 |
300 | 16.905 | 16.543 | -0.362 | -2.14 |
350 | 19.901 | 20.036 | +0.135 | +0.68 |
400 | 22.908 | 23.094 | +0.186 | +0.81 |
450 | 26.273 | 26.101 | -0.172 | -0.66 |
500 | 29.554 | 29.092 | -0.461 | -1.56 |
550 | 32.377 | 32.274 | -0.103 | -0.32 |
600 | 35.855 | 35.322 | -0.533 | -1.49 |
650 | 39.192 | 38.388 | -0.804 | -2.05 |
700 | 41.744 | 41.719 | -0.025 | -0.06 |
750 | 45.016 | 44.496 | -0.520 | -1.16 |
800 | 48.212 | 47.603 | -0.609 | -1.26 |
--------------------------------------------------------------

* 8 CPUs

Number of | without | with | diff | diff |
processes | Marker [Sec] | Marker [Sec] | [Sec] | [%] |
--------------------------------------------------------------
50 | 2.094 | 2.072 | -0.022 | -1.07 |
100 | 4.162 | 4.273 | +0.111 | +2.66 |
150 | 6.485 | 6.540 | +0.055 | +0.84 |
200 | 8.556 | 8.478 | -0.078 | -0.91 |
250 | 10.458 | 10.258 | -0.200 | -1.91 |
300 | 12.425 | 12.750 | +0.325 | +2.62 |
350 | 14.807 | 14.839 | +0.032 | +0.22 |
400 | 16.801 | 16.959 | +0.158 | +0.94 |
450 | 19.478 | 19.009 | -0.470 | -2.41 |
500 | 21.296 | 21.504 | +0.208 | +0.98 |
550 | 23.842 | 23.979 | +0.137 | +0.57 |
600 | 26.309 | 26.111 | -0.198 | -0.75 |
650 | 28.705 | 28.446 | -0.259 | -0.9 |
700 | 31.233 | 31.394 | +0.161 | +0.52 |
750 | 34.064 | 33.720 | -0.344 | -1.01 |
800 | 36.320 | 36.114 | -0.206 | -0.57 |
--------------------------------------------------------------

Best regards,
Hideo


P.S. When I compiled the linux-2.6-sched-fixes tree on ia64, I
had to revert the following git commit since pteval_t is defined
on x86 only.

commit 8686f2b37e7394b51dd6593678cbfd85ecd28c65
Date: Tue May 6 15:42:40 2008 -0700

generic, x86, PAT: fix mprotect


Signed-off-by: Mathieu Desnoyers <[email protected]>
Acked-by: Masami Hiramatsu <[email protected]>
CC: 'Peter Zijlstra' <[email protected]>
CC: "Frank Ch. Eigler" <[email protected]>
CC: 'Ingo Molnar' <[email protected]>
CC: 'Hideo AOKI' <[email protected]>
CC: Takashi Nishiie <[email protected]>
CC: 'Steven Rostedt' <[email protected]>
CC: Alexander Viro <[email protected]>
CC: Eduard - Gabriel Munteanu <[email protected]>
---
include/asm-generic/vmlinux.lds.h | 6
include/linux/module.h | 17 +
include/linux/tracepoint.h | 123 +++++++++
init/Kconfig | 7
kernel/Makefile | 1
kernel/module.c | 66 +++++
kernel/tracepoint.c | 474 ++++++++++++++++++++++++++++++++++++++
7 files changed, 692 insertions(+), 2 deletions(-)

Index: linux-2.6-lttng/init/Kconfig
===================================================================
--- linux-2.6-lttng.orig/init/Kconfig 2008-07-09 10:55:46.000000000 -0400
+++ linux-2.6-lttng/init/Kconfig 2008-07-09 10:55:58.000000000 -0400
@@ -782,6 +782,13 @@ config PROFILING
Say Y here to enable the extended profiling support mechanisms used
by profilers such as OProfile.

+config TRACEPOINTS
+ bool "Activate tracepoints"
+ default y
+ help
+ Place an empty function call at each tracepoint site. Can be
+ dynamically changed for a probe function.
+
config MARKERS
bool "Activate markers"
help
Index: linux-2.6-lttng/kernel/Makefile
===================================================================
--- linux-2.6-lttng.orig/kernel/Makefile 2008-07-09 10:55:46.000000000 -0400
+++ linux-2.6-lttng/kernel/Makefile 2008-07-09 10:55:58.000000000 -0400
@@ -77,6 +77,7 @@ obj-$(CONFIG_SYSCTL) += utsname_sysctl.o
obj-$(CONFIG_TASK_DELAY_ACCT) += delayacct.o
obj-$(CONFIG_TASKSTATS) += taskstats.o tsacct.o
obj-$(CONFIG_MARKERS) += marker.o
+obj-$(CONFIG_TRACEPOINTS) += tracepoint.o
obj-$(CONFIG_LATENCYTOP) += latencytop.o
obj-$(CONFIG_FTRACE) += trace/
obj-$(CONFIG_TRACING) += trace/
Index: linux-2.6-lttng/include/linux/tracepoint.h
===================================================================
--- /dev/null 1970-01-01 00:00:00.000000000 +0000
+++ linux-2.6-lttng/include/linux/tracepoint.h 2008-07-09 10:55:58.000000000 -0400
@@ -0,0 +1,123 @@
+#ifndef _LINUX_TRACEPOINT_H
+#define _LINUX_TRACEPOINT_H
+
+/*
+ * Kernel Tracepoint API.
+ *
+ * See Documentation/tracepoint.txt.
+ *
+ * (C) Copyright 2008 Mathieu Desnoyers <[email protected]>
+ *
+ * Heavily inspired by the Linux Kernel Markers.
+ *
+ * This file is released under the GPLv2.
+ * See the file COPYING for more details.
+ */
+
+#include <linux/types.h>
+
+struct module;
+struct tracepoint;
+
+struct tracepoint {
+ const char *name; /* Tracepoint name */
+ int state; /* State. */
+ void **funcs;
+} __attribute__((aligned(8)));
+
+
+#define TPPROTO(args...) args
+#define TPARGS(args...) args
+
+#ifdef CONFIG_TRACEPOINTS
+
+#define __DO_TRACE(tp, proto, args) \
+ do { \
+ int i; \
+ void **funcs; \
+ preempt_disable(); \
+ funcs = (tp)->funcs; \
+ smp_read_barrier_depends(); \
+ if (funcs) { \
+ for (i = 0; funcs[i]; i++) { \
+ ((void(*)(proto))(funcs[i]))(args); \
+ } \
+ } \
+ preempt_enable(); \
+ } while (0)
+
+/*
+ * Make sure the alignment of the structure in the __tracepoints section will
+ * not add unwanted padding between the beginning of the section and the
+ * structure. Force alignment to the same alignment as the section start.
+ */
+#define DEFINE_TRACE(name, proto, args) \
+ static inline void trace_##name(proto) \
+ { \
+ static const char __tpstrtab_##name[] \
+ __attribute__((section("__tracepoints_strings"))) \
+ = #name ":" #proto; \
+ static struct tracepoint __tracepoint_##name \
+ __attribute__((section("__tracepoints"), aligned(8))) = \
+ { __tpstrtab_##name, 0, NULL }; \
+ if (unlikely(__tracepoint_##name.state)) \
+ __DO_TRACE(&__tracepoint_##name, \
+ TPPROTO(proto), TPARGS(args)); \
+ } \
+ static inline int register_trace_##name(void (*probe)(proto)) \
+ { \
+ return tracepoint_probe_register(#name ":" #proto, \
+ (void *)probe); \
+ } \
+ static inline void unregister_trace_##name(void (*probe)(proto))\
+ { \
+ tracepoint_probe_unregister(#name ":" #proto, \
+ (void *)probe); \
+ }
+
+extern void tracepoint_update_probe_range(struct tracepoint *begin,
+ struct tracepoint *end);
+
+#else /* !CONFIG_TRACEPOINTS */
+#define DEFINE_TRACE(name, proto, args) \
+ static inline void _do_trace_##name(struct tracepoint *tp, proto) \
+ { } \
+ static inline void trace_##name(proto) \
+ { } \
+ static inline int register_trace_##name(void (*probe)(proto)) \
+ { \
+ return -ENOSYS; \
+ } \
+ static inline void unregister_trace_##name(void (*probe)(proto))\
+ { }
+
+static inline void tracepoint_update_probe_range(struct tracepoint *begin,
+ struct tracepoint *end)
+{ }
+#endif /* CONFIG_TRACEPOINTS */
+
+/*
+ * Connect a probe to a tracepoint.
+ * Internal API, should not be used directly.
+ */
+extern int tracepoint_probe_register(const char *name, void *probe);
+
+/*
+ * Disconnect a probe from a tracepoint.
+ * Internal API, should not be used directly.
+ */
+extern int tracepoint_probe_unregister(const char *name, void *probe);
+
+struct tracepoint_iter {
+ struct module *module;
+ struct tracepoint *tracepoint;
+};
+
+extern void tracepoint_iter_start(struct tracepoint_iter *iter);
+extern void tracepoint_iter_next(struct tracepoint_iter *iter);
+extern void tracepoint_iter_stop(struct tracepoint_iter *iter);
+extern void tracepoint_iter_reset(struct tracepoint_iter *iter);
+extern int tracepoint_get_iter_range(struct tracepoint **tracepoint,
+ struct tracepoint *begin, struct tracepoint *end);
+
+#endif
Index: linux-2.6-lttng/include/asm-generic/vmlinux.lds.h
===================================================================
--- linux-2.6-lttng.orig/include/asm-generic/vmlinux.lds.h 2008-07-09 10:55:46.000000000 -0400
+++ linux-2.6-lttng/include/asm-generic/vmlinux.lds.h 2008-07-09 10:55:58.000000000 -0400
@@ -52,7 +52,10 @@
. = ALIGN(8); \
VMLINUX_SYMBOL(__start___markers) = .; \
*(__markers) \
- VMLINUX_SYMBOL(__stop___markers) = .;
+ VMLINUX_SYMBOL(__stop___markers) = .; \
+ VMLINUX_SYMBOL(__start___tracepoints) = .; \
+ *(__tracepoints) \
+ VMLINUX_SYMBOL(__stop___tracepoints) = .;

#define RO_DATA(align) \
. = ALIGN((align)); \
@@ -61,6 +64,7 @@
*(.rodata) *(.rodata.*) \
*(__vermagic) /* Kernel version magic */ \
*(__markers_strings) /* Markers: strings */ \
+ *(__tracepoints_strings)/* Tracepoints: strings */ \
} \
\
.rodata1 : AT(ADDR(.rodata1) - LOAD_OFFSET) { \
Index: linux-2.6-lttng/kernel/tracepoint.c
===================================================================
--- /dev/null 1970-01-01 00:00:00.000000000 +0000
+++ linux-2.6-lttng/kernel/tracepoint.c 2008-07-09 10:55:58.000000000 -0400
@@ -0,0 +1,474 @@
+/*
+ * Copyright (C) 2008 Mathieu Desnoyers
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
+ */
+#include <linux/module.h>
+#include <linux/mutex.h>
+#include <linux/types.h>
+#include <linux/jhash.h>
+#include <linux/list.h>
+#include <linux/rcupdate.h>
+#include <linux/tracepoint.h>
+#include <linux/err.h>
+#include <linux/slab.h>
+
+extern struct tracepoint __start___tracepoints[];
+extern struct tracepoint __stop___tracepoints[];
+
+/* Set to 1 to enable tracepoint debug output */
+static const int tracepoint_debug;
+
+/*
+ * tracepoints_mutex nests inside module_mutex. Tracepoints mutex protects the
+ * builtin and module tracepoints and the hash table.
+ */
+static DEFINE_MUTEX(tracepoints_mutex);
+
+/*
+ * Tracepoint hash table, containing the active tracepoints.
+ * Protected by tracepoints_mutex.
+ */
+#define TRACEPOINT_HASH_BITS 6
+#define TRACEPOINT_TABLE_SIZE (1 << TRACEPOINT_HASH_BITS)
+
+/*
+ * Note about RCU:
+ * It is used to delay the free of the multiple probes array until a quiescent
+ * state is reached.
+ * Tracepoint entries modifications are protected by the tracepoints_mutex.
+ */
+struct tracepoint_entry {
+ struct hlist_node hlist;
+ void **funcs;
+ int refcount; /* Number of times armed. 0 if disarmed. */
+ struct rcu_head rcu;
+ void *oldptr;
+ unsigned char rcu_pending:1;
+ char name[0];
+};
+
+static struct hlist_head tracepoint_table[TRACEPOINT_TABLE_SIZE];
+
+static void free_old_closure(struct rcu_head *head)
+{
+ struct tracepoint_entry *entry = container_of(head,
+ struct tracepoint_entry, rcu);
+ kfree(entry->oldptr);
+ /* Make sure we free the data before setting the pending flag to 0 */
+ smp_wmb();
+ entry->rcu_pending = 0;
+}
+
+static void tracepoint_entry_free_old(struct tracepoint_entry *entry, void *old)
+{
+ if (!old)
+ return;
+ entry->oldptr = old;
+ entry->rcu_pending = 1;
+ /* write rcu_pending before calling the RCU callback */
+ smp_wmb();
+#ifdef CONFIG_PREEMPT_RCU
+ synchronize_sched(); /* Until we have the call_rcu_sched() */
+#endif
+ call_rcu(&entry->rcu, free_old_closure);
+}
+
+static void debug_print_probes(struct tracepoint_entry *entry)
+{
+ int i;
+
+ if (!tracepoint_debug)
+ return;
+
+ for (i = 0; entry->funcs[i]; i++)
+ printk(KERN_DEBUG "Probe %d : %p\n", i, entry->funcs[i]);
+}
+
+static void *
+tracepoint_entry_add_probe(struct tracepoint_entry *entry, void *probe)
+{
+ int nr_probes = 0;
+ void **old, **new;
+
+ WARN_ON(!probe);
+
+ debug_print_probes(entry);
+ old = entry->funcs;
+ if (old) {
+ /* (N -> N+1), (N != 0, 1) probes */
+ for (nr_probes = 0; old[nr_probes]; nr_probes++)
+ if (old[nr_probes] == probe)
+ return ERR_PTR(-EBUSY);
+ }
+ /* + 2 : one for new probe, one for NULL func */
+ new = kzalloc((nr_probes + 2) * sizeof(void *), GFP_KERNEL);
+ if (new == NULL)
+ return ERR_PTR(-ENOMEM);
+ if (old)
+ memcpy(new, old, nr_probes * sizeof(void *));
+ new[nr_probes] = probe;
+ entry->refcount = nr_probes + 1;
+ entry->funcs = new;
+ debug_print_probes(entry);
+ return old;
+}
+
+static void *
+tracepoint_entry_remove_probe(struct tracepoint_entry *entry, void *probe)
+{
+ int nr_probes = 0, nr_del = 0, i;
+ void **old, **new;
+
+ old = entry->funcs;
+
+ debug_print_probes(entry);
+ /* (N -> M), (N > 1, M >= 0) probes */
+ for (nr_probes = 0; old[nr_probes]; nr_probes++) {
+ if ((!probe || old[nr_probes] == probe))
+ nr_del++;
+ }
+
+ if (nr_probes - nr_del == 0) {
+ /* N -> 0, (N > 1) */
+ entry->funcs = NULL;
+ entry->refcount = 0;
+ debug_print_probes(entry);
+ return old;
+ } else {
+ int j = 0;
+ /* N -> M, (N > 1, M > 0) */
+ /* + 1 for NULL */
+ new = kzalloc((nr_probes - nr_del + 1)
+ * sizeof(void *), GFP_KERNEL);
+ if (new == NULL)
+ return ERR_PTR(-ENOMEM);
+ for (i = 0; old[i]; i++)
+ if ((probe && old[i] != probe))
+ new[j++] = old[i];
+ entry->refcount = nr_probes - nr_del;
+ entry->funcs = new;
+ }
+ debug_print_probes(entry);
+ return old;
+}
+
+/*
+ * Get tracepoint if the tracepoint is present in the tracepoint hash table.
+ * Must be called with tracepoints_mutex held.
+ * Returns NULL if not present.
+ */
+static struct tracepoint_entry *get_tracepoint(const char *name)
+{
+ struct hlist_head *head;
+ struct hlist_node *node;
+ struct tracepoint_entry *e;
+ u32 hash = jhash(name, strlen(name), 0);
+
+ head = &tracepoint_table[hash & ((1 << TRACEPOINT_HASH_BITS)-1)];
+ hlist_for_each_entry(e, node, head, hlist) {
+ if (!strcmp(name, e->name))
+ return e;
+ }
+ return NULL;
+}
+
+/*
+ * Add the tracepoint to the tracepoint hash table. Must be called with
+ * tracepoints_mutex held.
+ */
+static struct tracepoint_entry *add_tracepoint(const char *name)
+{
+ struct hlist_head *head;
+ struct hlist_node *node;
+ struct tracepoint_entry *e;
+ size_t name_len = strlen(name) + 1;
+ u32 hash = jhash(name, name_len-1, 0);
+
+ head = &tracepoint_table[hash & ((1 << TRACEPOINT_HASH_BITS)-1)];
+ hlist_for_each_entry(e, node, head, hlist) {
+ if (!strcmp(name, e->name)) {
+ printk(KERN_NOTICE
+ "tracepoint %s busy\n", name);
+ return ERR_PTR(-EBUSY); /* Already there */
+ }
+ }
+ /*
+ * Using kmalloc here to allocate a variable length element. Could
+ * cause some memory fragmentation if overused.
+ */
+ e = kmalloc(sizeof(struct tracepoint_entry) + name_len, GFP_KERNEL);
+ if (!e)
+ return ERR_PTR(-ENOMEM);
+ memcpy(&e->name[0], name, name_len);
+ e->funcs = NULL;
+ e->refcount = 0;
+ e->rcu_pending = 0;
+ hlist_add_head(&e->hlist, head);
+ return e;
+}
+
+/*
+ * Remove the tracepoint from the tracepoint hash table. Must be called with
+ * mutex_lock held.
+ */
+static int remove_tracepoint(const char *name)
+{
+ struct hlist_head *head;
+ struct hlist_node *node;
+ struct tracepoint_entry *e;
+ int found = 0;
+ size_t len = strlen(name) + 1;
+ u32 hash = jhash(name, len-1, 0);
+
+ head = &tracepoint_table[hash & ((1 << TRACEPOINT_HASH_BITS)-1)];
+ hlist_for_each_entry(e, node, head, hlist) {
+ if (!strcmp(name, e->name)) {
+ found = 1;
+ break;
+ }
+ }
+ if (!found)
+ return -ENOENT;
+ if (e->refcount)
+ return -EBUSY;
+ hlist_del(&e->hlist);
+ /* Make sure the call_rcu has been executed */
+ if (e->rcu_pending)
+ rcu_barrier();
+ kfree(e);
+ return 0;
+}
+
+/*
+ * Sets the probe callback corresponding to one tracepoint.
+ */
+static void set_tracepoint(struct tracepoint_entry **entry,
+ struct tracepoint *elem, int active)
+{
+ WARN_ON(strcmp((*entry)->name, elem->name) != 0);
+
+ smp_wmb();
+ /*
+ * We also make sure that the new probe callbacks array is consistent
+ * before setting a pointer to it.
+ */
+ rcu_assign_pointer(elem->funcs, (*entry)->funcs);
+ elem->state = active;
+}
+
+/*
+ * Disable a tracepoint and its probe callback.
+ * Note: only waiting for an RCU grace period after setting elem->call to the
+ * empty function ensures that the original callback is not used anymore. This
+ * is ensured by the preempt_disable around the call site.
+ */
+static void disable_tracepoint(struct tracepoint *elem)
+{
+ elem->state = 0;
+}
+
+/**
+ * tracepoint_update_probe_range - Update a probe range
+ * @begin: beginning of the range
+ * @end: end of the range
+ *
+ * Updates the probe callback corresponding to a range of tracepoints.
+ */
+void tracepoint_update_probe_range(struct tracepoint *begin,
+ struct tracepoint *end)
+{
+ struct tracepoint *iter;
+ struct tracepoint_entry *mark_entry;
+
+ mutex_lock(&tracepoints_mutex);
+ for (iter = begin; iter < end; iter++) {
+ mark_entry = get_tracepoint(iter->name);
+ if (mark_entry) {
+ set_tracepoint(&mark_entry, iter,
+ !!mark_entry->refcount);
+ } else {
+ disable_tracepoint(iter);
+ }
+ }
+ mutex_unlock(&tracepoints_mutex);
+}
+
+/*
+ * Update probes, removing the faulty probes.
+ */
+static void tracepoint_update_probes(void)
+{
+ /* Core kernel tracepoints */
+ tracepoint_update_probe_range(__start___tracepoints,
+ __stop___tracepoints);
+ /* tracepoints in modules. */
+ module_update_tracepoints();
+}
+
+/**
+ * tracepoint_probe_register - Connect a probe to a tracepoint
+ * @name: tracepoint name
+ * @probe: probe handler
+ *
+ * Returns 0 if ok, error value on error.
+ * The probe address must at least be aligned on the architecture pointer size.
+ */
+int tracepoint_probe_register(const char *name, void *probe)
+{
+ struct tracepoint_entry *entry;
+ int ret = 0;
+ void *old;
+
+ mutex_lock(&tracepoints_mutex);
+ entry = get_tracepoint(name);
+ if (!entry) {
+ entry = add_tracepoint(name);
+ if (IS_ERR(entry)) {
+ ret = PTR_ERR(entry);
+ goto end;
+ }
+ }
+ /*
+ * If we detect that a call_rcu is pending for this tracepoint,
+ * make sure it's executed now.
+ */
+ if (entry->rcu_pending)
+ rcu_barrier();
+ old = tracepoint_entry_add_probe(entry, probe);
+ if (IS_ERR(old)) {
+ ret = PTR_ERR(old);
+ goto end;
+ }
+ mutex_unlock(&tracepoints_mutex);
+ tracepoint_update_probes(); /* may update entry */
+ mutex_lock(&tracepoints_mutex);
+ entry = get_tracepoint(name);
+ WARN_ON(!entry);
+ tracepoint_entry_free_old(entry, old);
+end:
+ mutex_unlock(&tracepoints_mutex);
+ return ret;
+}
+EXPORT_SYMBOL_GPL(tracepoint_probe_register);
+
+/**
+ * tracepoint_probe_unregister - Disconnect a probe from a tracepoint
+ * @name: tracepoint name
+ * @probe: probe function pointer
+ *
+ * We do not need to call a synchronize_sched to make sure the probes have
+ * finished running before doing a module unload, because the module unload
+ * itself uses stop_machine(), which ensures that every preempt-disabled
+ * section has finished.
+ */
+int tracepoint_probe_unregister(const char *name, void *probe)
+{
+ struct tracepoint_entry *entry;
+ void *old;
+ int ret = -ENOENT;
+
+ mutex_lock(&tracepoints_mutex);
+ entry = get_tracepoint(name);
+ if (!entry)
+ goto end;
+ if (entry->rcu_pending)
+ rcu_barrier();
+ old = tracepoint_entry_remove_probe(entry, probe);
+ mutex_unlock(&tracepoints_mutex);
+ tracepoint_update_probes(); /* may update entry */
+ mutex_lock(&tracepoints_mutex);
+ entry = get_tracepoint(name);
+ if (!entry)
+ goto end;
+ tracepoint_entry_free_old(entry, old);
+ remove_tracepoint(name); /* Ignore busy error message */
+ ret = 0;
+end:
+ mutex_unlock(&tracepoints_mutex);
+ return ret;
+}
+EXPORT_SYMBOL_GPL(tracepoint_probe_unregister);
+
+/**
+ * tracepoint_get_iter_range - Get the next tracepoint in a range.
+ * @tracepoint: current tracepoint (in), next tracepoint (out)
+ * @begin: beginning of the range
+ * @end: end of the range
+ *
+ * Returns whether a next tracepoint has been found (1) or not (0).
+ * Will return the first tracepoint in the range if the input tracepoint is
+ * NULL.
+ */
+int tracepoint_get_iter_range(struct tracepoint **tracepoint,
+ struct tracepoint *begin, struct tracepoint *end)
+{
+ if (!*tracepoint && begin != end) {
+ *tracepoint = begin;
+ return 1;
+ }
+ if (*tracepoint >= begin && *tracepoint < end)
+ return 1;
+ return 0;
+}
+EXPORT_SYMBOL_GPL(tracepoint_get_iter_range);
+
+static void tracepoint_get_iter(struct tracepoint_iter *iter)
+{
+ int found = 0;
+
+ /* Core kernel tracepoints */
+ if (!iter->module) {
+ found = tracepoint_get_iter_range(&iter->tracepoint,
+ __start___tracepoints, __stop___tracepoints);
+ if (found)
+ goto end;
+ }
+ /* tracepoints in modules. */
+ found = module_get_iter_tracepoints(iter);
+end:
+ if (!found)
+ tracepoint_iter_reset(iter);
+}
+
+void tracepoint_iter_start(struct tracepoint_iter *iter)
+{
+ tracepoint_get_iter(iter);
+}
+EXPORT_SYMBOL_GPL(tracepoint_iter_start);
+
+void tracepoint_iter_next(struct tracepoint_iter *iter)
+{
+ iter->tracepoint++;
+ /*
+ * iter->tracepoint may be invalid because we blindly incremented it.
+ * Make sure it is valid by marshalling on the tracepoints, getting the
+ * tracepoints from following modules if necessary.
+ */
+ tracepoint_get_iter(iter);
+}
+EXPORT_SYMBOL_GPL(tracepoint_iter_next);
+
+void tracepoint_iter_stop(struct tracepoint_iter *iter)
+{
+}
+EXPORT_SYMBOL_GPL(tracepoint_iter_stop);
+
+void tracepoint_iter_reset(struct tracepoint_iter *iter)
+{
+ iter->module = NULL;
+ iter->tracepoint = NULL;
+}
+EXPORT_SYMBOL_GPL(tracepoint_iter_reset);
Index: linux-2.6-lttng/kernel/module.c
===================================================================
--- linux-2.6-lttng.orig/kernel/module.c 2008-07-09 10:55:46.000000000 -0400
+++ linux-2.6-lttng/kernel/module.c 2008-07-09 10:55:58.000000000 -0400
@@ -47,6 +47,7 @@
#include <asm/sections.h>
#include <linux/license.h>
#include <asm/sections.h>
+#include <linux/tracepoint.h>

#if 0
#define DEBUGP printk
@@ -1824,6 +1825,8 @@ static struct module *load_module(void _
#endif
unsigned int markersindex;
unsigned int markersstringsindex;
+ unsigned int tracepointsindex;
+ unsigned int tracepointsstringsindex;
struct module *mod;
long err = 0;
void *percpu = NULL, *ptr = NULL; /* Stops spurious gcc warning */
@@ -2110,6 +2113,9 @@ static struct module *load_module(void _
markersindex = find_sec(hdr, sechdrs, secstrings, "__markers");
markersstringsindex = find_sec(hdr, sechdrs, secstrings,
"__markers_strings");
+ tracepointsindex = find_sec(hdr, sechdrs, secstrings, "__tracepoints");
+ tracepointsstringsindex = find_sec(hdr, sechdrs, secstrings,
+ "__tracepoints_strings");

/* Now do relocations. */
for (i = 1; i < hdr->e_shnum; i++) {
@@ -2137,6 +2143,12 @@ static struct module *load_module(void _
mod->num_markers =
sechdrs[markersindex].sh_size / sizeof(*mod->markers);
#endif
+#ifdef CONFIG_TRACEPOINTS
+ mod->tracepoints = (void *)sechdrs[tracepointsindex].sh_addr;
+ mod->num_tracepoints =
+ sechdrs[tracepointsindex].sh_size / sizeof(*mod->tracepoints);
+#endif
+

/* Find duplicate symbols */
err = verify_export_symbols(mod);
@@ -2155,11 +2167,16 @@ static struct module *load_module(void _

add_kallsyms(mod, sechdrs, symindex, strindex, secstrings);

+ if (!mod->taints) {
#ifdef CONFIG_MARKERS
- if (!mod->taints)
marker_update_probe_range(mod->markers,
mod->markers + mod->num_markers);
#endif
+#ifdef CONFIG_TRACEPOINTS
+ tracepoint_update_probe_range(mod->tracepoints,
+ mod->tracepoints + mod->num_tracepoints);
+#endif
+ }
err = module_finalize(hdr, sechdrs, mod);
if (err < 0)
goto cleanup;
@@ -2710,3 +2727,50 @@ void module_update_markers(void)
mutex_unlock(&module_mutex);
}
#endif
+
+#ifdef CONFIG_TRACEPOINTS
+void module_update_tracepoints(void)
+{
+ struct module *mod;
+
+ mutex_lock(&module_mutex);
+ list_for_each_entry(mod, &modules, list)
+ if (!mod->taints)
+ tracepoint_update_probe_range(mod->tracepoints,
+ mod->tracepoints + mod->num_tracepoints);
+ mutex_unlock(&module_mutex);
+}
+
+/*
+ * Returns 0 if current not found.
+ * Returns 1 if current found.
+ */
+int module_get_iter_tracepoints(struct tracepoint_iter *iter)
+{
+ struct module *iter_mod;
+ int found = 0;
+
+ mutex_lock(&module_mutex);
+ list_for_each_entry(iter_mod, &modules, list) {
+ if (!iter_mod->taints) {
+ /*
+ * Sorted module list
+ */
+ if (iter_mod < iter->module)
+ continue;
+ else if (iter_mod > iter->module)
+ iter->tracepoint = NULL;
+ found = tracepoint_get_iter_range(&iter->tracepoint,
+ iter_mod->tracepoints,
+ iter_mod->tracepoints
+ + iter_mod->num_tracepoints);
+ if (found) {
+ iter->module = iter_mod;
+ break;
+ }
+ }
+ }
+ mutex_unlock(&module_mutex);
+ return found;
+}
+#endif
Index: linux-2.6-lttng/include/linux/module.h
===================================================================
--- linux-2.6-lttng.orig/include/linux/module.h 2008-07-09 10:55:46.000000000 -0400
+++ linux-2.6-lttng/include/linux/module.h 2008-07-09 10:57:22.000000000 -0400
@@ -16,6 +16,7 @@
#include <linux/kobject.h>
#include <linux/moduleparam.h>
#include <linux/marker.h>
+#include <linux/tracepoint.h>
#include <asm/local.h>

#include <asm/module.h>
@@ -331,6 +332,10 @@ struct module
struct marker *markers;
unsigned int num_markers;
#endif
+#ifdef CONFIG_TRACEPOINTS
+ struct tracepoint *tracepoints;
+ unsigned int num_tracepoints;
+#endif

#ifdef CONFIG_MODULE_UNLOAD
/* What modules depend on me? */
@@ -454,6 +459,9 @@ extern void print_modules(void);

extern void module_update_markers(void);

+extern void module_update_tracepoints(void);
+extern int module_get_iter_tracepoints(struct tracepoint_iter *iter);
+
#else /* !CONFIG_MODULES... */
#define EXPORT_SYMBOL(sym)
#define EXPORT_SYMBOL_GPL(sym)
@@ -558,6 +566,15 @@ static inline void module_update_markers
{
}

+static inline void module_update_tracepoints(void)
+{
+}
+
+static inline int module_get_iter_tracepoints(struct tracepoint_iter *iter)
+{
+ return 0;
+}
+
#endif /* CONFIG_MODULES */

struct device_driver;

--
Mathieu Desnoyers
Computer Engineering Ph.D. Student, Ecole Polytechnique de Montreal
OpenPGP key fingerprint: 8CD5 52C3 8E3C 4140 715F BA06 3F25 A8FE 3BAE 9A68


2008-07-15 07:50:43

by Peter Zijlstra

Subject: Re: [patch 01/15] Kernel Tracepoints

On Wed, 2008-07-09 at 10:59 -0400, Mathieu Desnoyers wrote:
> plain text document attachment (tracepoints.patch)
> Implementation of kernel tracepoints. Inspired by the Linux Kernel Markers.
> Allows complete type verification. No format string required. See the
> tracepoint Documentation and Samples patches for usage examples.

I think the patch description (aka changelog, not to be confused with
the below) could use a lot more attention. There are a lot of things
going on in this code, none of which are mentioned.

I often read changelogs when I try to understand a piece of code, this
one is utterly unfulfilling.

Aside from that, I think the general picture looks good.

I sprinkled some comments in the code below...

> Changelog:
> - Use #name ":" #proto as the string identifying the tracepoint in the
> tracepoint table. This makes sure no type mismatch happens due to
> connection of a probe with the wrong type to a tracepoint declared with
> the same name in a different header.
> - Add tracepoint_entry_free_old.
>
> Masami Hiramatsu <[email protected]> :
> Tested on x86-64.
>
> Performance impact of a tracepoint: same as markers, except that it adds about
> 70 bytes of instructions in an unlikely branch of each instrumented function
> (the for loop, the stack setup and the function call). It currently adds a
> memory read, a test and a conditional branch at the instrumentation site (in
> the hot path). Immediate values will eventually change this into a load
> immediate, a test and a branch, which removes the memory read and thus makes
> the i-cache impact smaller (replacing the memory read with a load immediate
> saves 3-4 bytes per site on x86_32, depending on mov prefixes, or 7-8 bytes
> on x86_64; it also saves the d-cache hit).
>
> About the performance impact of tracepoints (which is comparable to markers),
> even without immediate values optimizations, tests done by Hideo Aoki on ia64
> show no regression. His test case used hackbench on a kernel where scheduler
> instrumentation (about 5 events in core scheduler code) was added.

> Signed-off-by: Mathieu Desnoyers <[email protected]>
> Acked-by: Masami Hiramatsu <[email protected]>
> CC: 'Peter Zijlstra' <[email protected]>
> CC: "Frank Ch. Eigler" <[email protected]>
> CC: 'Ingo Molnar' <[email protected]>
> CC: 'Hideo AOKI' <[email protected]>
> CC: Takashi Nishiie <[email protected]>
> CC: 'Steven Rostedt' <[email protected]>
> CC: Alexander Viro <[email protected]>
> CC: Eduard - Gabriel Munteanu <[email protected]>
> ---
> include/asm-generic/vmlinux.lds.h | 6
> include/linux/module.h | 17 +
> include/linux/tracepoint.h | 123 +++++++++
> init/Kconfig | 7
> kernel/Makefile | 1
> kernel/module.c | 66 +++++
> kernel/tracepoint.c | 474 ++++++++++++++++++++++++++++++++++++++
> 7 files changed, 692 insertions(+), 2 deletions(-)
>
> Index: linux-2.6-lttng/init/Kconfig
> ===================================================================
> --- linux-2.6-lttng.orig/init/Kconfig 2008-07-09 10:55:46.000000000 -0400
> +++ linux-2.6-lttng/init/Kconfig 2008-07-09 10:55:58.000000000 -0400
> @@ -782,6 +782,13 @@ config PROFILING
> Say Y here to enable the extended profiling support mechanisms used
> by profilers such as OProfile.
>
> +config TRACEPOINTS
> + bool "Activate tracepoints"
> + default y
> + help
> + Place an empty function call at each tracepoint site. Can be
> + dynamically changed for a probe function.
> +
> config MARKERS
> bool "Activate markers"
> help
> Index: linux-2.6-lttng/kernel/Makefile
> ===================================================================
> --- linux-2.6-lttng.orig/kernel/Makefile 2008-07-09 10:55:46.000000000 -0400
> +++ linux-2.6-lttng/kernel/Makefile 2008-07-09 10:55:58.000000000 -0400
> @@ -77,6 +77,7 @@ obj-$(CONFIG_SYSCTL) += utsname_sysctl.o
> obj-$(CONFIG_TASK_DELAY_ACCT) += delayacct.o
> obj-$(CONFIG_TASKSTATS) += taskstats.o tsacct.o
> obj-$(CONFIG_MARKERS) += marker.o
> +obj-$(CONFIG_TRACEPOINTS) += tracepoint.o
> obj-$(CONFIG_LATENCYTOP) += latencytop.o
> obj-$(CONFIG_FTRACE) += trace/
> obj-$(CONFIG_TRACING) += trace/
> Index: linux-2.6-lttng/include/linux/tracepoint.h
> ===================================================================
> --- /dev/null 1970-01-01 00:00:00.000000000 +0000
> +++ linux-2.6-lttng/include/linux/tracepoint.h 2008-07-09 10:55:58.000000000 -0400
> @@ -0,0 +1,123 @@
> +#ifndef _LINUX_TRACEPOINT_H
> +#define _LINUX_TRACEPOINT_H
> +
> +/*
> + * Kernel Tracepoint API.
> + *
> + * See Documentation/tracepoint.txt.
> + *
> + * (C) Copyright 2008 Mathieu Desnoyers <[email protected]>
> + *
> + * Heavily inspired by the Linux Kernel Markers.
> + *
> + * This file is released under the GPLv2.
> + * See the file COPYING for more details.
> + */
> +
> +#include <linux/types.h>
> +
> +struct module;
> +struct tracepoint;
> +
> +struct tracepoint {
> + const char *name; /* Tracepoint name */
> + int state; /* State. */
> + void **funcs;
> +} __attribute__((aligned(8)));
> +
> +
> +#define TPPROTO(args...) args
> +#define TPARGS(args...) args
> +
> +#ifdef CONFIG_TRACEPOINTS
> +
> +#define __DO_TRACE(tp, proto, args) \
> + do { \
> + int i; \
> + void **funcs; \
> + preempt_disable(); \
> + funcs = (tp)->funcs; \
> + smp_read_barrier_depends(); \
> + if (funcs) { \
> + for (i = 0; funcs[i]; i++) { \

can't you get rid of 'i' and write:

void **func;

preempt_disable();
func = (tp)->funcs;
smp_read_barrier_depends();
if (func)
	for (; *func; func++)
		((void (*)(proto))(*func))(args);
preempt_enable();

Also, why is the preempt_disable needed?

> + ((void(*)(proto))(funcs[i]))(args); \
> + } \
> + } \
> + preempt_enable(); \
> + } while (0)
> +
> +/*
> + * Make sure the alignment of the structure in the __tracepoints section will
> + * not add unwanted padding between the beginning of the section and the
> + * structure. Force alignment to the same alignment as the section start.
> + */
> +#define DEFINE_TRACE(name, proto, args) \
> + static inline void trace_##name(proto) \
> + { \
> + static const char __tpstrtab_##name[] \
> + __attribute__((section("__tracepoints_strings"))) \
> + = #name ":" #proto; \
> + static struct tracepoint __tracepoint_##name \
> + __attribute__((section("__tracepoints"), aligned(8))) = \
> + { __tpstrtab_##name, 0, NULL }; \
> + if (unlikely(__tracepoint_##name.state)) \
> + __DO_TRACE(&__tracepoint_##name, \
> + TPPROTO(proto), TPARGS(args)); \
> + } \
> + static inline int register_trace_##name(void (*probe)(proto)) \
> + { \
> + return tracepoint_probe_register(#name ":" #proto, \
> + (void *)probe); \
> + } \
> + static inline void unregister_trace_##name(void (*probe)(proto))\
> + { \
> + tracepoint_probe_unregister(#name ":" #proto, \
> + (void *)probe); \
> + }
> +
> +extern void tracepoint_update_probe_range(struct tracepoint *begin,
> + struct tracepoint *end);
> +
> +#else /* !CONFIG_TRACEPOINTS */
> +#define DEFINE_TRACE(name, proto, args) \
> + static inline void _do_trace_##name(struct tracepoint *tp, proto) \
> + { } \
> + static inline void trace_##name(proto) \
> + { } \
> + static inline int register_trace_##name(void (*probe)(proto)) \
> + { \
> + return -ENOSYS; \
> + } \
> + static inline void unregister_trace_##name(void (*probe)(proto))\
> + { }
> +
> +static inline void tracepoint_update_probe_range(struct tracepoint *begin,
> + struct tracepoint *end)
> +{ }
> +#endif /* CONFIG_TRACEPOINTS */
> +
> +/*
> + * Connect a probe to a tracepoint.
> + * Internal API, should not be used directly.
> + */
> +extern int tracepoint_probe_register(const char *name, void *probe);
> +
> +/*
> + * Disconnect a probe from a tracepoint.
> + * Internal API, should not be used directly.
> + */
> +extern int tracepoint_probe_unregister(const char *name, void *probe);
> +
> +struct tracepoint_iter {
> + struct module *module;
> + struct tracepoint *tracepoint;
> +};
> +
> +extern void tracepoint_iter_start(struct tracepoint_iter *iter);
> +extern void tracepoint_iter_next(struct tracepoint_iter *iter);
> +extern void tracepoint_iter_stop(struct tracepoint_iter *iter);
> +extern void tracepoint_iter_reset(struct tracepoint_iter *iter);
> +extern int tracepoint_get_iter_range(struct tracepoint **tracepoint,
> + struct tracepoint *begin, struct tracepoint *end);
> +
> +#endif
> Index: linux-2.6-lttng/include/asm-generic/vmlinux.lds.h
> ===================================================================
> --- linux-2.6-lttng.orig/include/asm-generic/vmlinux.lds.h 2008-07-09 10:55:46.000000000 -0400
> +++ linux-2.6-lttng/include/asm-generic/vmlinux.lds.h 2008-07-09 10:55:58.000000000 -0400
> @@ -52,7 +52,10 @@
> . = ALIGN(8); \
> VMLINUX_SYMBOL(__start___markers) = .; \
> *(__markers) \
> - VMLINUX_SYMBOL(__stop___markers) = .;
> + VMLINUX_SYMBOL(__stop___markers) = .; \
> + VMLINUX_SYMBOL(__start___tracepoints) = .; \
> + *(__tracepoints) \
> + VMLINUX_SYMBOL(__stop___tracepoints) = .;
>
> #define RO_DATA(align) \
> . = ALIGN((align)); \
> @@ -61,6 +64,7 @@
> *(.rodata) *(.rodata.*) \
> *(__vermagic) /* Kernel version magic */ \
> *(__markers_strings) /* Markers: strings */ \
> + *(__tracepoints_strings)/* Tracepoints: strings */ \
> } \
> \
> .rodata1 : AT(ADDR(.rodata1) - LOAD_OFFSET) { \
> Index: linux-2.6-lttng/kernel/tracepoint.c
> ===================================================================
> --- /dev/null 1970-01-01 00:00:00.000000000 +0000
> +++ linux-2.6-lttng/kernel/tracepoint.c 2008-07-09 10:55:58.000000000 -0400
> @@ -0,0 +1,474 @@
> +/*
> + * Copyright (C) 2008 Mathieu Desnoyers
> + *
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License as published by
> + * the Free Software Foundation; either version 2 of the License, or
> + * (at your option) any later version.
> + *
> + * This program is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
> + * GNU General Public License for more details.
> + *
> + * You should have received a copy of the GNU General Public License
> + * along with this program; if not, write to the Free Software
> + * Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
> + */
> +#include <linux/module.h>
> +#include <linux/mutex.h>
> +#include <linux/types.h>
> +#include <linux/jhash.h>
> +#include <linux/list.h>
> +#include <linux/rcupdate.h>
> +#include <linux/tracepoint.h>
> +#include <linux/err.h>
> +#include <linux/slab.h>
> +
> +extern struct tracepoint __start___tracepoints[];
> +extern struct tracepoint __stop___tracepoints[];
> +
> +/* Set to 1 to enable tracepoint debug output */
> +static const int tracepoint_debug;
> +
> +/*
> + * tracepoints_mutex nests inside module_mutex. Tracepoints mutex protects the
> + * builtin and module tracepoints and the hash table.
> + */
> +static DEFINE_MUTEX(tracepoints_mutex);
> +
> +/*
> + * Tracepoint hash table, containing the active tracepoints.
> + * Protected by tracepoints_mutex.
> + */
> +#define TRACEPOINT_HASH_BITS 6
> +#define TRACEPOINT_TABLE_SIZE (1 << TRACEPOINT_HASH_BITS)
> +
> +/*
> + * Note about RCU:
> + * It is used to delay the free of the multiple probes array until a quiescent
> + * state is reached.
> + * Tracepoint entries modifications are protected by the tracepoints_mutex.
> + */
> +struct tracepoint_entry {
> + struct hlist_node hlist;
> + void **funcs;
> + int refcount; /* Number of times armed. 0 if disarmed. */
> + struct rcu_head rcu;
> + void *oldptr;
> + unsigned char rcu_pending:1;
> + char name[0];
> +};
> +
> +static struct hlist_head tracepoint_table[TRACEPOINT_TABLE_SIZE];
> +
> +static void free_old_closure(struct rcu_head *head)
> +{
> + struct tracepoint_entry *entry = container_of(head,
> + struct tracepoint_entry, rcu);
> + kfree(entry->oldptr);
> + /* Make sure we free the data before setting the pending flag to 0 */
> + smp_wmb();
> + entry->rcu_pending = 0;
> +}
> +
> +static void tracepoint_entry_free_old(struct tracepoint_entry *entry, void *old)
> +{
> + if (!old)
> + return;
> + entry->oldptr = old;
> + entry->rcu_pending = 1;
> + /* write rcu_pending before calling the RCU callback */
> + smp_wmb();
> +#ifdef CONFIG_PREEMPT_RCU
> + synchronize_sched(); /* Until we have the call_rcu_sched() */
> +#endif

Does this have something to do with the preempt_disable above?

> + call_rcu(&entry->rcu, free_old_closure);
> +}
> +
> +static void debug_print_probes(struct tracepoint_entry *entry)
> +{
> + int i;
> +
> + if (!tracepoint_debug)
> + return;
> +
> + for (i = 0; entry->funcs[i]; i++)
> + printk(KERN_DEBUG "Probe %d : %p\n", i, entry->funcs[i]);
> +}
> +
> +static void *
> +tracepoint_entry_add_probe(struct tracepoint_entry *entry, void *probe)
> +{
> + int nr_probes = 0;
> + void **old, **new;
> +
> + WARN_ON(!probe);
> +
> + debug_print_probes(entry);
> + old = entry->funcs;
> + if (old) {
> + /* (N -> N+1), (N != 0, 1) probes */
> + for (nr_probes = 0; old[nr_probes]; nr_probes++)
> + if (old[nr_probes] == probe)
> + return ERR_PTR(-EBUSY);

-EEXIST ?

> + }
> + /* + 2 : one for new probe, one for NULL func */
> + new = kzalloc((nr_probes + 2) * sizeof(void *), GFP_KERNEL);
> + if (new == NULL)
> + return ERR_PTR(-ENOMEM);
> + if (old)
> + memcpy(new, old, nr_probes * sizeof(void *));
> + new[nr_probes] = probe;
> + entry->refcount = nr_probes + 1;
> + entry->funcs = new;
> + debug_print_probes(entry);
> + return old;
> +}
> +
> +static void *
> +tracepoint_entry_remove_probe(struct tracepoint_entry *entry, void *probe)
> +{
> + int nr_probes = 0, nr_del = 0, i;
> + void **old, **new;
> +
> + old = entry->funcs;
> +
> + debug_print_probes(entry);
> + /* (N -> M), (N > 1, M >= 0) probes */
> + for (nr_probes = 0; old[nr_probes]; nr_probes++) {
> + if ((!probe || old[nr_probes] == probe))
> + nr_del++;
> + }
> +
> + if (nr_probes - nr_del == 0) {
> + /* N -> 0, (N > 1) */
> + entry->funcs = NULL;
> + entry->refcount = 0;
> + debug_print_probes(entry);
> + return old;
> + } else {
> + int j = 0;
> + /* N -> M, (N > 1, M > 0) */
> + /* + 1 for NULL */
> + new = kzalloc((nr_probes - nr_del + 1)
> + * sizeof(void *), GFP_KERNEL);
> + if (new == NULL)
> + return ERR_PTR(-ENOMEM);
> + for (i = 0; old[i]; i++)
> + if ((probe && old[i] != probe))
> + new[j++] = old[i];
> + entry->refcount = nr_probes - nr_del;
> + entry->funcs = new;
> + }
> + debug_print_probes(entry);
> + return old;
> +}
> +
> +/*
> + * Get tracepoint if the tracepoint is present in the tracepoint hash table.
> + * Must be called with tracepoints_mutex held.
> + * Returns NULL if not present.
> + */
> +static struct tracepoint_entry *get_tracepoint(const char *name)
> +{
> + struct hlist_head *head;
> + struct hlist_node *node;
> + struct tracepoint_entry *e;
> + u32 hash = jhash(name, strlen(name), 0);
> +
> + head = &tracepoint_table[hash & ((1 << TRACEPOINT_HASH_BITS)-1)];
> + hlist_for_each_entry(e, node, head, hlist) {
> + if (!strcmp(name, e->name))
> + return e;
> + }
> + return NULL;
> +}
> +
> +/*
> + * Add the tracepoint to the tracepoint hash table. Must be called with
> + * tracepoints_mutex held.
> + */
> +static struct tracepoint_entry *add_tracepoint(const char *name)
> +{
> + struct hlist_head *head;
> + struct hlist_node *node;
> + struct tracepoint_entry *e;
> + size_t name_len = strlen(name) + 1;
> + u32 hash = jhash(name, name_len-1, 0);
> +
> + head = &tracepoint_table[hash & ((1 << TRACEPOINT_HASH_BITS)-1)];
> + hlist_for_each_entry(e, node, head, hlist) {
> + if (!strcmp(name, e->name)) {
> + printk(KERN_NOTICE
> + "tracepoint %s busy\n", name);
> + return ERR_PTR(-EBUSY); /* Already there */

-EEXIST

> + }
> + }
> + /*
> + * Using kmalloc here to allocate a variable length element. Could
> + * cause some memory fragmentation if overused.
> + */
> + e = kmalloc(sizeof(struct tracepoint_entry) + name_len, GFP_KERNEL);
> + if (!e)
> + return ERR_PTR(-ENOMEM);
> + memcpy(&e->name[0], name, name_len);
> + e->funcs = NULL;
> + e->refcount = 0;
> + e->rcu_pending = 0;
> + hlist_add_head(&e->hlist, head);
> + return e;
> +}
> +
> +/*
> + * Remove the tracepoint from the tracepoint hash table. Must be called with
> + * mutex_lock held.
> + */
> +static int remove_tracepoint(const char *name)
> +{
> + struct hlist_head *head;
> + struct hlist_node *node;
> + struct tracepoint_entry *e;
> + int found = 0;
> + size_t len = strlen(name) + 1;
> + u32 hash = jhash(name, len-1, 0);
> +
> + head = &tracepoint_table[hash & ((1 << TRACEPOINT_HASH_BITS)-1)];
> + hlist_for_each_entry(e, node, head, hlist) {
> + if (!strcmp(name, e->name)) {
> + found = 1;
> + break;
> + }
> + }
> + if (!found)
> + return -ENOENT;
> + if (e->refcount)
> + return -EBUSY;

ok, this really is busy.

> + hlist_del(&e->hlist);
> + /* Make sure the call_rcu has been executed */
> + if (e->rcu_pending)
> + rcu_barrier();
> + kfree(e);
> + return 0;
> +}
> +
> +/*
> + * Sets the probe callback corresponding to one tracepoint.
> + */
> +static void set_tracepoint(struct tracepoint_entry **entry,
> + struct tracepoint *elem, int active)
> +{
> + WARN_ON(strcmp((*entry)->name, elem->name) != 0);
> +
> + smp_wmb();
> + /*
> + * We also make sure that the new probe callbacks array is consistent
> + * before setting a pointer to it.
> + */
> + rcu_assign_pointer(elem->funcs, (*entry)->funcs);

rcu_assign_pointer() already does that wmb!?
Also, it's polite to reference the pairing site in the barrier comment.

> + elem->state = active;
> +}
> +
> +/*
> + * Disable a tracepoint and its probe callback.
> + * Note: only waiting for an RCU grace period after setting elem->call to the
> + * empty function ensures that the original callback is not used anymore. This
> + * is ensured by the preempt_disable around the call site.
> + */
> +static void disable_tracepoint(struct tracepoint *elem)
> +{
> + elem->state = 0;
> +}
> +
> +/**
> + * tracepoint_update_probe_range - Update a probe range
> + * @begin: beginning of the range
> + * @end: end of the range
> + *
> + * Updates the probe callback corresponding to a range of tracepoints.
> + */
> +void tracepoint_update_probe_range(struct tracepoint *begin,
> + struct tracepoint *end)
> +{
> + struct tracepoint *iter;
> + struct tracepoint_entry *mark_entry;
> +
> + mutex_lock(&tracepoints_mutex);
> + for (iter = begin; iter < end; iter++) {
> + mark_entry = get_tracepoint(iter->name);
> + if (mark_entry) {
> + set_tracepoint(&mark_entry, iter,
> + !!mark_entry->refcount);
> + } else {
> + disable_tracepoint(iter);
> + }
> + }
> + mutex_unlock(&tracepoints_mutex);
> +}
> +
> +/*
> + * Update probes, removing the faulty probes.
> + */
> +static void tracepoint_update_probes(void)
> +{
> + /* Core kernel tracepoints */
> + tracepoint_update_probe_range(__start___tracepoints,
> + __stop___tracepoints);
> + /* tracepoints in modules. */
> + module_update_tracepoints();
> +}
> +
> +/**
> + * tracepoint_probe_register - Connect a probe to a tracepoint
> + * @name: tracepoint name
> + * @probe: probe handler
> + *
> + * Returns 0 if ok, error value on error.
> + * The probe address must at least be aligned on the architecture pointer size.
> + */
> +int tracepoint_probe_register(const char *name, void *probe)
> +{
> + struct tracepoint_entry *entry;
> + int ret = 0;
> + void *old;
> +
> + mutex_lock(&tracepoints_mutex);
> + entry = get_tracepoint(name);
> + if (!entry) {
> + entry = add_tracepoint(name);
> + if (IS_ERR(entry)) {
> + ret = PTR_ERR(entry);
> + goto end;
> + }
> + }
> + /*
> + * If we detect that a call_rcu is pending for this tracepoint,
> + * make sure it's executed now.
> + */
> + if (entry->rcu_pending)
> + rcu_barrier();
> + old = tracepoint_entry_add_probe(entry, probe);
> + if (IS_ERR(old)) {
> + ret = PTR_ERR(old);
> + goto end;
> + }
> + mutex_unlock(&tracepoints_mutex);
> + tracepoint_update_probes(); /* may update entry */
> + mutex_lock(&tracepoints_mutex);
> + entry = get_tracepoint(name);
> + WARN_ON(!entry);
> + tracepoint_entry_free_old(entry, old);
> +end:
> + mutex_unlock(&tracepoints_mutex);
> + return ret;
> +}
> +EXPORT_SYMBOL_GPL(tracepoint_probe_register);
> +
> +/**
> + * tracepoint_probe_unregister - Disconnect a probe from a tracepoint
> + * @name: tracepoint name
> + * @probe: probe function pointer
> + *
> + * We do not need to call a synchronize_sched to make sure the probes have
> + * finished running before doing a module unload, because the module unload
> + * itself uses stop_machine(), which ensures that every preempt-disabled
> + * section has finished.
> + */
> +int tracepoint_probe_unregister(const char *name, void *probe)
> +{
> + struct tracepoint_entry *entry;
> + void *old;
> + int ret = -ENOENT;
> +
> + mutex_lock(&tracepoints_mutex);
> + entry = get_tracepoint(name);
> + if (!entry)
> + goto end;
> + if (entry->rcu_pending)
> + rcu_barrier();
> + old = tracepoint_entry_remove_probe(entry, probe);
> + mutex_unlock(&tracepoints_mutex);
> + tracepoint_update_probes(); /* may update entry */
> + mutex_lock(&tracepoints_mutex);
> + entry = get_tracepoint(name);
> + if (!entry)
> + goto end;
> + tracepoint_entry_free_old(entry, old);
> + remove_tracepoint(name); /* Ignore busy error message */
> + ret = 0;
> +end:
> + mutex_unlock(&tracepoints_mutex);
> + return ret;
> +}
> +EXPORT_SYMBOL_GPL(tracepoint_probe_unregister);
> +
> +/**
> + * tracepoint_get_iter_range - Get the next tracepoint in a range.
> + * @tracepoint: current tracepoint (in), next tracepoint (out)
> + * @begin: beginning of the range
> + * @end: end of the range
> + *
> + * Returns whether a next tracepoint has been found (1) or not (0).
> + * Will return the first tracepoint in the range if the input tracepoint is
> + * NULL.
> + */
> +int tracepoint_get_iter_range(struct tracepoint **tracepoint,
> + struct tracepoint *begin, struct tracepoint *end)
> +{
> + if (!*tracepoint && begin != end) {
> + *tracepoint = begin;
> + return 1;
> + }
> + if (*tracepoint >= begin && *tracepoint < end)
> + return 1;
> + return 0;
> +}
> +EXPORT_SYMBOL_GPL(tracepoint_get_iter_range);
> +
> +static void tracepoint_get_iter(struct tracepoint_iter *iter)
> +{
> + int found = 0;
> +
> + /* Core kernel tracepoints */
> + if (!iter->module) {
> + found = tracepoint_get_iter_range(&iter->tracepoint,
> + __start___tracepoints, __stop___tracepoints);
> + if (found)
> + goto end;
> + }
> + /* tracepoints in modules. */
> + found = module_get_iter_tracepoints(iter);
> +end:
> + if (!found)
> + tracepoint_iter_reset(iter);
> +}
> +
> +void tracepoint_iter_start(struct tracepoint_iter *iter)
> +{
> + tracepoint_get_iter(iter);
> +}
> +EXPORT_SYMBOL_GPL(tracepoint_iter_start);
> +
> +void tracepoint_iter_next(struct tracepoint_iter *iter)
> +{
> + iter->tracepoint++;
> + /*
> + * iter->tracepoint may be invalid because we blindly incremented it.
> + * Make sure it is valid by marshalling on the tracepoints, getting the
> + * tracepoints from following modules if necessary.
> + */
> + tracepoint_get_iter(iter);
> +}
> +EXPORT_SYMBOL_GPL(tracepoint_iter_next);
> +
> +void tracepoint_iter_stop(struct tracepoint_iter *iter)
> +{
> +}
> +EXPORT_SYMBOL_GPL(tracepoint_iter_stop);
> +
> +void tracepoint_iter_reset(struct tracepoint_iter *iter)
> +{
> + iter->module = NULL;
> + iter->tracepoint = NULL;
> +}
> +EXPORT_SYMBOL_GPL(tracepoint_iter_reset);
> Index: linux-2.6-lttng/kernel/module.c
> ===================================================================
> --- linux-2.6-lttng.orig/kernel/module.c 2008-07-09 10:55:46.000000000 -0400
> +++ linux-2.6-lttng/kernel/module.c 2008-07-09 10:55:58.000000000 -0400
> @@ -47,6 +47,7 @@
> #include <asm/sections.h>
> #include <linux/license.h>
> #include <asm/sections.h>
> +#include <linux/tracepoint.h>
>
> #if 0
> #define DEBUGP printk
> @@ -1824,6 +1825,8 @@ static struct module *load_module(void _
> #endif
> unsigned int markersindex;
> unsigned int markersstringsindex;
> + unsigned int tracepointsindex;
> + unsigned int tracepointsstringsindex;
> struct module *mod;
> long err = 0;
> void *percpu = NULL, *ptr = NULL; /* Stops spurious gcc warning */
> @@ -2110,6 +2113,9 @@ static struct module *load_module(void _
> markersindex = find_sec(hdr, sechdrs, secstrings, "__markers");
> markersstringsindex = find_sec(hdr, sechdrs, secstrings,
> "__markers_strings");
> + tracepointsindex = find_sec(hdr, sechdrs, secstrings, "__tracepoints");
> + tracepointsstringsindex = find_sec(hdr, sechdrs, secstrings,
> + "__tracepoints_strings");
>
> /* Now do relocations. */
> for (i = 1; i < hdr->e_shnum; i++) {
> @@ -2137,6 +2143,12 @@ static struct module *load_module(void _
> mod->num_markers =
> sechdrs[markersindex].sh_size / sizeof(*mod->markers);
> #endif
> +#ifdef CONFIG_TRACEPOINTS
> + mod->tracepoints = (void *)sechdrs[tracepointsindex].sh_addr;
> + mod->num_tracepoints =
> + sechdrs[tracepointsindex].sh_size / sizeof(*mod->tracepoints);
> +#endif
> +
>
> /* Find duplicate symbols */
> err = verify_export_symbols(mod);
> @@ -2155,11 +2167,16 @@ static struct module *load_module(void _
>
> add_kallsyms(mod, sechdrs, symindex, strindex, secstrings);
>
> + if (!mod->taints) {
> #ifdef CONFIG_MARKERS
> - if (!mod->taints)
> marker_update_probe_range(mod->markers,
> mod->markers + mod->num_markers);
> #endif
> +#ifdef CONFIG_TRACEPOINTS
> + tracepoint_update_probe_range(mod->tracepoints,
> + mod->tracepoints + mod->num_tracepoints);
> +#endif
> + }
> err = module_finalize(hdr, sechdrs, mod);
> if (err < 0)
> goto cleanup;
> @@ -2710,3 +2727,50 @@ void module_update_markers(void)
> mutex_unlock(&module_mutex);
> }
> #endif
> +
> +#ifdef CONFIG_TRACEPOINTS
> +void module_update_tracepoints(void)
> +{
> + struct module *mod;
> +
> + mutex_lock(&module_mutex);
> + list_for_each_entry(mod, &modules, list)
> + if (!mod->taints)
> + tracepoint_update_probe_range(mod->tracepoints,
> + mod->tracepoints + mod->num_tracepoints);
> + mutex_unlock(&module_mutex);
> +}
> +
> +/*
> + * Returns 0 if current not found.
> + * Returns 1 if current found.
> + */
> +int module_get_iter_tracepoints(struct tracepoint_iter *iter)
> +{
> + struct module *iter_mod;
> + int found = 0;
> +
> + mutex_lock(&module_mutex);
> + list_for_each_entry(iter_mod, &modules, list) {
> + if (!iter_mod->taints) {
> + /*
> + * Sorted module list
> + */
> + if (iter_mod < iter->module)
> + continue;
> + else if (iter_mod > iter->module)
> + iter->tracepoint = NULL;
> + found = tracepoint_get_iter_range(&iter->tracepoint,
> + iter_mod->tracepoints,
> + iter_mod->tracepoints
> + + iter_mod->num_tracepoints);
> + if (found) {
> + iter->module = iter_mod;
> + break;
> + }
> + }
> + }
> + mutex_unlock(&module_mutex);
> + return found;
> +}
> +#endif
> Index: linux-2.6-lttng/include/linux/module.h
> ===================================================================
> --- linux-2.6-lttng.orig/include/linux/module.h 2008-07-09 10:55:46.000000000 -0400
> +++ linux-2.6-lttng/include/linux/module.h 2008-07-09 10:57:22.000000000 -0400
> @@ -16,6 +16,7 @@
> #include <linux/kobject.h>
> #include <linux/moduleparam.h>
> #include <linux/marker.h>
> +#include <linux/tracepoint.h>
> #include <asm/local.h>
>
> #include <asm/module.h>
> @@ -331,6 +332,10 @@ struct module
> struct marker *markers;
> unsigned int num_markers;
> #endif
> +#ifdef CONFIG_TRACEPOINTS
> + struct tracepoint *tracepoints;
> + unsigned int num_tracepoints;
> +#endif
>
> #ifdef CONFIG_MODULE_UNLOAD
> /* What modules depend on me? */
> @@ -454,6 +459,9 @@ extern void print_modules(void);
>
> extern void module_update_markers(void);
>
> +extern void module_update_tracepoints(void);
> +extern int module_get_iter_tracepoints(struct tracepoint_iter *iter);
> +
> #else /* !CONFIG_MODULES... */
> #define EXPORT_SYMBOL(sym)
> #define EXPORT_SYMBOL_GPL(sym)
> @@ -558,6 +566,15 @@ static inline void module_update_markers
> {
> }
>
> +static inline void module_update_tracepoints(void)
> +{
> +}
> +
> +static inline int module_get_iter_tracepoints(struct tracepoint_iter *iter)
> +{
> + return 0;
> +}
> +
> #endif /* CONFIG_MODULES */
>
> struct device_driver;
>

2008-07-15 13:26:00

by Mathieu Desnoyers

[permalink] [raw]
Subject: Re: [patch 01/15] Kernel Tracepoints

* Peter Zijlstra ([email protected]) wrote:
> On Wed, 2008-07-09 at 10:59 -0400, Mathieu Desnoyers wrote:
> > plain text document attachment (tracepoints.patch)
> > Implementation of kernel tracepoints. Inspired from the Linux Kernel Markers.
> > Allows complete typing verification. No format string required. See the
> > tracepoint Documentation and Samples patches for usage examples.
>

Hi Peter,

Thanks for the review,

> I think the patch description (aka changelog, not to be confused with
> the below) could use a lot more attention. There are a lot of things
> going on in this code, none of which are mentioned.
>
> I often read changelogs when I try to understand a piece of code, this
> one is utterly unfulfilling.
>

Yes, given that I started from the marker code as a base, I did not
re-explain everything that was carried over unchanged. It's worth
giving more detail about this, though; I'll address it.

> Aside from that, I think the general picture looks good.
>
> I sprinkled some comments in the code below...
>

Thanks, let's look at them,

> > Changelog :
> > - Use #name ":" #proto as string to identify the tracepoint in the
> > tracepoint table. This will make sure not type mismatch happens due to
> > connexion of a probe with the wrong type to a tracepoint declared with
> > the same name in a different header.
> > - Add tracepoint_entry_free_old.
> >
> > Masami Hiramatsu <[email protected]> :
> > Tested on x86-64.
> >
> > Performance impact of a tracepoint : same as markers, except that it adds about
> > 70 bytes of instructions in an unlikely branch of each instrumented function
> > (the for loop, the stack setup and the function call). It currently adds a
> > memory read, a test and a conditional branch at the instrumentation site (in the
> > hot path). Immediate values will eventually change this into a load immediate,
> > test and branch, which removes the memory read which will make the i-cache
> > impact smaller (changing the memory read for a load immediate removes 3-4 bytes
> > per site on x86_32 (depending on mov prefixes), or 7-8 bytes on x86_64, it also
> > saves the d-cache hit).
> >
> > About the performance impact of tracepoints (which is comparable to markers),
> > even without immediate values optimizations, tests done by Hideo Aoki on ia64
> > show no regression. His test case was using hackbench on a kernel where
> > scheduler instrumentation (about 5 events in core scheduler code) was added.
>
> > Signed-off-by: Mathieu Desnoyers <[email protected]>
> > Acked-by: Masami Hiramatsu <[email protected]>
> > CC: 'Peter Zijlstra' <[email protected]>
> > CC: "Frank Ch. Eigler" <[email protected]>
> > CC: 'Ingo Molnar' <[email protected]>
> > CC: 'Hideo AOKI' <[email protected]>
> > CC: Takashi Nishiie <[email protected]>
> > CC: 'Steven Rostedt' <[email protected]>
> > CC: Alexander Viro <[email protected]>
> > CC: Eduard - Gabriel Munteanu <[email protected]>
> > ---
> > include/asm-generic/vmlinux.lds.h | 6
> > include/linux/module.h | 17 +
> > include/linux/tracepoint.h | 123 +++++++++
> > init/Kconfig | 7
> > kernel/Makefile | 1
> > kernel/module.c | 66 +++++
> > kernel/tracepoint.c | 474 ++++++++++++++++++++++++++++++++++++++
> > 7 files changed, 692 insertions(+), 2 deletions(-)
> >
> > Index: linux-2.6-lttng/init/Kconfig
> > ===================================================================
> > --- linux-2.6-lttng.orig/init/Kconfig 2008-07-09 10:55:46.000000000 -0400
> > +++ linux-2.6-lttng/init/Kconfig 2008-07-09 10:55:58.000000000 -0400
> > @@ -782,6 +782,13 @@ config PROFILING
> > Say Y here to enable the extended profiling support mechanisms used
> > by profilers such as OProfile.
> >
> > +config TRACEPOINTS
> > + bool "Activate tracepoints"
> > + default y
> > + help
> > + Place an empty function call at each tracepoint site. Can be
> > + dynamically changed for a probe function.
> > +
> > config MARKERS
> > bool "Activate markers"
> > help
> > Index: linux-2.6-lttng/kernel/Makefile
> > ===================================================================
> > --- linux-2.6-lttng.orig/kernel/Makefile 2008-07-09 10:55:46.000000000 -0400
> > +++ linux-2.6-lttng/kernel/Makefile 2008-07-09 10:55:58.000000000 -0400
> > @@ -77,6 +77,7 @@ obj-$(CONFIG_SYSCTL) += utsname_sysctl.o
> > obj-$(CONFIG_TASK_DELAY_ACCT) += delayacct.o
> > obj-$(CONFIG_TASKSTATS) += taskstats.o tsacct.o
> > obj-$(CONFIG_MARKERS) += marker.o
> > +obj-$(CONFIG_TRACEPOINTS) += tracepoint.o
> > obj-$(CONFIG_LATENCYTOP) += latencytop.o
> > obj-$(CONFIG_FTRACE) += trace/
> > obj-$(CONFIG_TRACING) += trace/
> > Index: linux-2.6-lttng/include/linux/tracepoint.h
> > ===================================================================
> > --- /dev/null 1970-01-01 00:00:00.000000000 +0000
> > +++ linux-2.6-lttng/include/linux/tracepoint.h 2008-07-09 10:55:58.000000000 -0400
> > @@ -0,0 +1,123 @@
> > +#ifndef _LINUX_TRACEPOINT_H
> > +#define _LINUX_TRACEPOINT_H
> > +
> > +/*
> > + * Kernel Tracepoint API.
> > + *
> > + * See Documentation/tracepoint.txt.
> > + *
> > + * (C) Copyright 2008 Mathieu Desnoyers <[email protected]>
> > + *
> > + * Heavily inspired from the Linux Kernel Markers.
> > + *
> > + * This file is released under the GPLv2.
> > + * See the file COPYING for more details.
> > + */
> > +
> > +#include <linux/types.h>
> > +
> > +struct module;
> > +struct tracepoint;
> > +
> > +struct tracepoint {
> > + const char *name; /* Tracepoint name */
> > + int state; /* State. */
> > + void **funcs;
> > +} __attribute__((aligned(8)));
> > +
> > +
> > +#define TPPROTO(args...) args
> > +#define TPARGS(args...) args
> > +
> > +#ifdef CONFIG_TRACEPOINTS
> > +
> > +#define __DO_TRACE(tp, proto, args) \
> > + do { \
> > + int i; \
> > + void **funcs; \
> > + preempt_disable(); \
> > + funcs = (tp)->funcs; \
> > + smp_read_barrier_depends(); \
> > + if (funcs) { \
> > + for (i = 0; funcs[i]; i++) { \
>
> can't you get rid of 'i' and write:
>
> void **func;
>
> preempt_disable();
> func = (tp)->funcs;
> smp_read_barrier_depends();
> for (; func; func++)
> ((void (*)(proto))func)(args);
> preempt_enable();
>

Yes, I thought there would be an optimization to do here; I'll use your
proposal. This code snippet is especially important since it will
generate instructions near every tracepoint site. Saving a few bytes
becomes important.

Given that (tp)->funcs references an array of function pointers and that
it can be NULL, the if (funcs) test must still be there and we must use

#define __DO_TRACE(tp, proto, args) \
do { \
void *func; \
\
preempt_disable(); \
if ((tp)->funcs) { \
func = rcu_dereference((tp)->funcs); \
for (; func; func++) { \
((void(*)(proto))(func))(args); \
} \
} \
preempt_enable(); \
} while (0)


The resulting assembly is a bit more dense than my previous
implementation, which is good :

On x86_64 :

820: bf 01 00 00 00 mov $0x1,%edi
825: e8 00 00 00 00 callq 82a <thread_return+0x136>
82a: 48 8b 05 00 00 00 00 mov 0x0(%rip),%rax # 831 <thread_retur
n+0x13d>
831: 48 85 c0 test %rax,%rax
834: 74 22 je 858 <thread_return+0x164>
836: 48 8b 18 mov (%rax),%rbx
839: 48 85 db test %rbx,%rbx
83c: 74 1a je 858 <thread_return+0x164>
83e: 66 90 xchg %ax,%ax
840: 48 8b 95 68 ff ff ff mov -0x98(%rbp),%rdx
847: 48 8b b5 60 ff ff ff mov -0xa0(%rbp),%rsi
84e: 4c 89 e7 mov %r12,%rdi
851: ff d3 callq *%rbx
853: 48 ff c3 inc %rbx
856: 75 e8 jne 840 <thread_return+0x14c>
858: bf 01 00 00 00 mov $0x1,%edi
85d: e8 00 00 00 00 callq 862 <thread_return+0x16e>
862:

For 66 bytes (a 2-argument tracepoint). Note that these bytes are
outside of the critical path in an unlikely branch. The branch test
bytes have been discussed thoroughly in the "Immediate Values" work,
which can be seen as an optimisation to be integrated later.

The current branch code added to the critical path is, on x86_64 :

5ff: b8 00 00 00 00 mov $0x0,%eax
604: 85 c0 test %eax,%eax
606: 0f 85 14 02 00 00 jne 820 <thread_return+0x12c>

The immediate values can make this smaller by using a 2-byte movb
instruction to populate eax, which would save 3 bytes of cache-hot
cachelines.
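
As a rough sketch (hand-assembled for illustration, not actual compiler
output), the immediate-value variant of that branch could look like:

 5ff:	b0 00                	mov    $0x0,%al
 601:	84 c0                	test   %al,%al
 603:	0f 85 14 02 00 00    	jne    820 <thread_return+0x12c>

The 5-byte mov $0x0,%eax becomes a 2-byte mov $0x0,%al, whose immediate
byte can then be patched at run time to arm or disarm the site.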

> Also, why is the preempt_disable needed?
>

Addition and removal of tracepoints is synchronized by RCU using the
scheduler (and preempt_disable) as guarantees to find a quiescent state
(this is really RCU "classic"). The update side uses rcu_barrier_sched()
with call_rcu_sched() and the read/execute side uses
"preempt_disable()/preempt_enable()".

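A minimal sketch of that pairing (illustrative code, not taken from the
patch; the function name is made up for the example):

static void example_do_trace(struct tracepoint *tp)
{
	void **it_func;

	preempt_disable();			/* RCU-sched read-side begins */
	it_func = rcu_dereference(tp->funcs);	/* probe array, may be NULL */
	if (it_func)
		for (; *it_func; it_func++)
			((void (*)(void))(*it_func))();
	preempt_enable();			/* quiescent state possible again */
}

The update side pairs with this: call_rcu_sched()/rcu_barrier_sched()
only complete once every such preempt-disabled section has exited.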

> > + ((void(*)(proto))(funcs[i]))(args); \
> > + } \
> > + } \
> > + preempt_enable(); \
> > + } while (0)
> > +
> > +/*
> > + * Make sure the alignment of the structure in the __tracepoints section will
> > + * not add unwanted padding between the beginning of the section and the
> > + * structure. Force alignment to the same alignment as the section start.
> > + */
> > +#define DEFINE_TRACE(name, proto, args) \
> > + static inline void trace_##name(proto) \
> > + { \
> > + static const char __tpstrtab_##name[] \
> > + __attribute__((section("__tracepoints_strings"))) \
> > + = #name ":" #proto; \
> > + static struct tracepoint __tracepoint_##name \
> > + __attribute__((section("__tracepoints"), aligned(8))) = \
> > + { __tpstrtab_##name, 0, NULL }; \
> > + if (unlikely(__tracepoint_##name.state)) \
> > + __DO_TRACE(&__tracepoint_##name, \
> > + TPPROTO(proto), TPARGS(args)); \
> > + } \
> > + static inline int register_trace_##name(void (*probe)(proto)) \
> > + { \
> > + return tracepoint_probe_register(#name ":" #proto, \
> > + (void *)probe); \
> > + } \
> > + static inline void unregister_trace_##name(void (*probe)(proto))\
> > + { \
> > + tracepoint_probe_unregister(#name ":" #proto, \
> > + (void *)probe); \
> > + }
> > +
> > +extern void tracepoint_update_probe_range(struct tracepoint *begin,
> > + struct tracepoint *end);
> > +
> > +#else /* !CONFIG_TRACEPOINTS */
> > +#define DEFINE_TRACE(name, proto, args) \
> > + static inline void _do_trace_##name(struct tracepoint *tp, proto) \
> > + { } \
> > + static inline void trace_##name(proto) \
> > + { } \
> > + static inline int register_trace_##name(void (*probe)(proto)) \
> > + { \
> > + return -ENOSYS; \
> > + } \
> > + static inline void unregister_trace_##name(void (*probe)(proto))\
> > + { }
> > +
> > +static inline void tracepoint_update_probe_range(struct tracepoint *begin,
> > + struct tracepoint *end)
> > +{ }
> > +#endif /* CONFIG_TRACEPOINTS */
> > +
> > +/*
> > + * Connect a probe to a tracepoint.
> > + * Internal API, should not be used directly.
> > + */
> > +extern int tracepoint_probe_register(const char *name, void *probe);
> > +
> > +/*
> > + * Disconnect a probe from a tracepoint.
> > + * Internal API, should not be used directly.
> > + */
> > +extern int tracepoint_probe_unregister(const char *name, void *probe);
> > +
> > +struct tracepoint_iter {
> > + struct module *module;
> > + struct tracepoint *tracepoint;
> > +};
> > +
> > +extern void tracepoint_iter_start(struct tracepoint_iter *iter);
> > +extern void tracepoint_iter_next(struct tracepoint_iter *iter);
> > +extern void tracepoint_iter_stop(struct tracepoint_iter *iter);
> > +extern void tracepoint_iter_reset(struct tracepoint_iter *iter);
> > +extern int tracepoint_get_iter_range(struct tracepoint **tracepoint,
> > + struct tracepoint *begin, struct tracepoint *end);
> > +
> > +#endif
> > Index: linux-2.6-lttng/include/asm-generic/vmlinux.lds.h
> > ===================================================================
> > --- linux-2.6-lttng.orig/include/asm-generic/vmlinux.lds.h 2008-07-09 10:55:46.000000000 -0400
> > +++ linux-2.6-lttng/include/asm-generic/vmlinux.lds.h 2008-07-09 10:55:58.000000000 -0400
> > @@ -52,7 +52,10 @@
> > . = ALIGN(8); \
> > VMLINUX_SYMBOL(__start___markers) = .; \
> > *(__markers) \
> > - VMLINUX_SYMBOL(__stop___markers) = .;
> > + VMLINUX_SYMBOL(__stop___markers) = .; \
> > + VMLINUX_SYMBOL(__start___tracepoints) = .; \
> > + *(__tracepoints) \
> > + VMLINUX_SYMBOL(__stop___tracepoints) = .;
> >
> > #define RO_DATA(align) \
> > . = ALIGN((align)); \
> > @@ -61,6 +64,7 @@
> > *(.rodata) *(.rodata.*) \
> > *(__vermagic) /* Kernel version magic */ \
> > *(__markers_strings) /* Markers: strings */ \
> > + *(__tracepoints_strings)/* Tracepoints: strings */ \
> > } \
> > \
> > .rodata1 : AT(ADDR(.rodata1) - LOAD_OFFSET) { \
> > Index: linux-2.6-lttng/kernel/tracepoint.c
> > ===================================================================
> > --- /dev/null 1970-01-01 00:00:00.000000000 +0000
> > +++ linux-2.6-lttng/kernel/tracepoint.c 2008-07-09 10:55:58.000000000 -0400
> > @@ -0,0 +1,474 @@
> > +/*
> > + * Copyright (C) 2008 Mathieu Desnoyers
> > + *
> > + * This program is free software; you can redistribute it and/or modify
> > + * it under the terms of the GNU General Public License as published by
> > + * the Free Software Foundation; either version 2 of the License, or
> > + * (at your option) any later version.
> > + *
> > + * This program is distributed in the hope that it will be useful,
> > + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> > + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
> > + * GNU General Public License for more details.
> > + *
> > + * You should have received a copy of the GNU General Public License
> > + * along with this program; if not, write to the Free Software
> > + * Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
> > + */
> > +#include <linux/module.h>
> > +#include <linux/mutex.h>
> > +#include <linux/types.h>
> > +#include <linux/jhash.h>
> > +#include <linux/list.h>
> > +#include <linux/rcupdate.h>
> > +#include <linux/tracepoint.h>
> > +#include <linux/err.h>
> > +#include <linux/slab.h>
> > +
> > +extern struct tracepoint __start___tracepoints[];
> > +extern struct tracepoint __stop___tracepoints[];
> > +
> > +/* Set to 1 to enable tracepoint debug output */
> > +static const int tracepoint_debug;
> > +
> > +/*
> > + * tracepoints_mutex nests inside module_mutex. Tracepoints mutex protects the
> > + * builtin and module tracepoints and the hash table.
> > + */
> > +static DEFINE_MUTEX(tracepoints_mutex);
> > +
> > +/*
> > + * Tracepoint hash table, containing the active tracepoints.
> > + * Protected by tracepoints_mutex.
> > + */
> > +#define TRACEPOINT_HASH_BITS 6
> > +#define TRACEPOINT_TABLE_SIZE (1 << TRACEPOINT_HASH_BITS)
> > +
> > +/*
> > + * Note about RCU :
> > + * It is used to delay the free of multiple probes array until a quiescent
> > + * state is reached.
> > + * Tracepoint entries modifications are protected by the tracepoints_mutex.
> > + */
> > +struct tracepoint_entry {
> > + struct hlist_node hlist;
> > + void **funcs;
> > + int refcount; /* Number of times armed. 0 if disarmed. */
> > + struct rcu_head rcu;
> > + void *oldptr;
> > + unsigned char rcu_pending:1;
> > + char name[0];
> > +};
> > +
> > +static struct hlist_head tracepoint_table[TRACEPOINT_TABLE_SIZE];
> > +
> > +static void free_old_closure(struct rcu_head *head)
> > +{
> > + struct tracepoint_entry *entry = container_of(head,
> > + struct tracepoint_entry, rcu);
> > + kfree(entry->oldptr);
> > + /* Make sure we free the data before setting the pending flag to 0 */
> > + smp_wmb();
> > + entry->rcu_pending = 0;
> > +}
> > +
> > +static void tracepoint_entry_free_old(struct tracepoint_entry *entry, void *old)
> > +{
> > + if (!old)
> > + return;
> > + entry->oldptr = old;
> > + entry->rcu_pending = 1;
> > + /* write rcu_pending before calling the RCU callback */
> > + smp_wmb();
> > +#ifdef CONFIG_PREEMPT_RCU
> > + synchronize_sched(); /* Until we have the call_rcu_sched() */
> > +#endif
>
> Does this have something to do with the preempt_disable above?
>

Yes, it does. We make sure the previous array containing probes, which
has been scheduled for deletion by the rcu callback, is indeed freed
before we proceed to the next update. It therefore limits the rate of
modification of a single tracepoint to one update per RCU period. The
objective here is to permit fast batch add/removal of probes on
_different_ tracepoints.

This use of "synchronize_sched()" can be changed to call_rcu_sched() in
linux-next; I'll fix this.
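
For reference, a hedged sketch of what that change could look like once
call_rcu_sched() is available (an assumption about the future patch, not
the code as posted):

static void tracepoint_entry_free_old(struct tracepoint_entry *entry, void *old)
{
	if (!old)
		return;
	entry->oldptr = old;
	entry->rcu_pending = 1;
	/* write rcu_pending before queueing the RCU callback */
	smp_wmb();
	call_rcu_sched(&entry->rcu, free_old_closure);
}

The CONFIG_PREEMPT_RCU synchronize_sched() workaround disappears,
because call_rcu_sched() defers the callback until all preempt-disabled
sections have completed on their own.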


> > + call_rcu(&entry->rcu, free_old_closure);
> > +}
> > +
> > +static void debug_print_probes(struct tracepoint_entry *entry)
> > +{
> > + int i;
> > +
> > + if (!tracepoint_debug)
> > + return;
> > +
> > + for (i = 0; entry->funcs[i]; i++)
> > + printk(KERN_DEBUG "Probe %d : %p\n", i, entry->funcs[i]);
> > +}
> > +
> > +static void *
> > +tracepoint_entry_add_probe(struct tracepoint_entry *entry, void *probe)
> > +{
> > + int nr_probes = 0;
> > + void **old, **new;
> > +
> > + WARN_ON(!probe);
> > +
> > + debug_print_probes(entry);
> > + old = entry->funcs;
> > + if (old) {
> > + /* (N -> N+1), (N != 0, 1) probes */
> > + for (nr_probes = 0; old[nr_probes]; nr_probes++)
> > + if (old[nr_probes] == probe)
> > + return ERR_PTR(-EBUSY);
>
> -EEXIST ?
>

good point.

> > + }
> > + /* + 2 : one for new probe, one for NULL func */
> > + new = kzalloc((nr_probes + 2) * sizeof(void *), GFP_KERNEL);
> > + if (new == NULL)
> > + return ERR_PTR(-ENOMEM);
> > + if (old)
> > + memcpy(new, old, nr_probes * sizeof(void *));
> > + new[nr_probes] = probe;
> > + entry->refcount = nr_probes + 1;
> > + entry->funcs = new;
> > + debug_print_probes(entry);
> > + return old;
> > +}
> > +
> > +static void *
> > +tracepoint_entry_remove_probe(struct tracepoint_entry *entry, void *probe)
> > +{
> > + int nr_probes = 0, nr_del = 0, i;
> > + void **old, **new;
> > +
> > + old = entry->funcs;
> > +
> > + debug_print_probes(entry);
> > + /* (N -> M), (N > 1, M >= 0) probes */
> > + for (nr_probes = 0; old[nr_probes]; nr_probes++) {
> > + if ((!probe || old[nr_probes] == probe))
> > + nr_del++;
> > + }
> > +
> > + if (nr_probes - nr_del == 0) {
> > + /* N -> 0, (N > 1) */
> > + entry->funcs = NULL;
> > + entry->refcount = 0;
> > + debug_print_probes(entry);
> > + return old;
> > + } else {
> > + int j = 0;
> > + /* N -> M, (N > 1, M > 0) */
> > + /* + 1 for NULL */
> > + new = kzalloc((nr_probes - nr_del + 1)
> > + * sizeof(void *), GFP_KERNEL);
> > + if (new == NULL)
> > + return ERR_PTR(-ENOMEM);
> > + for (i = 0; old[i]; i++)
> > + if ((probe && old[i] != probe))
> > + new[j++] = old[i];
> > + entry->refcount = nr_probes - nr_del;
> > + entry->funcs = new;
> > + }
> > + debug_print_probes(entry);
> > + return old;
> > +}
> > +
> > +/*
> > + * Get tracepoint if the tracepoint is present in the tracepoint hash table.
> > + * Must be called with tracepoints_mutex held.
> > + * Returns NULL if not present.
> > + */
> > +static struct tracepoint_entry *get_tracepoint(const char *name)
> > +{
> > + struct hlist_head *head;
> > + struct hlist_node *node;
> > + struct tracepoint_entry *e;
> > + u32 hash = jhash(name, strlen(name), 0);
> > +
> > + head = &tracepoint_table[hash & ((1 << TRACEPOINT_HASH_BITS)-1)];
> > + hlist_for_each_entry(e, node, head, hlist) {
> > + if (!strcmp(name, e->name))
> > + return e;
> > + }
> > + return NULL;
> > +}
> > +
> > +/*
> > + * Add the tracepoint to the tracepoint hash table. Must be called with
> > + * tracepoints_mutex held.
> > + */
> > +static struct tracepoint_entry *add_tracepoint(const char *name)
> > +{
> > + struct hlist_head *head;
> > + struct hlist_node *node;
> > + struct tracepoint_entry *e;
> > + size_t name_len = strlen(name) + 1;
> > + u32 hash = jhash(name, name_len-1, 0);
> > +
> > + head = &tracepoint_table[hash & ((1 << TRACEPOINT_HASH_BITS)-1)];
> > + hlist_for_each_entry(e, node, head, hlist) {
> > + if (!strcmp(name, e->name)) {
> > + printk(KERN_NOTICE
> > + "tracepoint %s busy\n", name);
> > + return ERR_PTR(-EBUSY); /* Already there */
>
> -EEXIST
>

Yes.

> > + }
> > + }
> > + /*
> > + * Using kmalloc here to allocate a variable length element. Could
> > + * cause some memory fragmentation if overused.
> > + */
> > + e = kmalloc(sizeof(struct tracepoint_entry) + name_len, GFP_KERNEL);
> > + if (!e)
> > + return ERR_PTR(-ENOMEM);
> > + memcpy(&e->name[0], name, name_len);
> > + e->funcs = NULL;
> > + e->refcount = 0;
> > + e->rcu_pending = 0;
> > + hlist_add_head(&e->hlist, head);
> > + return e;
> > +}
> > +
> > +/*
> > + * Remove the tracepoint from the tracepoint hash table. Must be called with
> > + * mutex_lock held.
> > + */
> > +static int remove_tracepoint(const char *name)
> > +{
> > + struct hlist_head *head;
> > + struct hlist_node *node;
> > + struct tracepoint_entry *e;
> > + int found = 0;
> > + size_t len = strlen(name) + 1;
> > + u32 hash = jhash(name, len-1, 0);
> > +
> > + head = &tracepoint_table[hash & ((1 << TRACEPOINT_HASH_BITS)-1)];
> > + hlist_for_each_entry(e, node, head, hlist) {
> > + if (!strcmp(name, e->name)) {
> > + found = 1;
> > + break;
> > + }
> > + }
> > + if (!found)
> > + return -ENOENT;
> > + if (e->refcount)
> > + return -EBUSY;
>
> ok, this really is busy.
>
> > + hlist_del(&e->hlist);
> > + /* Make sure the call_rcu has been executed */
> > + if (e->rcu_pending)
> > + rcu_barrier();
> > + kfree(e);
> > + return 0;
> > +}
> > +
> > +/*
> > + * Sets the probe callback corresponding to one tracepoint.
> > + */
> > +static void set_tracepoint(struct tracepoint_entry **entry,
> > + struct tracepoint *elem, int active)
> > +{
> > + WARN_ON(strcmp((*entry)->name, elem->name) != 0);
> > +
> > + smp_wmb();
> > + /*
> > + * We also make sure that the new probe callbacks array is consistent
> > + * before setting a pointer to it.
> > + */
> > + rcu_assign_pointer(elem->funcs, (*entry)->funcs);
>
> rcu_assign_pointer() already does that wmb !?
> Also, its polite to reference the pairing site in the barrier comment.
>

Good point. I'll then remove the wmb, and change the
smp_read_barrier_depends() to an rcu_dereference() in __DO_TRACE. It will
make things clearer.

The comment becomes :

/*
* rcu_assign_pointer has a smp_wmb() which makes sure that the new
* probe callbacks array is consistent before setting a pointer to it.
* This array is referenced by __DO_TRACE from
 * include/linux/tracepoint.h. A matching rcu_dereference() is used.
*/
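
With the redundant smp_wmb() dropped, set_tracepoint() would then read
(a sketch of the described change, not the code as posted):

static void set_tracepoint(struct tracepoint_entry **entry,
	struct tracepoint *elem, int active)
{
	WARN_ON(strcmp((*entry)->name, elem->name) != 0);
	/*
	 * rcu_assign_pointer has a smp_wmb() which makes sure that the new
	 * probe callbacks array is consistent before setting a pointer to it.
	 * This array is referenced by __DO_TRACE from
	 * include/linux/tracepoint.h. A matching rcu_dereference() is used.
	 */
	rcu_assign_pointer(elem->funcs, (*entry)->funcs);
	elem->state = active;
}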

I'll release a new version including those changes shortly,

Thanks,

Mathieu


--
Mathieu Desnoyers
OpenPGP key fingerprint: 8CD5 52C3 8E3C 4140 715F BA06 3F25 A8FE 3BAE 9A68

2008-07-15 13:59:20

by Peter Zijlstra

[permalink] [raw]
Subject: Re: [patch 01/15] Kernel Tracepoints

On Tue, 2008-07-15 at 09:25 -0400, Mathieu Desnoyers wrote:
> * Peter Zijlstra ([email protected]) wrote:
> > On Wed, 2008-07-09 at 10:59 -0400, Mathieu Desnoyers wrote:

> > > +#define __DO_TRACE(tp, proto, args) \
> > > + do { \
> > > + int i; \
> > > + void **funcs; \
> > > + preempt_disable(); \
> > > + funcs = (tp)->funcs; \
> > > + smp_read_barrier_depends(); \
> > > + if (funcs) { \
> > > + for (i = 0; funcs[i]; i++) { \
> >
> > can't you get rid of 'i' and write:
> >
> > void **func;
> >
> > preempt_disable();
> > func = (tp)->funcs;
> > smp_read_barrier_depends();
> > for (; func; func++)
> > ((void (*)(proto))func)(args);
> > preempt_enable();
> >
>
> Yes, I thought there would be an optimization to do here; I'll use your
> proposal. This code snippet is especially important since it will
> generate instructions near every tracepoint site. Saving a few bytes
> becomes important.
>
> Given that (tp)->funcs references an array of function pointers and that
> it can be NULL, the if (funcs) test must still be there and we must use
>
> #define __DO_TRACE(tp, proto, args) \
> do { \
> void *func; \
> \
> preempt_disable(); \
> if ((tp)->funcs) { \
> func = rcu_dereference((tp)->funcs); \
> for (; func; func++) { \
> ((void(*)(proto))(func))(args); \
> } \
> } \
> preempt_enable(); \
> } while (0)
>
>
> The resulting assembly is a bit more dense than my previous
> implementation, which is good :

My version also has that if ((tp)->funcs), but it's hidden in the
for (; func; func++) loop. The only thing your version does is an extra
test of tp->funcs but without the read-depends barrier - not sure if that is
ok.


2008-07-15 14:03:14

by Peter Zijlstra

[permalink] [raw]
Subject: Re: [patch 01/15] Kernel Tracepoints

On Tue, 2008-07-15 at 09:25 -0400, Mathieu Desnoyers wrote:
> * Peter Zijlstra ([email protected]) wrote:
> > On Wed, 2008-07-09 at 10:59 -0400, Mathieu Desnoyers wrote:

> > > +#define __DO_TRACE(tp, proto, args) \
> > > + do { \
> > > + int i; \
> > > + void **funcs; \
> > > + preempt_disable(); \
> > > + funcs = (tp)->funcs; \
> > > + smp_read_barrier_depends(); \
> > > + if (funcs) { \
> > > + for (i = 0; funcs[i]; i++) { \
> >
> > Also, why is the preempt_disable needed?
> >
>
> Addition and removal of tracepoints is synchronized by RCU using the
> scheduler (and preempt_disable) as guarantees to find a quiescent state
> (this is really RCU "classic"). The update side uses rcu_barrier_sched()
> with call_rcu_sched() and the read/execute side uses
> "preempt_disable()/preempt_enable()".

> > > +static void tracepoint_entry_free_old(struct tracepoint_entry *entry, void *old)
> > > +{
> > > + if (!old)
> > > + return;
> > > + entry->oldptr = old;
> > > + entry->rcu_pending = 1;
> > > + /* write rcu_pending before calling the RCU callback */
> > > + smp_wmb();
> > > +#ifdef CONFIG_PREEMPT_RCU
> > > + synchronize_sched(); /* Until we have the call_rcu_sched() */
> > > +#endif
> >
> > Does this have something to do with the preempt_disable above?
> >
>
> Yes, it does. We make sure the previous array containing probes, which
> has been scheduled for deletion by the rcu callback, is indeed freed
> before we proceed to the next update. It therefore limits the rate of
> modification of a single tracepoint to one update per RCU period. The
> objective here is to permit fast batch add/removal of probes on
> _different_ tracepoints.
>
> This use of "synchronize_sched()" can be changed for call_rcu_sched() in
> linux-next, I'll fix this.

Right, I thought as much; it's just that the raw preempt_disable()
without comments leaves one wondering if there is anything else going
on.

Would it make sense to add:

rcu_read_sched_lock()
rcu_read_sched_unlock()

to match:

call_rcu_sched()
rcu_barrier_sched()
synchronize_sched()

?

2008-07-15 14:32:22

by Mathieu Desnoyers

[permalink] [raw]
Subject: Re: [patch 01/15] Kernel Tracepoints

* Peter Zijlstra ([email protected]) wrote:
> On Tue, 2008-07-15 at 09:25 -0400, Mathieu Desnoyers wrote:
> > * Peter Zijlstra ([email protected]) wrote:
> > > On Wed, 2008-07-09 at 10:59 -0400, Mathieu Desnoyers wrote:
>
> > > > +#define __DO_TRACE(tp, proto, args) \
> > > > + do { \
> > > > + int i; \
> > > > + void **funcs; \
> > > > + preempt_disable(); \
> > > > + funcs = (tp)->funcs; \
> > > > + smp_read_barrier_depends(); \
> > > > + if (funcs) { \
> > > > + for (i = 0; funcs[i]; i++) { \
> > >
> > > can't you get rid of 'i' and write:
> > >
> > > void **func;
> > >
> > > preempt_disable();
> > > func = (tp)->funcs;
> > > smp_read_barrier_depends();
> > > for (; func; func++)
> > > ((void (*)(proto))func)(args);
> > > preempt_enable();
> > >
> >
> > Yes, I thought there would be an optimization to do here; I'll use your
> > proposal. This code snippet is especially important since it will
> > generate instructions near every tracepoint site. Saving a few bytes
> > becomes important.
> >
> > Given that (tp)->funcs references an array of function pointers and that
> > it can be NULL, the if (funcs) test must still be there and we must use
> >
> > #define __DO_TRACE(tp, proto, args) \
> > do { \
> > void *func; \
> > \
> > preempt_disable(); \
> > if ((tp)->funcs) { \
> > func = rcu_dereference((tp)->funcs); \
> > for (; func; func++) { \
> > ((void(*)(proto))(func))(args); \
> > } \
> > } \
> > preempt_enable(); \
> > } while (0)
> >
> >
> > The resulting assembly is a bit more dense than my previous
> > implementation, which is good :
>
> My version also has that if ((tp)->funcs), but it's hidden in the
> for (; func; func++) loop. The only thing your version does is an extra
> test of tp->funcs but without the read-depends barrier - not sure if that is
> ok.
>

Hrm, you are right, the implementation I just proposed is bogus. (but so
was yours) ;)

func is an iterator on the funcs array. My typing of func is thus wrong,
it should be void **. Otherwise I'm just incrementing the function
address which is plain wrong.

The read barrier is included in rcu_dereference() now. But given that we
have to take a pointer to the array as an iterator, we would have to
rcu_dereference() our iterator multiple times and then have many read
barrier depends, which we don't need. This is why I would go back to a
smp_read_barrier_depends().

Also, I use a NULL entry at the end of the funcs array as an end of
array identifier. However, I cannot use this in the for loop both as a
check for NULL array and check for NULL array element. This is why a if
() test is needed in addition to the for loop test. (this is actually
what is wrong in the implementation you proposed : you treat func both
as a pointer to the function pointer array and as a function pointer)

Something like this seems better :

#define __DO_TRACE(tp, proto, args) \
do { \
void **it_func; \
\
preempt_disable(); \
it_func = (tp)->funcs; \
if (it_func) { \
smp_read_barrier_depends(); \
for (; *it_func; it_func++) \
((void(*)(proto))(*it_func))(args); \
} \
preempt_enable(); \
} while (0)

What do you think ?

Mathieu

>
>

--
Mathieu Desnoyers
OpenPGP key fingerprint: 8CD5 52C3 8E3C 4140 715F BA06 3F25 A8FE 3BAE 9A68

2008-07-15 14:41:59

by Peter Zijlstra

[permalink] [raw]
Subject: Re: [patch 01/15] Kernel Tracepoints

On Tue, 2008-07-15 at 10:27 -0400, Mathieu Desnoyers wrote:
> * Peter Zijlstra ([email protected]) wrote:
> > On Tue, 2008-07-15 at 09:25 -0400, Mathieu Desnoyers wrote:
> > > * Peter Zijlstra ([email protected]) wrote:
> > > > On Wed, 2008-07-09 at 10:59 -0400, Mathieu Desnoyers wrote:
> >
> > > > > +#define __DO_TRACE(tp, proto, args) \
> > > > > + do { \
> > > > > + int i; \
> > > > > + void **funcs; \
> > > > > + preempt_disable(); \
> > > > > + funcs = (tp)->funcs; \
> > > > > + smp_read_barrier_depends(); \
> > > > > + if (funcs) { \
> > > > > + for (i = 0; funcs[i]; i++) { \
> > > >
> > > > can't you get rid of 'i' and write:
> > > >
> > > > void **func;
> > > >
> > > > preempt_disable();
> > > > func = (tp)->funcs;
> > > > smp_read_barrier_depends();
> > > > for (; func; func++)
> > > > ((void (*)(proto))func)(args);
> > > > preempt_enable();
> > > >
> > >
> > > Yes, I thought there would be an optimization to do here; I'll use your
> > > proposal. This code snippet is especially important since it will
> > > generate instructions near every tracepoint site. Saving a few bytes
> > > becomes important.
> > >
> > > Given that (tp)->funcs references an array of function pointers and that
> > > it can be NULL, the if (funcs) test must still be there and we must use
> > >
> > > #define __DO_TRACE(tp, proto, args) \
> > > do { \
> > > void *func; \
> > > \
> > > preempt_disable(); \
> > > if ((tp)->funcs) { \
> > > func = rcu_dereference((tp)->funcs); \
> > > for (; func; func++) { \
> > > ((void(*)(proto))(func))(args); \
> > > } \
> > > } \
> > > preempt_enable(); \
> > > } while (0)
> > >
> > >
> > > The resulting assembly is a bit more dense than my previous
> > > implementation, which is good :
> >
> > My version also has that if ((tp)->funcs), but it's hidden in the
> > for (; func; func++) loop. The only thing your version does is an extra
> > test of tp->funcs but without the read-depends barrier - not sure if that is
> > ok.
> >
>
> Hrm, you are right, the implementation I just proposed is bogus. (but so
> was yours) ;)
>
> func is an iterator on the funcs array. My typing of func is thus wrong,
> it should be void **. Otherwise I'm just incrementing the function
> address which is plain wrong.
>
> The read barrier is included in rcu_dereference() now. But given that we
> have to take a pointer to the array as an iterator, we would have to
> rcu_dereference() our iterator multiple times and then have many read
> barrier depends, which we don't need. This is why I would go back to a
> smp_read_barrier_depends().
>
> Also, I use a NULL entry at the end of the funcs array as an end of
> array identifier. However, I cannot use this in the for loop both as a
> check for NULL array and check for NULL array element. This is why a if
> () test is needed in addition to the for loop test. (this is actually
> what is wrong in the implementation you proposed : you treat func both
> as a pointer to the function pointer array and as a function pointer)

Ah, D'0h! Indeed.

> Something like this seems better :
>
> #define __DO_TRACE(tp, proto, args) \
> do { \
> void **it_func; \
> \
> preempt_disable(); \
> it_func = (tp)->funcs; \
> if (it_func) { \
> smp_read_barrier_depends(); \
> for (; *it_func; it_func++) \
> ((void(*)(proto))(*it_func))(args); \
> } \
> preempt_enable(); \
> } while (0)
>
> What do you think ?

I'm confused by the barrier games here.

Why not:

void **it_func;

preempt_disable();
it_func = rcu_dereference((tp)->funcs);
if (it_func) {
for (; *it_func; it_func++)
((void(*)(proto))(*it_func))(args);
}
preempt_enable();

That is, why can we skip the barrier when !it_func? is that because at
that time we don't actually dereference it_func and therefore cannot
observe stale data?

If so, does this really matter since we're already in an unlikely
section? Again, if so, this deserves a comment ;-)

[ still think those preempt_* calls should be called
rcu_read_sched_lock() or such. ]

Anyway, does this still generate better code?

2008-07-15 14:46:42

by Mathieu Desnoyers

[permalink] [raw]
Subject: Re: [patch 01/15] Kernel Tracepoints

* Peter Zijlstra ([email protected]) wrote:
> On Tue, 2008-07-15 at 09:25 -0400, Mathieu Desnoyers wrote:
> > * Peter Zijlstra ([email protected]) wrote:
> > > On Wed, 2008-07-09 at 10:59 -0400, Mathieu Desnoyers wrote:
>
> > > > +#define __DO_TRACE(tp, proto, args) \
> > > > + do { \
> > > > + int i; \
> > > > + void **funcs; \
> > > > + preempt_disable(); \
> > > > + funcs = (tp)->funcs; \
> > > > + smp_read_barrier_depends(); \
> > > > + if (funcs) { \
> > > > + for (i = 0; funcs[i]; i++) { \
> > >
> > > Also, why is the preempt_disable needed?
> > >
> >
> > Addition and removal of tracepoints is synchronized by RCU using the
> > scheduler (and preempt_disable) as guarantees to find a quiescent state
> > (this is really RCU "classic"). The update side uses rcu_barrier_sched()
> > with call_rcu_sched() and the read/execute side uses
> > "preempt_disable()/preempt_enable()".
>
> > > > +static void tracepoint_entry_free_old(struct tracepoint_entry *entry, void *old)
> > > > +{
> > > > + if (!old)
> > > > + return;
> > > > + entry->oldptr = old;
> > > > + entry->rcu_pending = 1;
> > > > + /* write rcu_pending before calling the RCU callback */
> > > > + smp_wmb();
> > > > +#ifdef CONFIG_PREEMPT_RCU
> > > > + synchronize_sched(); /* Until we have the call_rcu_sched() */
> > > > +#endif
> > >
> > > Does this have something to do with the preempt_disable above?
> > >
> >
> > Yes, it does. We make sure the previous array containing probes, which
> > has been scheduled for deletion by the rcu callback, is indeed freed
> > before we proceed to the next update. It therefore limits the rate of
> > modification of a single tracepoint to one update per RCU period. The
> > objective here is to permit fast batch add/removal of probes on
> > _different_ tracepoints.
> >
> > This use of "synchronize_sched()" can be changed for call_rcu_sched() in
> > linux-next, I'll fix this.
>
> Right, I thought as much, its just that the raw preempt_disable()
> without comments leaves one wondering if there is anything else going
> on.
>
> Would it make sense to add:
>
> rcu_read_sched_lock()
> rcu_read_sched_unlock()
>
> to match:
>
> call_rcu_sched()
> rcu_barrier_sched()
> synchronize_sched()
>
> ?
>

Yes, I would add them to include/linux/rcupdate.h. I'll include it with
my next release.
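
The wrappers themselves would presumably be trivial (a sketch; these
names are only proposed above and are not an existing API at this
point):

static inline void rcu_read_sched_lock(void)
{
	preempt_disable();	/* marks an RCU-sched read-side section */
}

static inline void rcu_read_sched_unlock(void)
{
	preempt_enable();
}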

Talking about headers, I have noticed that placing headers with the code
may not be as clean as I would hope. For instance, the kernel/irq-trace.h
header, when included from kernel/irq/handle.c, has to be included with:

#include "../irq-trace.h"

Which is not _that_ bad, but when we want to instrument the irq handler
found in arch/x86/kernel/cpu/mcheck/mce_intel_64.c, including
#include "../../../../../kernel/irq-trace.h" makes me go "yeeeek!"

How about creating include/trace/irq.h and friends ?
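
Something like this hypothetical include/trace/irq.h, reusing the
DEFINE_TRACE convention from this patch (the event name and prototype
here are illustrative only):

#ifndef _TRACE_IRQ_H
#define _TRACE_IRQ_H

#include <linux/tracepoint.h>

struct pt_regs;

DEFINE_TRACE(irq_entry,
	TPPROTO(unsigned int id, struct pt_regs *regs),
	TPARGS(id, regs));

#endif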

Mathieu

--
Mathieu Desnoyers
OpenPGP key fingerprint: 8CD5 52C3 8E3C 4140 715F BA06 3F25 A8FE 3BAE 9A68

2008-07-15 15:14:00

by Peter Zijlstra

[permalink] [raw]
Subject: Re: [patch 01/15] Kernel Tracepoints

On Tue, 2008-07-15 at 10:46 -0400, Mathieu Desnoyers wrote:

> Talking about headers, I have noticed that placing headers with the code
> may not be as clean as I would hope. For instance, the kernel/irq-trace.h
> header, when included from kernel/irq/handle.c, has to be included with:
>
> #include "../irq-trace.h"
>
> Which is not _that_ bad, but when we want to instrument the irq handler
> found in arch/x86/kernel/cpu/mcheck/mce_intel_64.c, including
> #include "../../../../../kernel/irq-trace.h" makes me go "yeeeek!"
>
> How about creating include/trace/irq.h and friends ?

Might as well.. anybody else got opinions?

2008-07-15 15:22:38

by Mathieu Desnoyers

[permalink] [raw]
Subject: Re: [patch 01/15] Kernel Tracepoints

* Peter Zijlstra ([email protected]) wrote:
> On Tue, 2008-07-15 at 10:27 -0400, Mathieu Desnoyers wrote:
> > * Peter Zijlstra ([email protected]) wrote:
> > > On Tue, 2008-07-15 at 09:25 -0400, Mathieu Desnoyers wrote:
> > > > * Peter Zijlstra ([email protected]) wrote:
> > > > > On Wed, 2008-07-09 at 10:59 -0400, Mathieu Desnoyers wrote:
> > >
> > > > > > +#define __DO_TRACE(tp, proto, args) \
> > > > > > + do { \
> > > > > > + int i; \
> > > > > > + void **funcs; \
> > > > > > + preempt_disable(); \
> > > > > > + funcs = (tp)->funcs; \
> > > > > > + smp_read_barrier_depends(); \
> > > > > > + if (funcs) { \
> > > > > > + for (i = 0; funcs[i]; i++) { \
> > > > >
> > > > > can't you get rid of 'i' and write:
> > > > >
> > > > > void **func;
> > > > >
> > > > > preempt_disable();
> > > > > func = (tp)->funcs;
> > > > > smp_read_barrier_depends();
> > > > > for (; func; func++)
> > > > > ((void (*)(proto))func)(args);
> > > > > preempt_enable();
> > > > >
> > > >
> > > > Yes, I thought there would be an optimization to do here; I'll use your
> > > > proposal. This code snippet is especially important since it will
> > > > generate instructions near every tracepoint site. Saving a few bytes
> > > > becomes important.
> > > >
> > > > Given that (tp)->funcs references an array of function pointers and that
> > > > it can be NULL, the if (funcs) test must still be there and we must use
> > > >
> > > > #define __DO_TRACE(tp, proto, args) \
> > > > do { \
> > > > void *func; \
> > > > \
> > > > preempt_disable(); \
> > > > if ((tp)->funcs) { \
> > > > func = rcu_dereference((tp)->funcs); \
> > > > for (; func; func++) { \
> > > > ((void(*)(proto))(func))(args); \
> > > > } \
> > > > } \
> > > > preempt_enable(); \
> > > > } while (0)
> > > >
> > > >
> > > > The resulting assembly is a bit more dense than my previous
> > > > implementation, which is good :
> > >
> > > My version also has that if ((tp)->funcs), but it's hidden in the
> > > for (; func; func++) loop. The only thing your version does is an extra
> > > test of tp->funcs but without the read-depends barrier - not sure if that is
> > > ok.
> > >
> >
> > Hrm, you are right, the implementation I just proposed is bogus. (but so
> > was yours) ;)
> >
> > func is an iterator on the funcs array. My typing of func is thus wrong,
> > it should be void **. Otherwise I'm just incrementing the function
> > address which is plain wrong.
> >
> > The read barrier is included in rcu_dereference() now. But given that we
> > have to take a pointer to the array as an iterator, we would have to
> > rcu_dereference() our iterator multiple times and then have many read
> > barrier depends, which we don't need. This is why I would go back to a
> > smp_read_barrier_depends().
> >
> > Also, I use a NULL entry at the end of the funcs array as an end of
> > array identifier. However, I cannot use this in the for loop both as a
> > check for NULL array and check for NULL array element. This is why a if
> > () test is needed in addition to the for loop test. (this is actually
> > what is wrong in the implementation you proposed : you treat func both
> > as a pointer to the function pointer array and as a function pointer)
>
> Ah, D'0h! Indeed.
>
> > Something like this seems better :
> >
> > #define __DO_TRACE(tp, proto, args) \
> > do { \
> > void **it_func; \
> > \
> > preempt_disable(); \
> > it_func = (tp)->funcs; \
> > if (it_func) { \
> > smp_read_barrier_depends(); \
> > for (; *it_func; it_func++) \
> > ((void(*)(proto))(*it_func))(args); \
> > } \
> > preempt_enable(); \
> > } while (0)
> >
> > What do you think ?
>
> I'm confused by the barrier games here.
>
> Why not:
>
> void **it_func;
>
> preempt_disable();
> it_func = rcu_dereference((tp)->funcs);
> if (it_func) {
> for (; *it_func; it_func++)
> ((void(*)(proto))(*it_func))(args);
> }
> preempt_enable();
>
> That is, why can we skip the barrier when !it_func? is that because at
> that time we don't actually dereference it_func and therefore cannot
> observe stale data?
>

Exactly. I used the implementation of rcu_assign_pointer as a hint that
we did not need barriers when setting the pointer to NULL, and thus we
should not need the read barrier when reading the NULL pointer, because
it references no data.

#define rcu_assign_pointer(p, v) \
({ \
if (!__builtin_constant_p(v) || \
((v) != NULL)) \
smp_wmb(); \
(p) = (v); \
})

#define rcu_dereference(p) ({ \
typeof(p) _________p1 = ACCESS_ONCE(p); \
smp_read_barrier_depends(); \
(_________p1); \
})

But I think you are right, since we are already in unlikely code, using
rcu_dereference as you do is better than my use of read barrier depends.
It should not change anything in the assembly result except on alpha,
where the read_barrier_depends() is not a nop.

I wonder if there would be a way to add this kind of NULL pointer case
check without overhead in rcu_dereference() on alpha. I guess not, since
the pointer is almost never known at compile-time. And I guess Paul must
already have thought about it. The only case where we could add this
test is when we know that we have a if (ptr != NULL) test following the
rcu_dereference(); we could then assume the compiler will merge the two
branches since they depend on the same condition.
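
If such a special case were wanted, a hypothetical variant (sketch only,
not an existing kernel API) could look like:

#define rcu_dereference_maybe_null(p) ({ \
	typeof(p) _________p1 = ACCESS_ONCE(p); \
	if (_________p1 != NULL) \
		smp_read_barrier_depends(); \
	(_________p1); \
})

leaning on the caller's own NULL test so the compiler can merge the two
branches, as described above.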

> If so, does this really matter since we're already in an unlikely
> section? Again, if so, this deserves a comment ;-)
>
> [ still think those preempt_* calls should be called
> rcu_read_sched_lock() or such. ]
>
> Anyway, does this still generate better code?
>

On x86_64 :

820: bf 01 00 00 00 mov $0x1,%edi
825: e8 00 00 00 00 callq 82a <thread_return+0x136>
82a: 48 8b 1d 00 00 00 00 mov 0x0(%rip),%rbx # 831 <thread_return+0x13d>
831: 48 85 db test %rbx,%rbx
834: 75 21 jne 857 <thread_return+0x163>
836: eb 27 jmp 85f <thread_return+0x16b>
838: 0f 1f 84 00 00 00 00 nopl 0x0(%rax,%rax,1)
83f: 00
840: 48 8b 95 68 ff ff ff mov -0x98(%rbp),%rdx
847: 48 8b b5 60 ff ff ff mov -0xa0(%rbp),%rsi
84e: 4c 89 e7 mov %r12,%rdi
851: 48 83 c3 08 add $0x8,%rbx
855: ff d0 callq *%rax
857: 48 8b 03 mov (%rbx),%rax
85a: 48 85 c0 test %rax,%rax
85d: 75 e1 jne 840 <thread_return+0x14c>
85f: bf 01 00 00 00 mov $0x1,%edi
864:

for 68 bytes.

My original implementation was 77 bytes, so yes, we have a win.

Mathieu


--
Mathieu Desnoyers
OpenPGP key fingerprint: 8CD5 52C3 8E3C 4140 715F BA06 3F25 A8FE 3BAE 9A68

2008-07-15 15:32:08

by Peter Zijlstra

[permalink] [raw]
Subject: Re: [patch 01/15] Kernel Tracepoints

On Tue, 2008-07-15 at 11:22 -0400, Mathieu Desnoyers wrote:
> * Peter Zijlstra ([email protected]) wrote:
> >
> > I'm confused by the barrier games here.
> >
> > Why not:
> >
> > void **it_func;
> >
> > preempt_disable();
> > it_func = rcu_dereference((tp)->funcs);
> > if (it_func) {
> > for (; *it_func; it_func++)
> > ((void(*)(proto))(*it_func))(args);
> > }
> > preempt_enable();
> >
> > That is, why can we skip the barrier when !it_func? is that because at
> > that time we don't actually dereference it_func and therefore cannot
> > observe stale data?
> >
>
> Exactly. I used the implementation of rcu_assign_pointer as a hint that
> we did not need barriers when setting the pointer to NULL, and thus we
> should not need the read barrier when reading the NULL pointer, because
> it references no data.
>
> #define rcu_assign_pointer(p, v) \
> ({ \
> if (!__builtin_constant_p(v) || \
> ((v) != NULL)) \
> smp_wmb(); \
> (p) = (v); \
> })

Yeah, I saw that,.. made me wonder. It basically assumes that when we
write:

rcu_assign_pointer(foo, NULL);

foo will not be used as an index or offset.

I guess Paul has thought it through and verified all in-kernel use
cases, but it still makes me feel uncomfortable.

> #define rcu_dereference(p) ({ \
> typeof(p) _________p1 = ACCESS_ONCE(p); \
> smp_read_barrier_depends(); \
> (_________p1); \
> })
>
> But I think you are right, since we are already in unlikely code, using
> rcu_dereference as you do is better than my use of read barrier depends.
> It should not change anything in the assembly result except on alpha,
> where the read_barrier_depends() is not a nop.
>
> I wonder if there would be a way to add this kind of NULL pointer case
> check without overhead in rcu_dereference() on alpha. I guess not, since
> the pointer is almost never known at compile-time. And I guess Paul must
> already have thought about it. The only case where we could add this
> test is when we know that we have a if (ptr != NULL) test following the
> rcu_dereference(); we could then assume the compiler will merge the two
> branches since they depend on the same condition.

I remember seeing a thread about all this special casing NULL, but have
never been able to find it again - my google skillz always fail me.

Basically it doesn't work if you use the variable as an index/offset,
because in that case 0 is a valid offset and you still generate a data
dependency.

IIRC the conclusion was that the gains were too small to spend more time
on it, although I would like to hear about the special case in
rcu_assign_pointer.

/me goes use git blame....

> > If so, does this really matter since we're already in an unlikely
> > section? Again, if so, this deserves a comment ;-)
> >
> > [ still think those preempt_* calls should be called
> > rcu_read_sched_lock() or such. ]
> >
> > Anyway, does this still generate better code?
> >
>
> On x86_64 :
>
> 820: bf 01 00 00 00 mov $0x1,%edi
> 825: e8 00 00 00 00 callq 82a <thread_return+0x136>
> 82a: 48 8b 1d 00 00 00 00 mov 0x0(%rip),%rbx # 831 <thread_return+0x13d>
> 831: 48 85 db test %rbx,%rbx
> 834: 75 21 jne 857 <thread_return+0x163>
> 836: eb 27 jmp 85f <thread_return+0x16b>
> 838: 0f 1f 84 00 00 00 00 nopl 0x0(%rax,%rax,1)
> 83f: 00
> 840: 48 8b 95 68 ff ff ff mov -0x98(%rbp),%rdx
> 847: 48 8b b5 60 ff ff ff mov -0xa0(%rbp),%rsi
> 84e: 4c 89 e7 mov %r12,%rdi
> 851: 48 83 c3 08 add $0x8,%rbx
> 855: ff d0 callq *%rax
> 857: 48 8b 03 mov (%rbx),%rax
> 85a: 48 85 c0 test %rax,%rax
> 85d: 75 e1 jne 840 <thread_return+0x14c>
> 85f: bf 01 00 00 00 mov $0x1,%edi
> 864:
>
> for 68 bytes.
>
> My original implementation was 77 bytes, so yes, we have a win.

Ah, good good ! :-)

2008-07-15 15:50:30

by Mathieu Desnoyers

[permalink] [raw]
Subject: Re: [patch 01/15] Kernel Tracepoints

* Peter Zijlstra ([email protected]) wrote:
> On Tue, 2008-07-15 at 11:22 -0400, Mathieu Desnoyers wrote:
> > * Peter Zijlstra ([email protected]) wrote:
> > >
> > > I'm confused by the barrier games here.
> > >
> > > Why not:
> > >
> > > void **it_func;
> > >
> > > preempt_disable();
> > > it_func = rcu_dereference((tp)->funcs);
> > > if (it_func) {
> > > for (; *it_func; it_func++)
> > > ((void(*)(proto))(*it_func))(args);
> > > }
> > > preempt_enable();
> > >
> > > That is, why can we skip the barrier when !it_func? is that because at
> > > that time we don't actually dereference it_func and therefore cannot
> > > observe stale data?
> > >
> >
> > Exactly. I used the implementation of rcu_assign_pointer as a hint that
> > we did not need barriers when setting the pointer to NULL, and thus we
> > should not need the read barrier when reading the NULL pointer, because
> > it references no data.
> >
> > #define rcu_assign_pointer(p, v) \
> > ({ \
> > if (!__builtin_constant_p(v) || \
> > ((v) != NULL)) \
> > smp_wmb(); \
> > (p) = (v); \
> > })
>
> Yeah, I saw that,.. made me wonder. It basically assumes that when we
> write:
>
> rcu_assign_pointer(foo, NULL);
>
> foo will not be used as an index or offset.
>
> I guess Paul has thought it through and verified all in-kernel use
> cases, but it still makes me feel uncomfortable.
>
> > #define rcu_dereference(p) ({ \
> > typeof(p) _________p1 = ACCESS_ONCE(p); \
> > smp_read_barrier_depends(); \
> > (_________p1); \
> > })
> >
> > But I think you are right, since we are already in unlikely code, using
> > rcu_dereference as you do is better than my use of read barrier depends.
> > It should not change anything in the assembly result except on alpha,
> > where the read_barrier_depends() is not a nop.
> >
> > I wonder if there would be a way to add this kind of NULL pointer case
> > check without overhead in rcu_dereference() on alpha. I guess not, since
> > the pointer is almost never known at compile-time. And I guess Paul must
> > already have thought about it. The only case where we could add this
> > test is when we know that we have a if (ptr != NULL) test following the
> > rcu_dereference(); we could then assume the compiler will merge the two
> > branches since they depend on the same condition.
>
> I remember seeing a thread about all this special casing NULL, but have
> never been able to find it again - my google skillz always fail me.
>
> Basically it doesn't work if you use the variable as an index/offset,
> because in that case 0 is a valid offset and you still generate a data
> dependency.
>
> IIRC the conclusion was that the gains were too small to spend more time
> on it, although I would like to hear about the special case in
> rcu_assign_pointer.
>
> /me goes use git blame....
>

Seems to come from :

commit d99c4f6b13b3149bc83703ab1493beaeaaaf8a2d

which refers to this discussion :

http://www.mail-archive.com/[email protected]/msg54852.html

Mathieu


> > > If so, does this really matter since we're already in an unlikely
> > > section? Again, if so, this deserves a comment ;-)
> > >
> > > [ still think those preempt_* calls should be called
> > > rcu_read_sched_lock() or such. ]
> > >
> > > Anyway, does this still generate better code?
> > >
> >
> > On x86_64 :
> >
> > 820: bf 01 00 00 00 mov $0x1,%edi
> > 825: e8 00 00 00 00 callq 82a <thread_return+0x136>
> > 82a: 48 8b 1d 00 00 00 00 mov 0x0(%rip),%rbx # 831 <thread_return+0x13d>
> > 831: 48 85 db test %rbx,%rbx
> > 834: 75 21 jne 857 <thread_return+0x163>
> > 836: eb 27 jmp 85f <thread_return+0x16b>
> > 838: 0f 1f 84 00 00 00 00 nopl 0x0(%rax,%rax,1)
> > 83f: 00
> > 840: 48 8b 95 68 ff ff ff mov -0x98(%rbp),%rdx
> > 847: 48 8b b5 60 ff ff ff mov -0xa0(%rbp),%rsi
> > 84e: 4c 89 e7 mov %r12,%rdi
> > 851: 48 83 c3 08 add $0x8,%rbx
> > 855: ff d0 callq *%rax
> > 857: 48 8b 03 mov (%rbx),%rax
> > 85a: 48 85 c0 test %rax,%rax
> > 85d: 75 e1 jne 840 <thread_return+0x14c>
> > 85f: bf 01 00 00 00 mov $0x1,%edi
> > 864:
> >
> > for 68 bytes.
> >
> > My original implementation was 77 bytes, so yes, we have a win.
>
> Ah, good good ! :-)
>

--
Mathieu Desnoyers
OpenPGP key fingerprint: 8CD5 52C3 8E3C 4140 715F BA06 3F25 A8FE 3BAE 9A68

2008-07-15 16:08:43

by Mathieu Desnoyers

[permalink] [raw]
Subject: Re: [patch 01/15] Kernel Tracepoints

* Peter Zijlstra ([email protected]) wrote:
> On Tue, 2008-07-15 at 11:22 -0400, Mathieu Desnoyers wrote:
> > * Peter Zijlstra ([email protected]) wrote:
> > >
> > > I'm confused by the barrier games here.
> > >
> > > Why not:
> > >
> > > void **it_func;
> > >
> > > preempt_disable();
> > > it_func = rcu_dereference((tp)->funcs);
> > > if (it_func) {
> > > for (; *it_func; it_func++)
> > > ((void(*)(proto))(*it_func))(args);
> > > }
> > > preempt_enable();
> > >
> > > That is, why can we skip the barrier when !it_func? is that because at
> > > that time we don't actually dereference it_func and therefore cannot
> > > observe stale data?
> > >
> >
> > Exactly. I used the implementation of rcu_assign_pointer as a hint that
> > we did not need barriers when setting the pointer to NULL, and thus we
> > should not need the read barrier when reading the NULL pointer, because
> > it references no data.
> >
> > #define rcu_assign_pointer(p, v) \
> > ({ \
> > if (!__builtin_constant_p(v) || \
> > ((v) != NULL)) \
> > smp_wmb(); \
> > (p) = (v); \
> > })
>
> Yeah, I saw that,.. made me wonder. It basically assumes that when we
> write:
>
> rcu_assign_pointer(foo, NULL);
>
> foo will not be used as an index or offset.
>
> I guess Paul has thought it through and verified all in-kernel use
> cases, but it still makes me feel uncomfortable.
>
> > #define rcu_dereference(p) ({ \
> > typeof(p) _________p1 = ACCESS_ONCE(p); \
> > smp_read_barrier_depends(); \
> > (_________p1); \
> > })
> >
> > But I think you are right, since we are already in unlikely code, using
> > rcu_dereference as you do is better than my use of read barrier depends.
> > It should not change anything in the assembly result except on alpha,
> > where the read_barrier_depends() is not a nop.
> >
> > I wonder if there would be a way to add this kind of NULL pointer case
> > check without overhead in rcu_dereference() on alpha. I guess not, since
> > the pointer is almost never known at compile-time. And I guess Paul must
> > already have thought about it. The only case where we could add this
> > test is when we know that we have a if (ptr != NULL) test following the
> > rcu_dereference(); we could then assume the compiler will merge the two
> > branches since they depend on the same condition.
>
> I remember seeing a thread about all this special casing NULL, but have
> never been able to find it again - my google skillz always fail me.
>
> Basically it doesn't work if you use the variable as an index/offset,
> because in that case 0 is a valid offset and you still generate a data
> dependency.
>
> IIRC the conclusion was that the gains were too small to spend more time
> on it, although I would like to hear about the special case in
> rcu_assign_pointer.
>
> /me goes use git blame....
>

Actually, we could probably do the following, which also adds an extra
coherency check about non-NULL pointer assumptions :

#ifdef CONFIG_RCU_DEBUG /* this would be new */
#define DEBUG_RCU_BUG_ON(x) BUG_ON(x)
#else
#define DEBUG_RCU_BUG_ON(x)
#endif

#define rcu_dereference(p) ({ \
typeof(p) _________p1 = ACCESS_ONCE(p); \
if (p != NULL) \
smp_read_barrier_depends(); \
(_________p1); \
})

#define rcu_dereference_non_null(p) ({ \
typeof(p) _________p1 = ACCESS_ONCE(p); \
DEBUG_RCU_BUG_ON(p == NULL); \
smp_read_barrier_depends(); \
(_________p1); \
})

The use-case where rcu_dereference() would be used is when it is
followed by a null pointer check (grepping through the sources shows me
this is a very, very common case). In rare cases, it is assumed that the
pointer is never NULL and it is used just after the rcu_dereference. In
those cases, the extra test could be saved on alpha by using
rcu_dereference_non_null(p), which would check that the pointer is indeed
never NULL under some debug kernel configuration.
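
As a usage sketch of the two proposed flavours (struct conf, gp, boot_conf
and use() are hypothetical names):

        struct conf { int val; };
        struct conf *gp, *boot_conf;    /* RCU-protected pointers */
        struct conf *c;

        /* Common case: the dereference is followed by a NULL check anyway. */
        c = rcu_dereference(gp);
        if (c != NULL)
                use(c->val);

        /* Rare case: the pointer is assumed never NULL; the debug check in
         * rcu_dereference_non_null() would catch violations. */
        c = rcu_dereference_non_null(boot_conf);
        use(c->val);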

Does it make sense ?

Mathieu

> > > If so, does this really matter since we're already in an unlikely
> > > section? Again, if so, this deserves a comment ;-)
> > >
> > > [ still think those preempt_* calls should be called
> > > rcu_read_sched_lock() or such. ]
> > >
> > > Anyway, does this still generate better code?
> > >
> >
> > On x86_64 :
> >
> > 820: bf 01 00 00 00 mov $0x1,%edi
> > 825: e8 00 00 00 00 callq 82a <thread_return+0x136>
> > 82a: 48 8b 1d 00 00 00 00 mov 0x0(%rip),%rbx # 831 <thread_return+0x13d>
> > 831: 48 85 db test %rbx,%rbx
> > 834: 75 21 jne 857 <thread_return+0x163>
> > 836: eb 27 jmp 85f <thread_return+0x16b>
> > 838: 0f 1f 84 00 00 00 00 nopl 0x0(%rax,%rax,1)
> > 83f: 00
> > 840: 48 8b 95 68 ff ff ff mov -0x98(%rbp),%rdx
> > 847: 48 8b b5 60 ff ff ff mov -0xa0(%rbp),%rsi
> > 84e: 4c 89 e7 mov %r12,%rdi
> > 851: 48 83 c3 08 add $0x8,%rbx
> > 855: ff d0 callq *%rax
> > 857: 48 8b 03 mov (%rbx),%rax
> > 85a: 48 85 c0 test %rax,%rax
> > 85d: 75 e1 jne 840 <thread_return+0x14c>
> > 85f: bf 01 00 00 00 mov $0x1,%edi
> > 864:
> >
> > for 68 bytes.
> >
> > My original implementation was 77 bytes, so yes, we have a win.
>
> Ah, good good ! :-)
>

--
Mathieu Desnoyers
OpenPGP key fingerprint: 8CD5 52C3 8E3C 4140 715F BA06 3F25 A8FE 3BAE 9A68

2008-07-15 16:25:51

by Peter Zijlstra

[permalink] [raw]
Subject: Re: [patch 01/15] Kernel Tracepoints

On Tue, 2008-07-15 at 12:08 -0400, Mathieu Desnoyers wrote:
> * Peter Zijlstra ([email protected]) wrote:
> > On Tue, 2008-07-15 at 11:22 -0400, Mathieu Desnoyers wrote:
> > > * Peter Zijlstra ([email protected]) wrote:
> > > >
> > > > I'm confused by the barrier games here.
> > > >
> > > > Why not:
> > > >
> > > > void **it_func;
> > > >
> > > > preempt_disable();
> > > > it_func = rcu_dereference((tp)->funcs);
> > > > if (it_func) {
> > > > for (; *it_func; it_func++)
> > > > ((void(*)(proto))(*it_func))(args);
> > > > }
> > > > preempt_enable();
> > > >
> > > > That is, why can we skip the barrier when !it_func? is that because at
> > > > that time we don't actually dereference it_func and therefore cannot
> > > > observe stale data?
> > > >
> > >
> > > Exactly. I used the implementation of rcu_assign_pointer as a hint that
> > > we did not need barriers when setting the pointer to NULL, and thus we
> > > should not need the read barrier when reading the NULL pointer, because
> > > it references no data.
> > >
> > > #define rcu_assign_pointer(p, v) \
> > > ({ \
> > > if (!__builtin_constant_p(v) || \
> > > ((v) != NULL)) \
> > > smp_wmb(); \
> > > (p) = (v); \
> > > })
> >
> > Yeah, I saw that,.. made me wonder. It basically assumes that when we
> > write:
> >
> > rcu_assign_pointer(foo, NULL);
> >
> > foo will not be used as an index or offset.
> >
> > I guess Paul has thought it through and verified all in-kernel use
> > cases, but it still makes me feel uncomfortable.
> >
> > > #define rcu_dereference(p) ({ \
> > > typeof(p) _________p1 = ACCESS_ONCE(p); \
> > > smp_read_barrier_depends(); \
> > > (_________p1); \
> > > })
> > >
> > > But I think you are right, since we are already in unlikely code, using
> > > rcu_dereference as you do is better than my use of read barrier depends.
> > > It should not change anything in the assembly result except on alpha,
> > > where the read_barrier_depends() is not a nop.
> > >
> > > I wonder if there would be a way to add this kind of NULL pointer case
> > > check without overhead in rcu_dereference() on alpha. I guess not, since
> > > the pointer is almost never known at compile-time. And I guess Paul must
> > > already have thought about it. The only case where we could add this
> > > test is when we know that we have a if (ptr != NULL) test following the
> > > rcu_dereference(); we could then assume the compiler will merge the two
> > > branches since they depend on the same condition.
> >
> > I remember seeing a thread about all this special casing NULL, but have
> > never been able to find it again - my google skillz always fail me.
> >
> > Basically it doesn't work if you use the variable as an index/offset,
> > because in that case 0 is a valid offset and you still generate a data
> > dependency.
> >
> > IIRC the conclusion was that the gains were too small to spend more time
> > on it, although I would like to hear about the special case in
> > rcu_assign_pointer.
> >
> > /me goes use git blame....
> >
>
> Actually, we could probably do the following, which also adds an extra
> coherency check about non-NULL pointer assumptions :
>
> #ifdef CONFIG_RCU_DEBUG /* this would be new */
> #define DEBUG_RCU_BUG_ON(x) BUG_ON(x)
> #else
> #define DEBUG_RCU_BUG_ON(x)
> #endif
>
> #define rcu_dereference(p) ({ \
> typeof(p) _________p1 = ACCESS_ONCE(p); \
> if (p != NULL) \
> smp_read_barrier_depends(); \
> (_________p1); \
> })
>
> #define rcu_dereference_non_null(p) ({ \
> typeof(p) _________p1 = ACCESS_ONCE(p); \
> DEBUG_RCU_BUG_ON(p == NULL); \
> smp_read_barrier_depends(); \
> (_________p1); \
> })
>
> The use-case where rcu_dereference() would be used is when it is
> followed by a null pointer check (grepping through the sources shows me
> this is a very, very common case). In rare cases, it is assumed that the
> pointer is never NULL and it is used just after the rcu_dereference. In
> those cases, the extra test could be saved on alpha by using
> rcu_dereference_non_null(p), which would check that the pointer is indeed
> never NULL under some debug kernel configuration.
>
> Does it make sense ?

This would break the case where the dereferenced variable is used as an
index/offset where 0 is a valid value and still generates data
dependencies.

So if with your new version we do:

i = rcu_dereference(foo);
j = table[i];

which translates into:

i = ACCESS_ONCE(foo);
if (i)
smp_read_barrier_depends();
j = table[i];

which when i == 0, would fail to do the barrier and can thus cause j to
be a wrong value.

Sadly I'll have to defer to Paul to explain exactly how that can happen
- I always get my head in a horrible twist with this case.


2008-07-15 16:26:57

by Mathieu Desnoyers

[permalink] [raw]
Subject: Re: [patch 01/15] Kernel Tracepoints

* Mathieu Desnoyers ([email protected]) wrote:
> * Peter Zijlstra ([email protected]) wrote:
> > On Tue, 2008-07-15 at 11:22 -0400, Mathieu Desnoyers wrote:
> > > * Peter Zijlstra ([email protected]) wrote:
> > > >
> > > > I'm confused by the barrier games here.
> > > >
> > > > Why not:
> > > >
> > > > void **it_func;
> > > >
> > > > preempt_disable();
> > > > it_func = rcu_dereference((tp)->funcs);
> > > > if (it_func) {
> > > > for (; *it_func; it_func++)
> > > > ((void(*)(proto))(*it_func))(args);
> > > > }
> > > > preempt_enable();
> > > >
> > > > That is, why can we skip the barrier when !it_func? is that because at
> > > > that time we don't actually dereference it_func and therefore cannot
> > > > observe stale data?
> > > >
> > >
> > > Exactly. I used the implementation of rcu_assign_pointer as a hint that
> > > we did not need barriers when setting the pointer to NULL, and thus we
> > > should not need the read barrier when reading the NULL pointer, because
> > > it references no data.
> > >
> > > #define rcu_assign_pointer(p, v) \
> > > ({ \
> > > if (!__builtin_constant_p(v) || \
> > > ((v) != NULL)) \
> > > smp_wmb(); \
> > > (p) = (v); \
> > > })
> >
> > Yeah, I saw that,.. made me wonder. It basically assumes that when we
> > write:
> >
> > rcu_assign_pointer(foo, NULL);
> >
> > foo will not be used as an index or offset.
> >
> > I guess Paul has thought it through and verified all in-kernel use
> > cases, but it still makes me feel uncomfortable.
> >
> > > #define rcu_dereference(p) ({ \
> > > typeof(p) _________p1 = ACCESS_ONCE(p); \
> > > smp_read_barrier_depends(); \
> > > (_________p1); \
> > > })
> > >
> > > But I think you are right, since we are already in unlikely code, using
> > > rcu_dereference as you do is better than my use of read barrier depends.
> > > It should not change anything in the assembly result except on alpha,
> > > where the read_barrier_depends() is not a nop.
> > >
> > > I wonder if there would be a way to add this kind of NULL pointer case
> > > check without overhead in rcu_dereference() on alpha. I guess not, since
> > > the pointer is almost never known at compile-time. And I guess Paul must
> > > already have thought about it. The only case where we could add this
> > > test is when we know that we have a if (ptr != NULL) test following the
> > > rcu_dereference(); we could then assume the compiler will merge the two
> > > branches since they depend on the same condition.
> >
> > I remember seeing a thread about all this special casing NULL, but have
> > never been able to find it again - my google skillz always fail me.
> >
> > Basically it doesn't work if you use the variable as an index/offset,
> > because in that case 0 is a valid offset and you still generate a data
> > dependency.
> >
> > IIRC the conclusion was that the gains were too small to spend more time
> > on it, although I would like to hear about the special case in
> > rcu_assign_pointer.
> >
> > /me goes use git blame....
> >
>
> Actually, we could probably do the following, which also adds an extra
> coherency check about non-NULL pointer assumptions :
>
> #ifdef CONFIG_RCU_DEBUG /* this would be new */
> #define DEBUG_RCU_BUG_ON(x) BUG_ON(x)
> #else
> #define DEBUG_RCU_BUG_ON(x)
> #endif
>
> #define rcu_dereference(p) ({ \
> typeof(p) _________p1 = ACCESS_ONCE(p); \
> if (p != NULL) \

Actually this line should be :
if (_________p1 != NULL) \

> smp_read_barrier_depends(); \
> (_________p1); \
> })
>
> #define rcu_dereference_non_null(p) ({ \
> typeof(p) _________p1 = ACCESS_ONCE(p); \
> DEBUG_RCU_BUG_ON(p == NULL); \

And this one :
DEBUG_RCU_BUG_ON(_________p1 == NULL); \

Mathieu


> smp_read_barrier_depends(); \
> (_________p1); \
> })
>
> The use-case where rcu_dereference() would be used is when it is
> followed by a null pointer check (grepping through the sources shows me
> this is a very, very common case). In rare cases, it is assumed that the
> pointer is never NULL and it is used just after the rcu_dereference. In
> those cases, the extra test could be saved on alpha by using
> rcu_dereference_non_null(p), which would check that the pointer is indeed
> never NULL under some debug kernel configuration.
>
> Does it make sense ?
>
> Mathieu
>
> > > > If so, does this really matter since we're already in an unlikely
> > > > section? Again, if so, this deserves a comment ;-)
> > > >
> > > > [ still think those preempt_* calls should be called
> > > > rcu_read_sched_lock() or such. ]
> > > >
> > > > Anyway, does this still generate better code?
> > > >
> > >
> > > On x86_64 :
> > >
> > > 820: bf 01 00 00 00 mov $0x1,%edi
> > > 825: e8 00 00 00 00 callq 82a <thread_return+0x136>
> > > 82a: 48 8b 1d 00 00 00 00 mov 0x0(%rip),%rbx # 831 <thread_return+0x13d>
> > > 831: 48 85 db test %rbx,%rbx
> > > 834: 75 21 jne 857 <thread_return+0x163>
> > > 836: eb 27 jmp 85f <thread_return+0x16b>
> > > 838: 0f 1f 84 00 00 00 00 nopl 0x0(%rax,%rax,1)
> > > 83f: 00
> > > 840: 48 8b 95 68 ff ff ff mov -0x98(%rbp),%rdx
> > > 847: 48 8b b5 60 ff ff ff mov -0xa0(%rbp),%rsi
> > > 84e: 4c 89 e7 mov %r12,%rdi
> > > 851: 48 83 c3 08 add $0x8,%rbx
> > > 855: ff d0 callq *%rax
> > > 857: 48 8b 03 mov (%rbx),%rax
> > > 85a: 48 85 c0 test %rax,%rax
> > > 85d: 75 e1 jne 840 <thread_return+0x14c>
> > > 85f: bf 01 00 00 00 mov $0x1,%edi
> > > 864:
> > >
> > > for 68 bytes.
> > >
> > > My original implementation was 77 bytes, so yes, we have a win.
> >
> > Ah, good good ! :-)
> >
>
> --
> Mathieu Desnoyers
> OpenPGP key fingerprint: 8CD5 52C3 8E3C 4140 715F BA06 3F25 A8FE 3BAE 9A68

--
Mathieu Desnoyers
OpenPGP key fingerprint: 8CD5 52C3 8E3C 4140 715F BA06 3F25 A8FE 3BAE 9A68

2008-07-15 16:51:34

by Mathieu Desnoyers

[permalink] [raw]
Subject: Re: [patch 01/15] Kernel Tracepoints

* Peter Zijlstra ([email protected]) wrote:
> On Tue, 2008-07-15 at 12:08 -0400, Mathieu Desnoyers wrote:
> > * Peter Zijlstra ([email protected]) wrote:
> > > On Tue, 2008-07-15 at 11:22 -0400, Mathieu Desnoyers wrote:
> > > > * Peter Zijlstra ([email protected]) wrote:
> > > > >
> > > > > I'm confused by the barrier games here.
> > > > >
> > > > > Why not:
> > > > >
> > > > > void **it_func;
> > > > >
> > > > > preempt_disable();
> > > > > it_func = rcu_dereference((tp)->funcs);
> > > > > if (it_func) {
> > > > > for (; *it_func; it_func++)
> > > > > ((void(*)(proto))(*it_func))(args);
> > > > > }
> > > > > preempt_enable();
> > > > >
> > > > > That is, why can we skip the barrier when !it_func? is that because at
> > > > > that time we don't actually dereference it_func and therefore cannot
> > > > > observe stale data?
> > > > >
> > > >
> > > > Exactly. I used the implementation of rcu_assign_pointer as a hint that
> > > > we did not need barriers when setting the pointer to NULL, and thus we
> > > > should not need the read barrier when reading the NULL pointer, because
> > > > it references no data.
> > > >
> > > > #define rcu_assign_pointer(p, v) \
> > > > ({ \
> > > > if (!__builtin_constant_p(v) || \
> > > > ((v) != NULL)) \
> > > > smp_wmb(); \
> > > > (p) = (v); \
> > > > })
> > >
> > > Yeah, I saw that,.. made me wonder. It basically assumes that when we
> > > write:
> > >
> > > rcu_assign_pointer(foo, NULL);
> > >
> > > foo will not be used as an index or offset.
> > >
> > > I guess Paul has thought it through and verified all in-kernel use
> > > cases, but it still makes me feel uncomfortable.
> > >
> > > > #define rcu_dereference(p) ({ \
> > > > typeof(p) _________p1 = ACCESS_ONCE(p); \
> > > > smp_read_barrier_depends(); \
> > > > (_________p1); \
> > > > })
> > > >
> > > > But I think you are right, since we are already in unlikely code, using
> > > > rcu_dereference as you do is better than my use of read barrier depends.
> > > > It should not change anything in the assembly result except on alpha,
> > > > where the read_barrier_depends() is not a nop.
> > > >
> > > > I wonder if there would be a way to add this kind of NULL pointer case
> > > > check without overhead in rcu_dereference() on alpha. I guess not, since
> > > > the pointer is almost never known at compile-time. And I guess Paul must
> > > > already have thought about it. The only case where we could add this
> > > > test is when we know that we have a if (ptr != NULL) test following the
> > > > rcu_dereference(); we could then assume the compiler will merge the two
> > > > branches since they depend on the same condition.
> > >
> > > I remember seeing a thread about all this special casing NULL, but have
> > > never been able to find it again - my google skillz always fail me.
> > >
> > > Basically it doesn't work if you use the variable as an index/offset,
> > > because in that case 0 is a valid offset and you still generate a data
> > > dependency.
> > >
> > > IIRC the conclusion was that the gains were too small to spend more time
> > > on it, although I would like to hear about the special case in
> > > rcu_assign_pointer.
> > >
> > > /me goes use git blame....
> > >
> >
> > Actually, we could probably do the following, which also adds an extra
> > coherency check about non-NULL pointer assumptions :
> >
> > #ifdef CONFIG_RCU_DEBUG /* this would be new */
> > #define DEBUG_RCU_BUG_ON(x) BUG_ON(x)
> > #else
> > #define DEBUG_RCU_BUG_ON(x)
> > #endif
> >
> > #define rcu_dereference(p) ({ \
> > typeof(p) _________p1 = ACCESS_ONCE(p); \
> > if (p != NULL) \
> > smp_read_barrier_depends(); \
> > (_________p1); \
> > })
> >
> > #define rcu_dereference_non_null(p) ({ \
> > typeof(p) _________p1 = ACCESS_ONCE(p); \
> > DEBUG_RCU_BUG_ON(p == NULL); \
> > smp_read_barrier_depends(); \
> > (_________p1); \
> > })
> >
> > The use-case where rcu_dereference() would be used is when it is
> > followed by a null pointer check (grepping through the sources shows me
> > this is a very, very common case). In rare cases, it is assumed that the
> > pointer is never NULL and it is used just after the rcu_dereference. In
> > those cases, the extra test could be saved on alpha by using
> > rcu_dereference_non_null(p), which would check that the pointer is indeed
> > never NULL under some debug kernel configuration.
> >
> > Does it make sense ?
>
> This would break the case where the dereferenced variable is used as an
> index/offset where 0 is a valid value and still generates data
> dependencies.
>
> So if with your new version we do:
>
> i = rcu_dereference(foo);
> j = table[i];
>
> which translates into:
>
> i = ACCESS_ONCE(foo);
> if (i)
> smp_read_barrier_depends();
> j = table[i];
>
> which when i == 0, would fail to do the barrier and can thus cause j to
> be a wrong value.
>
> Sadly I'll have to defer to Paul to explain exactly how that can happen
> - I always get my head in a horrible twist with this case.
>

I completely agree with you. However, given the current
rcu_assign_pointer() implementation, we already have this problem. My
proposal assumes the current rcu_assign_pointer() behavior is correct
and that those are never ever used for index/offsets.

We could enforce this as a compile-time check with something along the
lines of :

#define BUILD_BUG_ON_NOT_OFFSETABLE(x) ((void)sizeof((x)[0]))

And use it both in rcu_assign_pointer() and rcu_dereference(). It would
reject any type passed to rcu_assign_pointer or rcu_dereference
that is neither a pointer nor an array.
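
Wired into the existing macro, this could look like the following sketch
(assuming the check simply goes at the top; note the sizeof keeps it purely
compile-time, so nothing is read through the pointer at run time):

#define rcu_assign_pointer(p, v) \
	({ \
		BUILD_BUG_ON_NOT_OFFSETABLE(p); \
		if (!__builtin_constant_p(v) || \
		    ((v) != NULL)) \
			smp_wmb(); \
		(p) = (v); \
	})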

Then if someone really wants to shoot himself in the foot by casting a
pointer to a long after the rcu_deref, that's his problem.

Hrm, looking at rcu_assign_pointer tells me that the ((v) != NULL) test
should probably already complain if v is not a pointer. So my build test
is probably unneeded.

Mathieu

--
Mathieu Desnoyers
OpenPGP key fingerprint: 8CD5 52C3 8E3C 4140 715F BA06 3F25 A8FE 3BAE 9A68

2008-07-15 17:50:36

by Mathieu Desnoyers

[permalink] [raw]
Subject: Re: [patch 01/15] Kernel Tracepoints

* Peter Zijlstra ([email protected]) wrote:
> > > Anyway, does this still generate better code?
> > >
> >
> > On x86_64 :
> >
> > 820: bf 01 00 00 00 mov $0x1,%edi
> > 825: e8 00 00 00 00 callq 82a <thread_return+0x136>
> > 82a: 48 8b 1d 00 00 00 00 mov 0x0(%rip),%rbx # 831 <thread_return+0x13d>
> > 831: 48 85 db test %rbx,%rbx
> > 834: 75 21 jne 857 <thread_return+0x163>
> > 836: eb 27 jmp 85f <thread_return+0x16b>
> > 838: 0f 1f 84 00 00 00 00 nopl 0x0(%rax,%rax,1)
> > 83f: 00
> > 840: 48 8b 95 68 ff ff ff mov -0x98(%rbp),%rdx
> > 847: 48 8b b5 60 ff ff ff mov -0xa0(%rbp),%rsi
> > 84e: 4c 89 e7 mov %r12,%rdi
> > 851: 48 83 c3 08 add $0x8,%rbx
> > 855: ff d0 callq *%rax
> > 857: 48 8b 03 mov (%rbx),%rax
> > 85a: 48 85 c0 test %rax,%rax
> > 85d: 75 e1 jne 840 <thread_return+0x14c>
> > 85f: bf 01 00 00 00 mov $0x1,%edi
> > 864:
> >
> > for 68 bytes.
> >
> > My original implementation was 77 bytes, so yes, we have a win.
>
> Ah, good good ! :-)
>

For the same number of instruction bytes, here is yet another improvement. I
removed the it_func[0] NULL test, which can never trigger: we never
publish an empty array. When the last probe is removed, the array pointer
is set to NULL instead, and the old array is eventually freed once a
quiescent state is reached.

/*
* it_func[0] is never NULL because there is at least one element in the array
* when the array itself is non NULL.
*/
#define __DO_TRACE(tp, proto, args) \
do { \
void **it_func; \
\
preempt_disable(); \
it_func = rcu_dereference((tp)->funcs); \
if (it_func) { \
do { \
((void(*)(proto))(*it_func))(args); \
} while (*(++it_func)); \
} \
preempt_enable(); \
} while (0)
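
For reference, a sketch of the update-side invariant this relies on (the
variable names are hypothetical, not the actual patch code):

	/*
	 * Never publish an empty array: removing the last probe publishes
	 * NULL instead, and the old array is freed after a grace period.
	 */
	if (nr_probes == 0)
		rcu_assign_pointer(tp->funcs, NULL);
	else
		rcu_assign_pointer(tp->funcs, new_funcs); /* NULL-terminated */
	tracepoint_entry_free_old(entry, old_funcs);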

P.S.: I'll replace the preempt_disable/enable calls with RCU read locks when
I port this patchset to linux-next. I am temporarily keeping the preempt
disable/enable statements.

Mathieu

--
Mathieu Desnoyers
OpenPGP key fingerprint: 8CD5 52C3 8E3C 4140 715F BA06 3F25 A8FE 3BAE 9A68

2008-07-15 18:22:22

by Mathieu Desnoyers

[permalink] [raw]
Subject: Re: [patch 01/15] Kernel Tracepoints

* Peter Zijlstra ([email protected]) wrote:
> On Tue, 2008-07-15 at 10:46 -0400, Mathieu Desnoyers wrote:
>
> > Talking about headers, I have noticed that placing headers with the code
> > may not be as clean as I would hope. For instance, the kernel/irq-trace.h
> > header, when included from kernel/irq/handle.c, has to be included with:
> >
> > #include "../irq-trace.h"
> >
> > Which is not _that_ bad, but when we want to instrument the irq handler
> > found in arch/x86/kernel/cpu/mcheck/mce_intel_64.c, including
> > #include "../../../../../kernel/irq-trace.h" makes me go "yeeeek!"
> >
> > How about creating include/trace/irq.h and friends ?
>
> Might as well.. anybody else got opinions?
>

I'm also wondering if it's better to have :

filemap.h
fs.h
hugetlb.h
ipc.h
ipv4.h
ipv6.h
irq.h
kernel.h
memory.h
net.h
page.h
sched.h
swap.h
timer.h

all in include/trace/ or to create subdirectories first, like :

include/trace/net/
include/trace/mm/
...

or to go the other way around and re-use the existing subdirectories :

include/net/trace/
include/mm/trace/
...

?

Mathieu



--
Mathieu Desnoyers
OpenPGP key fingerprint: 8CD5 52C3 8E3C 4140 715F BA06 3F25 A8FE 3BAE 9A68

2008-07-15 18:33:50

by Steven Rostedt

[permalink] [raw]
Subject: Re: [patch 01/15] Kernel Tracepoints


On Tue, 15 Jul 2008, Mathieu Desnoyers wrote:
>
> I'm also wondering if it's better to have :
>
> filemap.h
> fs.h
> hugetlb.h
> ipc.h
> ipv4.h
> ipv6.h
> irq.h
> kernel.h
> memory.h
> net.h
> page.h
> sched.h
> swap.h
> timer.h

This might be a better idea.

>
> all in include/trace/ or to create subdirectories first, like :
>
> include/trace/net/
> include/trace/mm/

I think that is too much. A single trace directory should be sufficient.

> ....
>
> or to go the other way around and re-use the existing subdirectories :
>
> include/net/trace/
> include/mm/trace/
> ....

I'm definitely against that.

-- Steve

2008-07-15 18:55:47

by Masami Hiramatsu

[permalink] [raw]
Subject: Re: [patch 01/15] Kernel Tracepoints

Hi,

Peter Zijlstra wrote:
> On Tue, 2008-07-15 at 10:46 -0400, Mathieu Desnoyers wrote:
>
>> Talking about headers, I have noticed that placing headers with the code
>> may not be as clean as I would hope. For instance, the kernel/irq-trace.h
>> header, when included from kernel/irq/handle.c, has to be included with:
>>
>> #include "../irq-trace.h"
>>
>> Which is not _that_ bad, but when we want to instrument the irq handler
>> found in arch/x86/kernel/cpu/mcheck/mce_intel_64.c, including
>> #include "../../../../../kernel/irq-trace.h" makes me go "yeeeek!"
>>
>> How about creating include/trace/irq.h and friends ?
>
> Might as well.. anybody else got opinions?

I just wonder why DEFINE_TRACE is used in separate headers
instead of include/linux/irq.h directly.

anyway, #include <trace/XXX.h> looks good to me.

--
Masami Hiramatsu

Software Engineer
Hitachi Computer Products (America) Inc.
Software Solutions Division

e-mail: [email protected]

2008-07-15 19:02:28

by Mathieu Desnoyers

[permalink] [raw]
Subject: Re: [patch 01/15] Kernel Tracepoints

* Peter Zijlstra ([email protected]) wrote:
> On Tue, 2008-07-15 at 09:25 -0400, Mathieu Desnoyers wrote:
> > * Peter Zijlstra ([email protected]) wrote:
> > > On Wed, 2008-07-09 at 10:59 -0400, Mathieu Desnoyers wrote:
>
> > > > +#define __DO_TRACE(tp, proto, args) \
> > > > + do { \
> > > > + int i; \
> > > > + void **funcs; \
> > > > + preempt_disable(); \
> > > > + funcs = (tp)->funcs; \
> > > > + smp_read_barrier_depends(); \
> > > > + if (funcs) { \
> > > > + for (i = 0; funcs[i]; i++) { \
> > >
> > > Also, why is the preempt_disable needed?
> > >
> >
> > Addition and removal of tracepoints is synchronized by RCU using the
> > scheduler (and preempt_disable) as guarantees to find a quiescent state
> > (this is really RCU "classic"). The update side uses rcu_barrier_sched()
> > with call_rcu_sched() and the read/execute side uses
> > "preempt_disable()/preempt_enable()".
>
> > > > +static void tracepoint_entry_free_old(struct tracepoint_entry *entry, void *old)
> > > > +{
> > > > + if (!old)
> > > > + return;
> > > > + entry->oldptr = old;
> > > > + entry->rcu_pending = 1;
> > > > + /* write rcu_pending before calling the RCU callback */
> > > > + smp_wmb();
> > > > +#ifdef CONFIG_PREEMPT_RCU
> > > > + synchronize_sched(); /* Until we have the call_rcu_sched() */
> > > > +#endif
> > >
> > > Does this have something to do with the preempt_disable above?
> > >
> >
> > Yes, it does. We make sure the previous array containing probes, which
> > has been scheduled for deletion by the rcu callback, is indeed freed
> > before we proceed to the next update. It therefore limits the rate of
> > modification of a single tracepoint to one update per RCU period. The
> > objective here is to permit fast batch add/removal of probes on
> > _different_ tracepoints.
> >
> > This use of "synchronize_sched()" can be changed for call_rcu_sched() in
> > linux-next, I'll fix this.
>
> Right, I thought as much, it's just that the raw preempt_disable()
> without comments leaves one wondering if there is anything else going
> on.
>
> Would it make sense to add:
>
> rcu_read_sched_lock()
> rcu_read_sched_unlock()
>
> to match:
>
> call_rcu_sched()
> rcu_barrier_sched()
> synchronize_sched()
>
> ?
>

Actually I think it's better to call them
rcu_read_lock_sched() and rcu_read_unlock_sched() to match the _bh()
equivalents already in rcupdate.h.
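
A minimal sketch of what such wrappers could look like, assuming they
simply wrap the existing preempt calls:

	static inline void rcu_read_lock_sched(void)
	{
		preempt_disable();
	}

	static inline void rcu_read_unlock_sched(void)
	{
		preempt_enable();
	}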

Mathieu


--
Mathieu Desnoyers
OpenPGP key fingerprint: 8CD5 52C3 8E3C 4140 715F BA06 3F25 A8FE 3BAE 9A68

2008-07-15 19:13:34

by Mathieu Desnoyers

[permalink] [raw]
Subject: Re: [patch 01/15] Kernel Tracepoints

* Masami Hiramatsu ([email protected]) wrote:
> Hi,
>
> Peter Zijlstra wrote:
> > On Tue, 2008-07-15 at 10:46 -0400, Mathieu Desnoyers wrote:
> >
> >> Talking about headers, I have noticed that placing headers with the code
> >> may not be as clean as I would hope. For instance, the kernel/irq-trace.h
> >> header, when included from kernel/irq/handle.c, has to be included with:
> >>
> >> #include "../irq-trace.h"
> >>
> >> Which is not _that_ bad, but when we want to instrument the irq handler
> >> found in arch/x86/kernel/cpu/mcheck/mce_intel_64.c, including
> >> #include "../../../../../kernel/irq-trace.h" makes me go "yeeeek!"
> >>
> >> How about creating include/trace/irq.h and friends ?
> >
> > Might as well.. anybody else got opinions?
>
> I just wonder why DEFINE_TRACE is used in separate headers
> instead of include/linux/irq.h directly.
>
> anyway, #include <trace/XXX.h> looks good to me.
>

Having these headers all placed nicely together will make it easier for
people who are looking for existing tracepoints to locate them.

It's also worth noting that I am considering deploying a standard set of
tracepoints for userspace in a relatively short time frame, e.g. the
ability to add tracepoints to pthread mutexes seems like an
interesting thing to have. And that will definitely require those
headers to sit somewhere around /usr/include/trace/ or something
similar, otherwise trying to locate those tracepoints will be hellish.

Mathieu

--
Mathieu Desnoyers
OpenPGP key fingerprint: 8CD5 52C3 8E3C 4140 715F BA06 3F25 A8FE 3BAE 9A68

2008-07-15 19:52:01

by Peter Zijlstra

[permalink] [raw]
Subject: Re: [patch 01/15] Kernel Tracepoints

On Tue, 2008-07-15 at 15:02 -0400, Mathieu Desnoyers wrote:
> * Peter Zijlstra ([email protected]) wrote:

> > Would it make sense to add:
> >
> > rcu_read_sched_lock()
> > rcu_read_sched_unlock()
> >
> > to match:
> >
> > call_rcu_sched()
> > rcu_barrier_sched()
> > synchronize_sched()
> >
> > ?
> >
>
> Actually I think it's better to call them
> rcu_read_lock_sched() and rcu_read_unlock_sched() to match the _bh()
> equivalent already in rcupdate.h.

Sure,.

2008-08-01 21:14:58

by Paul E. McKenney

[permalink] [raw]
Subject: Re: [patch 01/15] Kernel Tracepoints

On Tue, Jul 15, 2008 at 12:08:13PM -0400, Mathieu Desnoyers wrote:
> * Peter Zijlstra ([email protected]) wrote:
> > On Tue, 2008-07-15 at 11:22 -0400, Mathieu Desnoyers wrote:
> > > * Peter Zijlstra ([email protected]) wrote:
> > > >
> > > > I'm confused by the barrier games here.
> > > >
> > > > Why not:
> > > >
> > > > void **it_func;
> > > >
> > > > preempt_disable();
> > > > it_func = rcu_dereference((tp)->funcs);
> > > > if (it_func) {
> > > > for (; *it_func; it_func++)
> > > > ((void(*)(proto))(*it_func))(args);
> > > > }
> > > > preempt_enable();
> > > >
> > > > That is, why can we skip the barrier when !it_func? is that because at
> > > > that time we don't actually dereference it_func and therefore cannot
> > > > observe stale data?
> > > >
> > >
> > > Exactly. I used the implementation of rcu_assign_pointer as a hint that
> > > we did not need barriers when setting the pointer to NULL, and thus we
> > > should not need the read barrier when reading the NULL pointer, because
> > > it references no data.
> > >
> > > #define rcu_assign_pointer(p, v) \
> > > ({ \
> > > if (!__builtin_constant_p(v) || \
> > > ((v) != NULL)) \
> > > smp_wmb(); \
> > > (p) = (v); \
> > > })
> >
> > Yeah, I saw that,.. made me wonder. It basically assumes that when we
> > write:
> >
> > rcu_assign_pointer(foo, NULL);
> >
> > foo will not be used as an index or offset.
> >
> > I guess Paul has thought it through and verified all in-kernel use
> > cases, but it still makes me feel uncomfortable.
> >
> > > #define rcu_dereference(p) ({ \
> > > typeof(p) _________p1 = ACCESS_ONCE(p); \
> > > smp_read_barrier_depends(); \
> > > (_________p1); \
> > > })
> > >
> > > But I think you are right, since we are already in unlikely code, using
> > > rcu_dereference as you do is better than my use of read barrier depends.
> > > It should not change anything in the assembly result except on alpha,
> > > where the read_barrier_depends() is not a nop.
> > >
> > > I wonder if there would be a way to add this kind of NULL pointer case
> > > check without overhead in rcu_dereference() on alpha. I guess not, since
> > > the pointer is almost never known at compile-time. And I guess Paul must
> > > already have thought about it. The only case where we could add this
> > > test is when we know that we have a if (ptr != NULL) test following the
> > > rcu_dereference(); we could then assume the compiler will merge the two
> > > branches since they depend on the same condition.
> >
> > I remember seeing a thread about all this special casing NULL, but have
> > never been able to find it again - my google skillz always fail me.
> >
> > Basically it doesn't work if you use the variable as an index/offset,
> > because in that case 0 is a valid offset and you still generate a data
> > dependency.
> >
> > IIRC the conclusion was that the gains were too small to spend more time
> > on it, although I would like to hear about the special case in
> > rcu_assign_pointer.
> >
> > /me goes use git blame....
> >
>
> Actually, we could probably do the following, which also adds an extra
> coherency check about non-NULL pointer assumptions :
>
> #ifdef CONFIG_RCU_DEBUG /* this would be new */
> #define DEBUG_RCU_BUG_ON(x) BUG_ON(x)
> #else
> #define DEBUG_RCU_BUG_ON(x)
> #endif
>
> #define rcu_dereference(p) ({ \
> typeof(p) _________p1 = ACCESS_ONCE(p); \
> if (p != NULL) \
> smp_read_barrier_depends(); \
> (_________p1); \
> })
>
> #define rcu_dereference_non_null(p) ({ \
> typeof(p) _________p1 = ACCESS_ONCE(p); \
> DEBUG_RCU_BUG_ON(p == NULL); \
> smp_read_barrier_depends(); \
> (_________p1); \
> })

The big question is "why"? smp_read_barrier_depends() is pretty
lightweight, after all.

Thanx, Paul

> The use-case where rcu_dereference() would be used is when it is
> followed by a null pointer check (grepping through the sources shows me
> this is a very, very common case). In rare cases, it is assumed that the
> pointer is never NULL and it is used just after the rcu_dereference. In
> those cases, the extra test could be saved on alpha by using
> rcu_dereference_non_null(p), which would check that the pointer is indeed
> never NULL under some debug kernel configuration.
>
> Does it make sense ?
>
> Mathieu
>
> > > > If so, does this really matter since we're already in an unlikely
> > > > section? Again, if so, this deserves a comment ;-)
> > > >
> > > > [ still think those preempt_* calls should be called
> > > > rcu_read_sched_lock() or such. ]
> > > >
> > > > Anyway, does this still generate better code?
> > > >
> > >
> > > On x86_64 :
> > >
> > > 820: bf 01 00 00 00 mov $0x1,%edi
> > > 825: e8 00 00 00 00 callq 82a <thread_return+0x136>
> > > 82a: 48 8b 1d 00 00 00 00 mov 0x0(%rip),%rbx # 831 <thread_return+0x13d>
> > > 831: 48 85 db test %rbx,%rbx
> > > 834: 75 21 jne 857 <thread_return+0x163>
> > > 836: eb 27 jmp 85f <thread_return+0x16b>
> > > 838: 0f 1f 84 00 00 00 00 nopl 0x0(%rax,%rax,1)
> > > 83f: 00
> > > 840: 48 8b 95 68 ff ff ff mov -0x98(%rbp),%rdx
> > > 847: 48 8b b5 60 ff ff ff mov -0xa0(%rbp),%rsi
> > > 84e: 4c 89 e7 mov %r12,%rdi
> > > 851: 48 83 c3 08 add $0x8,%rbx
> > > 855: ff d0 callq *%rax
> > > 857: 48 8b 03 mov (%rbx),%rax
> > > 85a: 48 85 c0 test %rax,%rax
> > > 85d: 75 e1 jne 840 <thread_return+0x14c>
> > > 85f: bf 01 00 00 00 mov $0x1,%edi
> > > 864:
> > >
> > > for 68 bytes.
> > >
> > > My original implementation was 77 bytes, so yes, we have a win.
> >
> > Ah, good good ! :-)
> >
>
> --
> Mathieu Desnoyers
> OpenPGP key fingerprint: 8CD5 52C3 8E3C 4140 715F BA06 3F25 A8FE 3BAE 9A68

2008-08-01 21:14:42

by Paul E. McKenney

[permalink] [raw]
Subject: Re: [patch 01/15] Kernel Tracepoints

On Tue, Jul 15, 2008 at 11:50:18AM -0400, Mathieu Desnoyers wrote:
> * Peter Zijlstra ([email protected]) wrote:
> > On Tue, 2008-07-15 at 11:22 -0400, Mathieu Desnoyers wrote:
> > > * Peter Zijlstra ([email protected]) wrote:
> > > >
> > > > I'm confused by the barrier games here.
> > > >
> > > > Why not:
> > > >
> > > > void **it_func;
> > > >
> > > > preempt_disable();
> > > > it_func = rcu_dereference((tp)->funcs);
> > > > if (it_func) {
> > > > for (; *it_func; it_func++)
> > > > ((void(*)(proto))(*it_func))(args);
> > > > }
> > > > preempt_enable();
> > > >
> > > > That is, why can we skip the barrier when !it_func? is that because at
> > > > that time we don't actually dereference it_func and therefore cannot
> > > > observe stale data?
> > > >
> > >
> > > Exactly. I used the implementation of rcu_assign_pointer as a hint that
> > > we did not need barriers when setting the pointer to NULL, and thus we
> > > should not need the read barrier when reading the NULL pointer, because
> > > it references no data.
> > >
> > > #define rcu_assign_pointer(p, v) \
> > > ({ \
> > > if (!__builtin_constant_p(v) || \
> > > ((v) != NULL)) \
> > > smp_wmb(); \
> > > (p) = (v); \
> > > })
> >
> > Yeah, I saw that,.. made me wonder. It basically assumes that when we
> > write:
> >
> > rcu_assign_pointer(foo, NULL);
> >
> > foo will not be used as an index or offset.
> >
> > I guess Paul has thought it through and verified all in-kernel use
> > cases, but it still makes me feel uncomfortable.

The idea was to create an rcu_assign_index() for that case, if and when
it arose, something like the following:

#define rcu_assign_index(p, v, a) \
({ \
smp_wmb(); \
(p) = (v); \
})
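
For completeness, a hypothetical call site matching the three-argument form
above, with the reader side written as in the earlier table[] example
(cur_idx, new_idx and table are made-up names; the third argument is unused
by the sketch above):

	rcu_assign_index(cur_idx, new_idx, table);	/* updater side */

	i = rcu_dereference(cur_idx);			/* reader side */
	j = table[i];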

Thanx, Paul

> > > #define rcu_dereference(p) ({ \
> > > typeof(p) _________p1 = ACCESS_ONCE(p); \
> > > smp_read_barrier_depends(); \
> > > (_________p1); \
> > > })
> > >
> > > But I think you are right, since we are already in unlikely code, using
> > > rcu_dereference as you do is better than my use of read barrier depends.
> > > It should not change anything in the assembly result except on alpha,
> > > where the read_barrier_depends() is not a nop.
> > >
> > > I wonder if there would be a way to add this kind of NULL pointer case
> > > check without overhead in rcu_dereference() on alpha. I guess not, since
> > > the pointer is almost never known at compile-time. And I guess Paul must
> > > already have thought about it. The only case where we could add this
> > > test is when we know that we have a if (ptr != NULL) test following the
> > > rcu_dereference(); we could then assume the compiler will merge the two
> > > branches since they depend on the same condition.
> >
> > I remember seeing a thread about all this special casing NULL, but have
> > never been able to find it again - my google skillz always fail me.
> >
> > Basically it doesn't work if you use the variable as an index/offset,
> > because in that case 0 is a valid offset and you still generate a data
> > dependency.
> >
> > IIRC the conclusion was that the gains were too small to spend more time
> > on it, although I would like to hear about the special case in
> > rcu_assign_pointer.
> >
> > /me goes use git blame....
> >
>
> Seems to come from :
>
> commit d99c4f6b13b3149bc83703ab1493beaeaaaf8a2d
>
> which refers to this discussion :
>
> http://www.mail-archive.com/[email protected]/msg54852.html
>
> Mathieu
>
>
> > > > If so, does this really matter since we're already in an unlikely
> > > > section? Again, if so, this deserves a comment ;-)
> > > >
> > > > [ still think those preempt_* calls should be called
> > > > rcu_read_sched_lock() or such. ]
> > > >
> > > > Anyway, does this still generate better code?
> > > >
> > >
> > > On x86_64 :
> > >
> > > 820: bf 01 00 00 00 mov $0x1,%edi
> > > 825: e8 00 00 00 00 callq 82a <thread_return+0x136>
> > > 82a: 48 8b 1d 00 00 00 00 mov 0x0(%rip),%rbx # 831 <thread_return+0x13d>
> > > 831: 48 85 db test %rbx,%rbx
> > > 834: 75 21 jne 857 <thread_return+0x163>
> > > 836: eb 27 jmp 85f <thread_return+0x16b>
> > > 838: 0f 1f 84 00 00 00 00 nopl 0x0(%rax,%rax,1)
> > > 83f: 00
> > > 840: 48 8b 95 68 ff ff ff mov -0x98(%rbp),%rdx
> > > 847: 48 8b b5 60 ff ff ff mov -0xa0(%rbp),%rsi
> > > 84e: 4c 89 e7 mov %r12,%rdi
> > > 851: 48 83 c3 08 add $0x8,%rbx
> > > 855: ff d0 callq *%rax
> > > 857: 48 8b 03 mov (%rbx),%rax
> > > 85a: 48 85 c0 test %rax,%rax
> > > 85d: 75 e1 jne 840 <thread_return+0x14c>
> > > 85f: bf 01 00 00 00 mov $0x1,%edi
> > > 864:
> > >
> > > for 68 bytes.
> > >
> > > My original implementation was 77 bytes, so yes, we have a win.
> >
> > Ah, good good ! :-)
> >
>
> --
> Mathieu Desnoyers
> OpenPGP key fingerprint: 8CD5 52C3 8E3C 4140 715F BA06 3F25 A8FE 3BAE 9A68

2008-08-01 21:15:35

by Paul E. McKenney

[permalink] [raw]
Subject: Re: [patch 01/15] Kernel Tracepoints

On Tue, Jul 15, 2008 at 06:25:49PM +0200, Peter Zijlstra wrote:
> On Tue, 2008-07-15 at 12:08 -0400, Mathieu Desnoyers wrote:
> > * Peter Zijlstra ([email protected]) wrote:
> > > On Tue, 2008-07-15 at 11:22 -0400, Mathieu Desnoyers wrote:
> > > > * Peter Zijlstra ([email protected]) wrote:
> > > > >
> > > > > I'm confused by the barrier games here.
> > > > >
> > > > > Why not:
> > > > >
> > > > > void **it_func;
> > > > >
> > > > > preempt_disable();
> > > > > it_func = rcu_dereference((tp)->funcs);
> > > > > if (it_func) {
> > > > > for (; *it_func; it_func++)
> > > > > ((void(*)(proto))(*it_func))(args);
> > > > > }
> > > > > preempt_enable();
> > > > >
> > > > > That is, why can we skip the barrier when !it_func? is that because at
> > > > > that time we don't actually dereference it_func and therefore cannot
> > > > > observe stale data?
> > > > >
> > > >
> > > > Exactly. I used the implementation of rcu_assign_pointer as a hint that
> > > > we did not need barriers when setting the pointer to NULL, and thus we
> > > > should not need the read barrier when reading the NULL pointer, because
> > > > it references no data.
> > > >
> > > > #define rcu_assign_pointer(p, v) \
> > > > ({ \
> > > > if (!__builtin_constant_p(v) || \
> > > > ((v) != NULL)) \
> > > > smp_wmb(); \
> > > > (p) = (v); \
> > > > })
> > >
> > > Yeah, I saw that,.. made me wonder. It basically assumes that when we
> > > write:
> > >
> > > rcu_assign_pointer(foo, NULL);
> > >
> > > foo will not be used as an index or offset.
> > >
> > > I guess Paul has thought it through and verified all in-kernel use
> > > cases, but it still makes me feel uncomfortable.
> > >
> > > > #define rcu_dereference(p) ({ \
> > > > typeof(p) _________p1 = ACCESS_ONCE(p); \
> > > > smp_read_barrier_depends(); \
> > > > (_________p1); \
> > > > })
> > > >
> > > > But I think you are right, since we are already in unlikely code, using
> > > > rcu_dereference as you do is better than my use of read barrier depends.
> > > > It should not change anything in the assembly result except on alpha,
> > > > where the read_barrier_depends() is not a nop.
> > > >
> > > > I wonder if there would be a way to add this kind of NULL pointer case
> > > > check without overhead in rcu_dereference() on alpha. I guess not, since
> > > > the pointer is almost never known at compile-time. And I guess Paul must
> > > > already have thought about it. The only case where we could add this
> > > > test is when we know that we have a if (ptr != NULL) test following the
> > > > rcu_dereference(); we could then assume the compiler will merge the two
> > > > branches since they depend on the same condition.
> > >
> > > I remember seeing a thread about all this special casing NULL, but have
> > > never been able to find it again - my google skillz always fail me.
> > >
> > > Basically it doesn't work if you use the variable as an index/offset,
> > > because in that case 0 is a valid offset and you still generate a data
> > > dependency.
> > >
> > > IIRC the conclusion was that the gains were too small to spend more time
> > > on it, although I would like to hear about the special case in
> > > rcu_assign_pointer.
> > >
> > > /me goes use git blame....
> > >
> >
> > Actually, we could probably do the following, which also adds an extra
> > coherency check about non-NULL pointer assumptions :
> >
> > #ifdef CONFIG_RCU_DEBUG /* this would be new */
> > #define DEBUG_RCU_BUG_ON(x) BUG_ON(x)
> > #else
> > #define DEBUG_RCU_BUG_ON(x)
> > #endif
> >
> > #define rcu_dereference(p) ({ \
> > typeof(p) _________p1 = ACCESS_ONCE(p); \
> > if (p != NULL) \
> > smp_read_barrier_depends(); \
> > (_________p1); \
> > })
> >
> > #define rcu_dereference_non_null(p) ({ \
> > typeof(p) _________p1 = ACCESS_ONCE(p); \
> > DEBUG_RCU_BUG_ON(p == NULL); \
> > smp_read_barrier_depends(); \
> > (_________p1); \
> > })
> >
> > The use-case where rcu_dereference() would be used is when it is
> > followed by a null pointer check (grepping through the sources shows me
> > this is a very, very common case). In rare cases, it is assumed that the
> > pointer is never NULL and it is used just after the rcu_dereference. In
> > those cases, the extra test could be saved on alpha by using
> > rcu_dereference_non_null(p), which would check that the pointer is indeed
> > never NULL under some debug kernel configuration.
> >
> > Does it make sense ?
>
> This would break the case where the dereferenced variable is used as an
> index/offset where 0 is a valid value and still generates data
> dependencies.
>
> So if with your new version we do:
>
> i = rcu_dereference(foo);
> j = table[i];
>
> which translates into:
>
> i = ACCESS_ONCE(foo);
> if (i)
> smp_read_barrier_depends();
> j = table[i];
>
> which when i == 0, would fail to do the barrier and can thus cause j to
> be a wrong value.
>
> Sadly I'll have to defer to Paul to explain exactly how that can happen
> - I always get my head in a horrible twist with this case.

Does http://lkml.org/lkml/2008/2/2/255 help? (Hmmm... I was intending
to do a more formal write up of this, but clearly haven't gotten to it...)

Thanx, Paul

2008-08-01 21:15:56

by Paul E. McKenney

[permalink] [raw]
Subject: Re: [patch 01/15] Kernel Tracepoints

On Tue, Jul 15, 2008 at 12:51:23PM -0400, Mathieu Desnoyers wrote:
> * Peter Zijlstra ([email protected]) wrote:
> > On Tue, 2008-07-15 at 12:08 -0400, Mathieu Desnoyers wrote:
> > > * Peter Zijlstra ([email protected]) wrote:
> > > > On Tue, 2008-07-15 at 11:22 -0400, Mathieu Desnoyers wrote:
> > > > > * Peter Zijlstra ([email protected]) wrote:
> > > > > >
> > > > > > I'm confused by the barrier games here.
> > > > > >
> > > > > > Why not:
> > > > > >
> > > > > > void **it_func;
> > > > > >
> > > > > > preempt_disable();
> > > > > > it_func = rcu_dereference((tp)->funcs);
> > > > > > if (it_func) {
> > > > > > for (; *it_func; it_func++)
> > > > > > ((void(*)(proto))(*it_func))(args);
> > > > > > }
> > > > > > preempt_enable();
> > > > > >
> > > > > > That is, why can we skip the barrier when !it_func? is that because at
> > > > > > that time we don't actually dereference it_func and therefore cannot
> > > > > > observe stale data?
> > > > > >
> > > > >
> > > > > Exactly. I used the implementation of rcu_assign_pointer as a hint that
> > > > > we did not need barriers when setting the pointer to NULL, and thus we
> > > > > should not need the read barrier when reading the NULL pointer, because
> > > > > it references no data.
> > > > >
> > > > > #define rcu_assign_pointer(p, v) \
> > > > > ({ \
> > > > > if (!__builtin_constant_p(v) || \
> > > > > ((v) != NULL)) \
> > > > > smp_wmb(); \
> > > > > (p) = (v); \
> > > > > })
> > > >
> > > > Yeah, I saw that,.. made me wonder. It basically assumes that when we
> > > > write:
> > > >
> > > > rcu_assign_pointer(foo, NULL);
> > > >
> > > > foo will not be used as an index or offset.
> > > >
> > > > I guess Paul has thought it through and verified all in-kernel use
> > > > cases, but it still makes me feel uncomfortable.
> > > >
> > > > > #define rcu_dereference(p) ({ \
> > > > > typeof(p) _________p1 = ACCESS_ONCE(p); \
> > > > > smp_read_barrier_depends(); \
> > > > > (_________p1); \
> > > > > })
> > > > >
> > > > > But I think you are right: since we are already in unlikely code, using
> > > > > rcu_dereference() as you do is better than my use of smp_read_barrier_depends().
> > > > > It should not change anything in the assembly result except on alpha,
> > > > > where the read_barrier_depends() is not a nop.
> > > > >
> > > > > I wonder if there would be a way to add this kind of NULL pointer case
> > > > > check without overhead in rcu_dereference() on alpha. I guess not, since
> > > > > the pointer is almost never known at compile-time. And I guess Paul must
> > > > > already have thought about it. The only case where we could add this
> > > > > test is when we know that we have an if (ptr != NULL) test following the
> > > > > rcu_dereference(); we could then assume the compiler will merge the two
> > > > > branches since they depend on the same condition.
> > > >
> > > > I remember seeing a thread about all this special casing NULL, but have
> > > > never been able to find it again - my google skillz always fail me.
> > > >
> > > > Basically it doesn't work if you use the variable as an index/offset,
> > > > because in that case 0 is a valid offset and you still generate a data
> > > > dependency.
> > > >
> > > > IIRC the conclusion was that the gains were too small to spend more time
> > > > on it, although I would like to hear about the special case in
> > > > rcu_assign_pointer.
> > > >
> > > > /me goes use git blame....
> > > >
> > >
> > > Actually, we could probably do the following, which also adds an extra
> > > coherency check about non-NULL pointer assumptions:
> > >
> > > #ifdef CONFIG_RCU_DEBUG /* this would be new */
> > > #define DEBUG_RCU_BUG_ON(x) BUG_ON(x)
> > > #else
> > > #define DEBUG_RCU_BUG_ON(x)
> > > #endif
> > >
> > > #define rcu_dereference(p) ({ \
> > > typeof(p) _________p1 = ACCESS_ONCE(p); \
> > > if (_________p1 != NULL) \
> > > smp_read_barrier_depends(); \
> > > (_________p1); \
> > > })
> > >
> > > #define rcu_dereference_non_null(p) ({ \
> > > typeof(p) _________p1 = ACCESS_ONCE(p); \
> > > DEBUG_RCU_BUG_ON(_________p1 == NULL); \
> > > smp_read_barrier_depends(); \
> > > (_________p1); \
> > > })
> > >
> > > The use-case where rcu_dereference() would be used is when it is
> > > followed by a null pointer check (grepping through the sources shows me
> > > this is a very, very common case). In rare cases, it is assumed that the
> > > pointer is never NULL and it is used just after the rcu_dereference(). In
> > > those cases, the extra test could be saved on alpha by using
> > > rcu_dereference_non_null(p), which would check that the pointer is indeed
> > > never NULL under some debug kernel configuration.
> > >
> > > Does it make sense?
> >
> > This would break the case where the dereferenced variable is used as an
> > index/offset where 0 is a valid value and still generates data
> > dependencies.
> >
> > So if with your new version we do:
> >
> > i = rcu_dereference(foo);
> > j = table[i];
> >
> > which translates into:
> >
> > i = ACCESS_ONCE(foo);
> > if (i)
> > smp_read_barrier_depends();
> > j = table[i];
> >
> > which, when i == 0, would fail to do the barrier and can thus cause j to
> > be a wrong value.
> >
> > Sadly I'll have to defer to Paul to explain exactly how that can happen
> > - I always get my head in a horrible twist with this case.
> >
>
> I completely agree with you. However, given the current
> rcu_assign_pointer() implementation, we already have this problem. My
> proposal assumes the current rcu_assign_pointer() behavior is correct
> and that RCU-published values are never used as indexes/offsets.
>
> We could enforce this as a compile-time check with something along the
> lines of :
>
> #define BUILD_BUG_ON_NOT_OFFSETABLE(x) (void)(x)[0]
>
> And use it in both rcu_assign_pointer() and rcu_dereference(). It would
> reject any type passed to rcu_assign_pointer() or rcu_dereference()
> that is neither a pointer nor an array.
>
> Then if someone really wants to shoot himself in the foot by casting a
> pointer to a long after the rcu_dereference(), that's his problem.
>
> Hrm, looking at rcu_assign_pointer tells me that the ((v) != NULL) test
> should probably already complain if v is not a pointer. So my build test
> is probably unneeded.

Yeah, I was thinking in terms of rcu_dereference() working with both
rcu_assign_pointer() and an as-yet-mythical rcu_assign_index(). Perhaps
this would be a good time to get better names:

Current: rcu_assign_pointer() rcu_dereference()
New Pointers: rcu_publish_pointer() rcu_subscribe_pointer()
New Indexes: rcu_publish_index() rcu_subscribe_index()
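
For concreteness, the mythical index flavors might look something like
the following sketch (pure assumption: it simply mirrors
rcu_assign_pointer()/rcu_dereference(), but with the barriers made
unconditional, since 0 is a valid index):

#define rcu_publish_index(p, v) \
	({ \
		smp_wmb();	/* no NULL special case: 0 is valid */ \
		(p) = (v); \
	})

#define rcu_subscribe_index(p) \
	({ \
		typeof(p) _________idx = ACCESS_ONCE(p); \
		smp_read_barrier_depends();	/* unconditional, even for 0 */ \
		(_________idx); \
	})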

And, while I am at it, work in a way of checking for either being in
the appropriate RCU read-side critical section and/or having the
needed lock/mutex/whatever held -- something I believe PeterZ was
prototyping some months back.

Though I still am having a hard time with the conditional in
rcu_dereference() vs. the smp_read_barrier_depends()...

Thanx, Paul
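
As an aside on Mathieu's proposed compile-time check above: subscripting
is only defined for pointer and array types, so a check along those lines
does catch scalars at compile time. The variant sketched below (an
illustration, not an existing kernel macro) wraps the subscript in sizeof
so it is never actually evaluated and no load is generated:

#define BUILD_BUG_ON_NOT_OFFSETABLE(x) ((void)sizeof((x)[0]))

int *good_pointer;
int good_array[4];
long bad_scalar;

void offsetable_check_example(void)
{
	BUILD_BUG_ON_NOT_OFFSETABLE(good_pointer);	/* compiles */
	BUILD_BUG_ON_NOT_OFFSETABLE(good_array);	/* compiles */
	/*
	 * BUILD_BUG_ON_NOT_OFFSETABLE(bad_scalar); would fail to compile:
	 * "subscripted value is neither array nor pointer".
	 */
}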

2008-08-02 00:04:19

by Peter Zijlstra

[permalink] [raw]
Subject: Re: [patch 01/15] Kernel Tracepoints

On Fri, 2008-08-01 at 14:10 -0700, Paul E. McKenney wrote:

> Yeah, I was thinking in terms of rcu_dereference() working with both
> rcu_assign_pointer() and an as-yet-mythical rcu_assign_index(). Perhaps
> this would be a good time to get better names:
>
> Current: rcu_assign_pointer() rcu_dereference()
> New Pointers: rcu_publish_pointer() rcu_subscribe_pointer()
> New Indexes: rcu_publish_index() rcu_subscribe_index()

Is it really worth the effort, splitting it out into these two cases?

> And, while I am at it, work in a way of checking for either being in
> the appropriate RCU read-side critical section and/or having the
> needed lock/mutex/whatever held -- something I believe PeterZ was
> prototyping some months back.

Yeah - I have lockdep annotations for rcu_dereference() (bitrotted a
bit, but they should be salvageable).

The problem with them is the huge number of false positives. Take, for
example, the radix tree code: it is perfectly fine to use the radix tree
code without RCU - say, in the old rwlock style - yet it still uses
rcu_dereference().

I never figured out a suitable way to annotate that.
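
A hypothetical kernel-style sketch of the false-positive pattern (the
radix tree internals are elided; lookup() stands in for the shared read
path): the same rcu_dereference()-based lookup is legitimately called
both inside and outside an RCU read-side critical section, so an
annotation demanding rcu_read_lock() would falsely flag the second
caller:

#include <linux/rcupdate.h>
#include <linux/spinlock.h>

static void *root;			/* RCU-published root slot */
static DEFINE_RWLOCK(tree_lock);

/* Shared read path: always loads the slot via rcu_dereference(). */
static void *lookup(void)
{
	return rcu_dereference(root);
}

void rcu_reader(void)
{
	rcu_read_lock();
	lookup();		/* fine: inside an RCU read-side section */
	rcu_read_unlock();
}

void rwlock_reader(void)
{
	read_lock(&tree_lock);
	lookup();		/* equally safe - writers are excluded by the
				   lock - but a naive annotation complains */
	read_unlock(&tree_lock);
}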

2008-08-02 00:17:25

by Paul E. McKenney

[permalink] [raw]
Subject: Re: [patch 01/15] Kernel Tracepoints

On Sat, Aug 02, 2008 at 02:03:57AM +0200, Peter Zijlstra wrote:
> On Fri, 2008-08-01 at 14:10 -0700, Paul E. McKenney wrote:
>
> > Yeah, I was thinking in terms of rcu_dereference() working with both
> > rcu_assign_pointer() and an as-yet-mythical rcu_assign_index(). Perhaps
> > this would be a good time to get better names:
> >
> > Current: rcu_assign_pointer() rcu_dereference()
> > New Pointers: rcu_publish_pointer() rcu_subscribe_pointer()
> > New Indexes: rcu_publish_index() rcu_subscribe_index()
>
> Is it really worth the effort, splitting it out into these two cases?

Either we should split this out into the pointer/index cases, or the
definition of rcu_assign_pointer() should probably lose its current
check for NULL...
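
Concretely, losing the check would mean publishing with an unconditional
barrier, NULL or not. A sketch of that direction (an assumption about
the intent, not a posted patch):

#define rcu_assign_pointer(p, v) \
	({ \
		smp_wmb();	/* order prior stores before publication, */ \
		(p) = (v);	/* unconditionally - even for NULL/0 */ \
	})

This would make the assignment side safe for index values, at the cost
of an smp_wmb() on every NULL assignment.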

> > And, while I am at it, work in a way of checking for either being in
> > the appropriate RCU read-side critical section and/or having the
> > needed lock/mutex/whatever held -- something I believe PeterZ was
> > prototyping some months back.
>
> Yeah - I have lockdep annotations for rcu_dereference() (bitrotted a
> bit, but they should be salvageable).
>
> The problem with them is the huge number of false positives. Take, for
> example, the radix tree code: it is perfectly fine to use the radix tree
> code without RCU - say, in the old rwlock style - yet it still uses
> rcu_dereference().
>
> I never figured out a suitable way to annotate that.

My thought was to add a second argument that contained a boolean. If
the rcu_dereference() was either within an RCU read-side critical
section on the one hand, or the boolean evaluated to "true" on the
other, then no assertion would fire. This would require SPIN_LOCK_HELD()
or similar primitives. (And that is one of the reasons for the renaming
suggested above.)

Of course, in the case of the radix tree, it would be necessary to pass the
boolean in through the radix-tree read-side APIs, which would perhaps be
a bit annoying.

Thanx, Paul
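
A hypothetical sketch of the two-argument primitive described above.
Note the assumptions: rcu_read_lock_held(), and an RWLOCK_HELD() in the
style of the SPIN_LOCK_HELD() mentioned above, did not exist at the
time; DEBUG_RCU_BUG_ON() is the debug hook proposed earlier in the
thread. (The kernel much later gained rcu_dereference_check() along
these lines.)

#define rcu_dereference_guarded(p, guard) ({ \
	typeof(p) _________p1 = ACCESS_ONCE(p); \
	/* in an RCU read-side section, or the caller vouches via guard */ \
	DEBUG_RCU_BUG_ON(!rcu_read_lock_held() && !(guard)); \
	smp_read_barrier_depends(); \
	(_________p1); \
})

A caller in the rwlock-style radix tree case would then vouch for
itself:

	slot = rcu_dereference_guarded(root, RWLOCK_HELD(&tree_lock));

while RCU read-side callers would simply pass 0 and rely on the
critical-section check.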