2010-01-22 13:41:46

by Hitoshi Mitake

Subject: [PATCH 00/12] perf lock: New subcommand "perf lock", for analyzing lock statistics

This series adds the new subcommand "perf lock" to perf.

I made this patch series on the
latest perf/core of tip (ef12a141306c90336a3a10d40213ecd98624d274),
so please apply it to perf/core.

Sorry for my slow work, Frederic and Ingo.
I still have a lot of TODOs,
but perf lock can finally provide minimal functionality for analyzing lock statistics.

This patch series has a serious problem:
it has to revert f5a2c3dce03621b55f84496f58adc2d1a87ca16f in order to work.

... Quoting description at 12/12 ...
This patch is required to make "perf lock rec" work.
Commit f5a2c3dce0 changed write_event() in builtin-record.c,
and the changed write_event() sometimes doesn't stop with perf lock rec.

I'm researching what makes write_event() loop infinitely,
and noticed that there are some events with event_t->header.size == 0.
But the details of the problem,
such as what kind of events these are, aren't clear...

If you know something related to this problem,
could you tell me, Arnaldo?

Signed-off-by: Hitoshi Mitake <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Paul Mackerras <[email protected]>
Cc: Tom Zanussi <[email protected]>
Cc: Steven Rostedt <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Greg Kroah-Hartman <[email protected]>

Hitoshi Mitake (12):
perf tools: Add __data_loc support
perf: Add util/include/linuxhash.h to include hash.h of kernel
lockdep: Add information of file and line where lock inited to struct
lockdep_map
lockdep: Add file and line to initialize sequence of spinlock
lockdep: Add file and line to initialize sequence of rwlock
lockdep: Add file and line to initialize sequence of rwsem
lockdep: Add file and line to initialize sequence of rwsem (x86)
lockdep: Add file and line to initialize sequence of mutex
lockdep: Fix the way to initialize class_mutex for information of
file and line
perf lock: Enhance information of lock trace events
perf lock: New subcommand "perf lock", for analyzing lock statistics
Revert "perf record: Intercept all events"

arch/x86/include/asm/rwsem.h | 9 +-
drivers/base/class.c | 3 +-
include/linux/lockdep.h | 20 +-
include/linux/mutex-debug.h | 2 +-
include/linux/mutex.h | 12 +-
include/linux/rwlock.h | 6 +-
include/linux/rwlock_types.h | 6 +-
include/linux/rwsem-spinlock.h | 11 +-
include/linux/spinlock.h | 6 +-
include/linux/spinlock_types.h | 6 +-
include/trace/events/lock.h | 49 ++-
kernel/lockdep.c | 10 +-
kernel/mutex-debug.c | 5 +-
kernel/mutex-debug.h | 3 +-
kernel/mutex.c | 5 +-
kernel/mutex.h | 2 +-
lib/rwsem-spinlock.c | 5 +-
lib/rwsem.c | 5 +-
lib/spinlock_debug.c | 12 +-
tools/perf/Makefile | 2 +
tools/perf/builtin-lock.c | 720 +++++++++++++++++++++++++++++++++++
tools/perf/builtin-record.c | 28 +-
tools/perf/builtin.h | 1 +
tools/perf/perf.c | 1 +
tools/perf/util/trace-event-parse.c | 7 +
25 files changed, 871 insertions(+), 65 deletions(-)
create mode 100644 tools/perf/builtin-lock.c


2010-01-22 13:39:25

by Hitoshi Mitake

Subject: [PATCH 02/12] perf: Add util/include/linuxhash.h to include hash.h of kernel

linux/hash.h, the kernel's hash header, is also useful for perf.

util/include/linuxhash.h includes linux/hash.h, so we can now use
the kernel's hashing facilities (e.g. hash_long()) in perf.
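
As an illustration, here is a minimal sketch of how hash_long() can pick
a bucket in a fixed-size table. The names and sizes below are hypothetical,
though they mirror what builtin-lock.c does later in this series:

    #include <linux/hash.h>
    #include <linux/list.h>

    #define TABLE_BITS 12
    #define TABLE_SIZE (1UL << TABLE_BITS)

    static struct list_head table[TABLE_SIZE];

    /* hash_long() folds an unsigned long down to TABLE_BITS bits,
     * so the result is always a valid index into table[]. */
    static struct list_head *bucket_of(void *addr)
    {
            return table + hash_long((unsigned long)addr, TABLE_BITS);
    }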

Signed-off-by: Hitoshi Mitake <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Paul Mackerras <[email protected]>
Cc: Frederic Weisbecker <[email protected]>
---
tools/perf/Makefile | 1 +
1 files changed, 1 insertions(+), 0 deletions(-)

diff --git a/tools/perf/Makefile b/tools/perf/Makefile
index 9b173e6..b2bce1f 100644
--- a/tools/perf/Makefile
+++ b/tools/perf/Makefile
@@ -333,6 +333,7 @@ LIB_FILE=libperf.a
LIB_H += ../../include/linux/perf_event.h
LIB_H += ../../include/linux/rbtree.h
LIB_H += ../../include/linux/list.h
+LIB_H += ../../include/linux/hash.h
LIB_H += ../../include/linux/stringify.h
LIB_H += util/include/linux/bitmap.h
LIB_H += util/include/linux/bitops.h
--
1.6.5.2

2010-01-22 13:39:27

by Hitoshi Mitake

Subject: [PATCH 06/12] lockdep: Add file and line to initialize sequence of rwsem

Like the other patches in this series, this patch adds
__FILE__ and __LINE__ to the lockdep_map of rw_semaphore.
This patch only affects the architecture-independent rwsem.

Signed-off-by: Hitoshi Mitake <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Paul Mackerras <[email protected]>
Cc: Frederic Weisbecker <[email protected]>
---
include/linux/rwsem-spinlock.h | 11 ++++++++---
lib/rwsem-spinlock.c | 5 +++--
lib/rwsem.c | 5 +++--
3 files changed, 14 insertions(+), 7 deletions(-)

diff --git a/include/linux/rwsem-spinlock.h b/include/linux/rwsem-spinlock.h
index bdfcc25..53f68e6 100644
--- a/include/linux/rwsem-spinlock.h
+++ b/include/linux/rwsem-spinlock.h
@@ -38,7 +38,11 @@ struct rw_semaphore {
};

#ifdef CONFIG_DEBUG_LOCK_ALLOC
-# define __RWSEM_DEP_MAP_INIT(lockname) , .dep_map = { .name = #lockname }
+# define __RWSEM_DEP_MAP_INIT(lockname) , .dep_map = { \
+ .file = __FILE__, \
+ .line = __LINE__, \
+ .name = #lockname, \
+ }
#else
# define __RWSEM_DEP_MAP_INIT(lockname)
#endif
@@ -51,13 +55,14 @@ struct rw_semaphore {
struct rw_semaphore name = __RWSEM_INITIALIZER(name)

extern void __init_rwsem(struct rw_semaphore *sem, const char *name,
- struct lock_class_key *key);
+ struct lock_class_key *key,
+ const char *file, unsigned int line);

#define init_rwsem(sem) \
do { \
static struct lock_class_key __key; \
\
- __init_rwsem((sem), #sem, &__key); \
+ __init_rwsem((sem), #sem, &__key, __FILE__, __LINE__); \
} while (0)

extern void __down_read(struct rw_semaphore *sem);
diff --git a/lib/rwsem-spinlock.c b/lib/rwsem-spinlock.c
index ccf95bf..4aceeb4 100644
--- a/lib/rwsem-spinlock.c
+++ b/lib/rwsem-spinlock.c
@@ -34,14 +34,15 @@ EXPORT_SYMBOL(rwsem_is_locked);
* initialise the semaphore
*/
void __init_rwsem(struct rw_semaphore *sem, const char *name,
- struct lock_class_key *key)
+ struct lock_class_key *key,
+ const char *file, unsigned int line)
{
#ifdef CONFIG_DEBUG_LOCK_ALLOC
/*
* Make sure we are not reinitializing a held semaphore:
*/
debug_check_no_locks_freed((void *)sem, sizeof(*sem));
- lockdep_init_map(&sem->dep_map, name, key, 0);
+ __lockdep_init_map(&sem->dep_map, name, key, 0, file, line);
#endif
sem->activity = 0;
spin_lock_init(&sem->wait_lock);
diff --git a/lib/rwsem.c b/lib/rwsem.c
index 3e3365e..3f8d5cd 100644
--- a/lib/rwsem.c
+++ b/lib/rwsem.c
@@ -12,14 +12,15 @@
* Initialize an rwsem:
*/
void __init_rwsem(struct rw_semaphore *sem, const char *name,
- struct lock_class_key *key)
+ struct lock_class_key *key,
+ const char *file, unsigned int line)
{
#ifdef CONFIG_DEBUG_LOCK_ALLOC
/*
* Make sure we are not reinitializing a held semaphore:
*/
debug_check_no_locks_freed((void *)sem, sizeof(*sem));
- lockdep_init_map(&sem->dep_map, name, key, 0);
+ __lockdep_init_map(&sem->dep_map, name, key, 0, file, line);
#endif
sem->count = RWSEM_UNLOCKED_VALUE;
spin_lock_init(&sem->wait_lock);
--
1.6.5.2

2010-01-22 13:39:42

by Hitoshi Mitake

Subject: [PATCH 05/12] lockdep: Add file and line to initialize sequence of rwlock

rwlock has two ways to be initialized:
1: the DEFINE_{SPIN,RW}LOCK macros for statically defined locks
2: the {spin_,rw}lock_init() functions for locks on dynamically allocated memory

This patch modifies both initialization sequences
to add __FILE__ and __LINE__ to the lockdep_map.
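
As a minimal sketch, the two paths look like this (assuming lock debugging
is enabled; my_obj and its init function are hypothetical):

    /* 1: statically defined - RW_DEP_MAP_INIT records the file and
     *    line of this definition at compile time */
    static DEFINE_RWLOCK(my_static_lock);

    struct my_obj {
            rwlock_t lock;
    };

    static void my_obj_init(struct my_obj *obj)
    {
            /* 2: dynamic - rwlock_init() expands to __rwlock_init(...,
             *    __FILE__, __LINE__), so the lockdep_map records this
             *    call site */
            rwlock_init(&obj->lock);
    }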

Signed-off-by: Hitoshi Mitake <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Paul Mackerras <[email protected]>
Cc: Frederic Weisbecker <[email protected]>
---
include/linux/rwlock.h | 6 ++++--
include/linux/rwlock_types.h | 6 +++++-
lib/spinlock_debug.c | 6 ++++--
3 files changed, 13 insertions(+), 5 deletions(-)

diff --git a/include/linux/rwlock.h b/include/linux/rwlock.h
index 71e0b00..cdef180 100644
--- a/include/linux/rwlock.h
+++ b/include/linux/rwlock.h
@@ -16,12 +16,14 @@

#ifdef CONFIG_DEBUG_SPINLOCK
extern void __rwlock_init(rwlock_t *lock, const char *name,
- struct lock_class_key *key);
+ struct lock_class_key *key,
+ const char *file, unsigned int line);
# define rwlock_init(lock) \
do { \
static struct lock_class_key __key; \
\
- __rwlock_init((lock), #lock, &__key); \
+ __rwlock_init((lock), #lock, &__key, \
+ __FILE__, __LINE__); \
} while (0)
#else
# define rwlock_init(lock) \
diff --git a/include/linux/rwlock_types.h b/include/linux/rwlock_types.h
index bd31808..e5e70c4 100644
--- a/include/linux/rwlock_types.h
+++ b/include/linux/rwlock_types.h
@@ -25,7 +25,11 @@ typedef struct {
#define RWLOCK_MAGIC 0xdeaf1eed

#ifdef CONFIG_DEBUG_LOCK_ALLOC
-# define RW_DEP_MAP_INIT(lockname) .dep_map = { .name = #lockname }
+# define RW_DEP_MAP_INIT(lockname) .dep_map = { \
+ .name = #lockname, \
+ .file = __FILE__, \
+ .line = __LINE__, \
+ }
#else
# define RW_DEP_MAP_INIT(lockname)
#endif
diff --git a/lib/spinlock_debug.c b/lib/spinlock_debug.c
index 81fa789..1cdae8b 100644
--- a/lib/spinlock_debug.c
+++ b/lib/spinlock_debug.c
@@ -34,14 +34,16 @@ void __raw_spin_lock_init(raw_spinlock_t *lock, const char *name,
EXPORT_SYMBOL(__raw_spin_lock_init);

void __rwlock_init(rwlock_t *lock, const char *name,
- struct lock_class_key *key)
+ struct lock_class_key *key,
+ const char *file, unsigned int line)
{
#ifdef CONFIG_DEBUG_LOCK_ALLOC
/*
* Make sure we are not reinitializing a held lock:
*/
debug_check_no_locks_freed((void *)lock, sizeof(*lock));
- lockdep_init_map(&lock->dep_map, name, key, 0);
+ __lockdep_init_map(&lock->dep_map, name, key, 0,
+ file, line);
#endif
lock->raw_lock = (arch_rwlock_t) __ARCH_RW_LOCK_UNLOCKED;
lock->magic = RWLOCK_MAGIC;
--
1.6.5.2

2010-01-22 13:39:59

by Hitoshi Mitake

Subject: [PATCH 03/12] lockdep: Add information of file and line where lock inited to struct lockdep_map

The __FILE__ and __LINE__ where a lock instance is initialized are useful
information for profiling lock behaviour.
This patch adds them to struct lockdep_map.

The main impact of this patch is that lockdep_init_map() becomes a macro:
the former lockdep_init_map() was renamed to __lockdep_init_map(),
and lockdep_init_map() is now a simple wrapper
that passes __FILE__ and __LINE__ to __lockdep_init_map().
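
One subtlety worth noting, since it motivates the rest of this series: the
macro captures the call site of the macro itself. A hypothetical helper
shows why file and line must instead be threaded through as parameters:

    /* If an init helper called the macro directly... */
    static void init_helper(struct lockdep_map *map, const char *name,
                            struct lock_class_key *key)
    {
            /* ...__FILE__/__LINE__ would always point here, at the
             * helper, not at the lock's real initialization site. */
            lockdep_init_map(map, name, key, 0);
    }

This is why the following patches make helpers such as __init_rwsem() and
__mutex_init() take file and line as explicit parameters and call
__lockdep_init_map() with them, while the outermost macros (init_rwsem(),
mutex_init(), ...) supply __FILE__ and __LINE__.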

Signed-off-by: Hitoshi Mitake <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Paul Mackerras <[email protected]>
Cc: Frederic Weisbecker <[email protected]>
---
include/linux/lockdep.h | 20 ++++++++++++++++----
kernel/lockdep.c | 10 +++++++---
2 files changed, 23 insertions(+), 7 deletions(-)

diff --git a/include/linux/lockdep.h b/include/linux/lockdep.h
index 9ccf0e2..a631afa 100644
--- a/include/linux/lockdep.h
+++ b/include/linux/lockdep.h
@@ -134,6 +134,8 @@ struct lockdep_map {
struct lock_class_key *key;
struct lock_class *class_cache;
const char *name;
+ const char *file;
+ unsigned int line;
#ifdef CONFIG_LOCK_STAT
int cpu;
unsigned long ip;
@@ -241,15 +243,25 @@ extern void lockdep_on(void);
* to lockdep:
*/

-extern void lockdep_init_map(struct lockdep_map *lock, const char *name,
- struct lock_class_key *key, int subclass);
+extern void __lockdep_init_map(struct lockdep_map *lock, const char *name,
+ struct lock_class_key *key, int subclass,
+ const char *file, unsigned int line);
+
+/*
+ * There are many direct calls to __lockdep_init_map() (formerly
+ * lockdep_init_map()), and these lack __FILE__ and __LINE__.
+ * The current lockdep_init_map() is a wrapper that supplies them.
+ */
+#define lockdep_init_map(lock, name, key, subclass) \
+ __lockdep_init_map(lock, name, key, subclass, __FILE__, __LINE__)

/*
* To initialize a lockdep_map statically use this macro.
* Note that _name must not be NULL.
*/
#define STATIC_LOCKDEP_MAP_INIT(_name, _key) \
- { .name = (_name), .key = (void *)(_key), }
+ { .name = (_name), .key = (void *)(_key), \
+ .file = __FILE__, .line = __LINE__}

/*
* Reinitialize a lock key - for cases where there is special locking or
@@ -342,7 +354,7 @@ static inline void lockdep_on(void)
# define lockdep_trace_alloc(g) do { } while (0)
# define lockdep_init() do { } while (0)
# define lockdep_info() do { } while (0)
-# define lockdep_init_map(lock, name, key, sub) \
+# define lockdep_init_map(lock, name, key, sub) \
do { (void)(name); (void)(key); } while (0)
# define lockdep_set_class(lock, key) do { (void)(key); } while (0)
# define lockdep_set_class_and_name(lock, key, name) \
diff --git a/kernel/lockdep.c b/kernel/lockdep.c
index 5feaddc..f0f6dfd 100644
--- a/kernel/lockdep.c
+++ b/kernel/lockdep.c
@@ -2681,8 +2681,9 @@ static int mark_lock(struct task_struct *curr, struct held_lock *this,
/*
* Initialize a lock instance's lock-class mapping info:
*/
-void lockdep_init_map(struct lockdep_map *lock, const char *name,
- struct lock_class_key *key, int subclass)
+void __lockdep_init_map(struct lockdep_map *lock, const char *name,
+ struct lock_class_key *key, int subclass,
+ const char *file, unsigned int line)
{
lock->class_cache = NULL;
#ifdef CONFIG_LOCK_STAT
@@ -2713,8 +2714,11 @@ void lockdep_init_map(struct lockdep_map *lock, const char *name,

if (subclass)
register_lock_class(lock, subclass, 1);
+
+ lock->file = file;
+ lock->line = line;
}
-EXPORT_SYMBOL_GPL(lockdep_init_map);
+EXPORT_SYMBOL_GPL(__lockdep_init_map);

/*
* This gets called for every mutex_lock*()/spin_lock*() operation.
--
1.6.5.2

2010-01-22 13:39:23

by Hitoshi Mitake

Subject: [PATCH 01/12] perf tools: Add __data_loc support

This patch is required to test the next patch for perf lock.

Commit 064739bc4b3d7f424b2f25547e6611bcf0132415 added
support for the format modifier "__data_loc".

But when I wanted to parse the format of lock_acquired (and some
other events), raw_field_ptr() did not return the correct pointer.

So I modified raw_field_ptr() as in this patch. Now
raw_field_ptr() works well.
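
For context: a __data_loc string field does not store the string inline.
It stores a 32-bit descriptor whose low 16 bits are the offset of the
actual data within the record, and whose high 16 bits are its length.
A sketch of the decoding this patch performs:

    int desc   = *(int *)(data + field->offset); /* the __data_loc word */
    int offset = desc & 0xffff;                  /* where the data lives */
    int len    = desc >> 16;                     /* its length (unused here) */
    char *str  = (char *)data + offset;          /* the actual string */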

Signed-off-by: Hitoshi Mitake <[email protected]>
Acked-by: Frederic Weisbecker <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Paul Mackerras <[email protected]>
Cc: Tom Zanussi <[email protected]>
Cc: Steven Rostedt <[email protected]>
---
tools/perf/util/trace-event-parse.c | 7 +++++++
1 files changed, 7 insertions(+), 0 deletions(-)

diff --git a/tools/perf/util/trace-event-parse.c b/tools/perf/util/trace-event-parse.c
index c5c32be..13aea80 100644
--- a/tools/perf/util/trace-event-parse.c
+++ b/tools/perf/util/trace-event-parse.c
@@ -1925,6 +1925,13 @@ void *raw_field_ptr(struct event *event, const char *name, void *data)
if (!field)
return NULL;

+ if (field->flags & FIELD_IS_STRING) {
+ int offset;
+ offset = *(int *)(data + field->offset);
+ offset &= 0xffff;
+ return data + offset;
+ }
+
return data + field->offset;
}

--
1.6.5.2

2010-01-22 13:40:15

by Hitoshi Mitake

Subject: [PATCH 11/12] perf lock: New subcommand "perf lock", for analyzing lock statistics

This patch adds the new subcommand "perf lock" to perf.
I still have a lot of TODOs,
but perf lock can now provide minimal functionality for analyzing lock statistics.

This patch contains some lines over 80 characters,
but I judged that breaking them would hurt readability.

Example of use:
| ### first, we have to record the trace of lock events
| % sudo ./perf lock rec ./perf bench sched messaging
| # Running sched/messaging benchmark...
| # 20 sender and receiver processes per group
| # 10 groups == 400 processes run
|
| Total time: 6.155 [sec]
| [ perf record: Woken up 0 times to write data ]
| [ perf record: Captured and wrote 50.095 MB perf.data (~2188687 samples) ]
| %

| ### then, we can profile the data obtained by above step
| % sudo ./perf lock prof
| ID Name acquired contended total wait (ns) max wait (ns) min wait (ns)
|
| 0xffff8800b6304348 key 66677 14666 90456871 17742 0
| 0xffff8800b36138a8 &(&sem->wait_loc... 4137 0 9277464 12970 1809
| 0xffff8800b3613db8 &(&inode->i_data... 3689 0 18350881 27603 1867
| 0xffff8800b960fb48 &(&mm->page_tabl... 2269 0 12659866 192088 1893
| 0xffff8800b36138e8 &(&ip->i_lock)->... 2070 0 9580218 193988 1903
| 0xffff8800b8843398 &(&log->l_grant_... 2066 0 7226168 118098 1834
| 0xffff8800b3613ea0 &(&inode->i_data... 1850 0 19337447 3676106 1831
| ...

| ### "perf lock map" dumps detailed information of lock instances
| % sudo ./perf lock map
| 0xffff8800bf1eebb8: &(&p->alloc_lock)->rlock (src: kernel/fork.c, line: 1065)
| 0xffffffff81abcfb8: cpufreq_driver_lock (src: drivers/cpufreq/cpufreq.c, line: 46)
| 0xffff880094354b28: &sb->s_type->i_lock_key (src: fs/inode.c, line: 160)
| 0xffff880094350b28: &sb->s_type->i_lock_key (src: fs/inode.c, line: 160)
| 0xffff880094342b28: &sb->s_type->i_lock_key (src: fs/inode.c, line: 160)
| 0xffff88009433eb28: &sb->s_type->i_lock_key (src: fs/inode.c, line: 160)
| 0xffff880094306b28: &sb->s_type->i_lock_key (src: fs/inode.c, line: 160)
| ...

Signed-off-by: Hitoshi Mitake <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Paul Mackerras <[email protected]>
Cc: Frederic Weisbecker <[email protected]>
---
tools/perf/Makefile | 1 +
tools/perf/builtin-lock.c | 720 +++++++++++++++++++++++++++++++++++++++++++++
tools/perf/builtin.h | 1 +
tools/perf/perf.c | 1 +
4 files changed, 723 insertions(+), 0 deletions(-)
create mode 100644 tools/perf/builtin-lock.c

diff --git a/tools/perf/Makefile b/tools/perf/Makefile
index b2bce1f..4296930 100644
--- a/tools/perf/Makefile
+++ b/tools/perf/Makefile
@@ -456,6 +456,7 @@ BUILTIN_OBJS += builtin-top.o
BUILTIN_OBJS += builtin-trace.o
BUILTIN_OBJS += builtin-probe.o
BUILTIN_OBJS += builtin-kmem.o
+BUILTIN_OBJS += builtin-lock.o

PERFLIBS = $(LIB_FILE)

diff --git a/tools/perf/builtin-lock.c b/tools/perf/builtin-lock.c
new file mode 100644
index 0000000..99303c6
--- /dev/null
+++ b/tools/perf/builtin-lock.c
@@ -0,0 +1,720 @@
+#include "builtin.h"
+#include "perf.h"
+
+#include "util/util.h"
+#include "util/cache.h"
+#include "util/symbol.h"
+#include "util/thread.h"
+#include "util/header.h"
+
+#include "util/parse-options.h"
+#include "util/trace-event.h"
+
+#include "util/debug.h"
+#include "util/session.h"
+
+#include <sys/types.h>
+#include <sys/prctl.h>
+#include <semaphore.h>
+#include <pthread.h>
+#include <math.h>
+#include <limits.h>
+
+#include <linux/list.h>
+#include <linux/hash.h>
+
+/* based on kernel/lockdep.c */
+#define LOCKHASH_BITS 12
+#define LOCKHASH_SIZE (1UL << LOCKHASH_BITS)
+
+static struct list_head lockhash_table[LOCKHASH_SIZE];
+
+#define __lockhashfn(key) hash_long((unsigned long)key, LOCKHASH_BITS)
+#define lockhashentry(key) (lockhash_table + __lockhashfn((key)))
+
+#define LOCK_STATE_UNLOCKED 0 /* initial state */
+#define LOCK_STATE_LOCKED 1
+
+struct lock_stat {
+ struct list_head hash_entry;
+ struct rb_node rb; /* used for sorting */
+
+ /* FIXME: raw_field_value() returns unsigned long long,
+ * so the address of the lockdep_map should be dealt with as 64-bit.
+ * Is there a better solution? */
+ void *addr; /* address of lockdep_map, used as ID */
+ char *name; /* for strcpy(), we cannot use const */
+ char *file;
+ unsigned int line;
+
+ int state;
+ u64 prev_event_time; /* timestamp of previous event */
+
+ unsigned int nr_acquired;
+ unsigned int nr_acquire;
+ unsigned int nr_contended;
+ unsigned int nr_release;
+
+ /* these times are in nano sec. */
+ u64 wait_time_total;
+ u64 wait_time_min;
+ u64 wait_time_max;
+};
+
+/* build a simple key function: returns whether one is bigger than two */
+#define SINGLE_KEY(member) \
+ static int lock_stat_key_ ## member(struct lock_stat *one, \
+ struct lock_stat *two) \
+ { \
+ return one->member > two->member; \
+ }
+
+SINGLE_KEY(nr_acquired)
+SINGLE_KEY(nr_contended)
+SINGLE_KEY(wait_time_total)
+SINGLE_KEY(wait_time_min)
+SINGLE_KEY(wait_time_max)
+
+struct lock_key {
+ /*
+ * name: the key name specified by the user;
+ * it should be simpler than the raw member name,
+ * e.g. nr_acquired -> acquired, wait_time_total -> wait_total
+ */
+ const char *name;
+ int (*key)(struct lock_stat*, struct lock_stat*);
+};
+
+static const char *sort_key = "acquired";
+static int (*compare)(struct lock_stat *, struct lock_stat *);
+
+#define DEF_KEY_LOCK(name, fn_suffix) \
+ { #name, lock_stat_key_ ## fn_suffix }
+struct lock_key keys[] = {
+ DEF_KEY_LOCK(acquired, nr_acquired),
+ DEF_KEY_LOCK(contended, nr_contended),
+ DEF_KEY_LOCK(wait_total, wait_time_total),
+ DEF_KEY_LOCK(wait_min, wait_time_min),
+ DEF_KEY_LOCK(wait_max, wait_time_max),
+
+ /* more complicated extra comparisons should go here */
+
+ { NULL, NULL }
+};
+
+static void select_key(void)
+{
+ int i;
+
+ for (i = 0; keys[i].name; i++) {
+ if (!strcmp(keys[i].name, sort_key)) {
+ compare = keys[i].key;
+ return;
+ }
+ }
+
+ die("Unknown compare key:%s\n", sort_key);
+}
+
+static struct rb_root result; /* place to store sorted data */
+
+static void insert_to_result(struct lock_stat *st,
+ int (*bigger)(struct lock_stat *,
+ struct lock_stat *))
+{
+ struct rb_node **rb = &result.rb_node;
+ struct rb_node *parent = NULL;
+ struct lock_stat *p;
+
+ while (*rb) {
+ p = container_of(*rb, struct lock_stat, rb);
+ parent = *rb;
+
+ if (bigger(st, p))
+ rb = &(*rb)->rb_left;
+ else
+ rb = &(*rb)->rb_right;
+ }
+
+ rb_link_node(&st->rb, parent, rb);
+ rb_insert_color(&st->rb, &result);
+}
+
+/* returns left most element of result, and erase it */
+static struct lock_stat *pop_from_result(void)
+{
+ struct rb_node *node = result.rb_node;
+
+ if (!node)
+ return NULL;
+
+ while (node->rb_left)
+ node = node->rb_left;
+
+ rb_erase(node, &result);
+ return container_of(node, struct lock_stat, rb);
+}
+
+static struct lock_stat *lock_stat_findnew(void *addr, const char *name,
+ const char *file, unsigned int line)
+{
+ struct list_head *entry = lockhashentry(addr);
+ struct lock_stat *ret, *new;
+
+ list_for_each_entry(ret, entry, hash_entry) {
+ if (ret->addr == addr)
+ return ret;
+ }
+
+ new = zalloc(sizeof(struct lock_stat));
+ if (!new)
+ goto alloc_failed;
+
+ new->addr = addr;
+ new->name = zalloc(sizeof(char) * strlen(name) + 1);
+ if (!new->name)
+ goto alloc_failed;
+ strcpy(new->name, name);
+ new->file = zalloc(sizeof(char) * strlen(file) + 1);
+ if (!new->file)
+ goto alloc_failed;
+ strcpy(new->file, file);
+ new->line = line;
+
+ /* LOCK_STATE_UNLOCKED == 0 isn't guaranteed forever */
+ new->state = LOCK_STATE_UNLOCKED;
+ new->wait_time_min = ULLONG_MAX;
+
+ list_add(&new->hash_entry, entry);
+ return new;
+
+alloc_failed:
+ die("memory allocation failed\n");
+}
+
+static char const *input_name = "perf.data";
+
+static int profile_cpu = -1;
+
+struct raw_event_sample {
+ u32 size;
+ char data[0];
+};
+
+struct trace_acquire_event {
+ void *addr;
+ const char *name;
+ const char *file;
+ unsigned int line;
+};
+
+struct trace_acquired_event {
+ void *addr;
+ const char *name;
+ const char *file;
+ unsigned int line;
+};
+
+struct trace_contended_event {
+ void *addr;
+ const char *name;
+ const char *file;
+ unsigned int line;
+};
+
+struct trace_release_event {
+ void *addr;
+ const char *name;
+ const char *file;
+ unsigned int line;
+};
+
+struct trace_lock_handler {
+ void (*acquire_event)(struct trace_acquire_event *,
+ struct event *,
+ int cpu,
+ u64 timestamp,
+ struct thread *thread);
+
+ void (*acquired_event)(struct trace_acquired_event *,
+ struct event *,
+ int cpu,
+ u64 timestamp,
+ struct thread *thread);
+
+ void (*contended_event)(struct trace_contended_event *,
+ struct event *,
+ int cpu,
+ u64 timestamp,
+ struct thread *thread);
+
+ void (*release_event)(struct trace_release_event *,
+ struct event *,
+ int cpu,
+ u64 timestamp,
+ struct thread *thread);
+};
+
+static void prof_lock_acquire_event(struct trace_acquire_event *acquire_event,
+ struct event *__event __used,
+ int cpu __used,
+ u64 timestamp,
+ struct thread *thread __used)
+{
+ struct lock_stat *st;
+
+ st = lock_stat_findnew(acquire_event->addr, acquire_event->name,
+ acquire_event->file, acquire_event->line);
+
+ switch (st->state) {
+ case LOCK_STATE_UNLOCKED:
+ break;
+ case LOCK_STATE_LOCKED:
+ break;
+ default:
+ BUG_ON(1);
+ break;
+ }
+
+ st->prev_event_time = timestamp;
+}
+
+static void prof_lock_acquired_event(struct trace_acquired_event *acquired_event,
+ struct event *__event __used,
+ int cpu __used,
+ u64 timestamp,
+ struct thread *thread __used)
+{
+ struct lock_stat *st;
+
+ st = lock_stat_findnew(acquired_event->addr, acquired_event->name,
+ acquired_event->file, acquired_event->line);
+
+ switch (st->state) {
+ case LOCK_STATE_UNLOCKED:
+ st->state = LOCK_STATE_LOCKED;
+ st->nr_acquired++;
+ break;
+ case LOCK_STATE_LOCKED:
+ break;
+ default:
+ BUG_ON(1);
+ break;
+ }
+
+ st->prev_event_time = timestamp;
+}
+
+static void prof_lock_contended_event(struct trace_contended_event *contended_event,
+ struct event *__event __used,
+ int cpu __used,
+ u64 timestamp,
+ struct thread *thread __used)
+{
+ struct lock_stat *st;
+
+ st = lock_stat_findnew(contended_event->addr, contended_event->name,
+ contended_event->file, contended_event->line);
+
+ switch (st->state) {
+ case LOCK_STATE_UNLOCKED:
+ break;
+ case LOCK_STATE_LOCKED:
+ st->nr_contended++;
+ break;
+ default:
+ BUG_ON(1);
+ break;
+ }
+
+ st->prev_event_time = timestamp;
+}
+
+static void prof_lock_release_event(struct trace_release_event *release_event,
+ struct event *__event __used,
+ int cpu __used,
+ u64 timestamp,
+ struct thread *thread __used)
+{
+ struct lock_stat *st;
+ u64 hold_time;
+
+ st = lock_stat_findnew(release_event->addr, release_event->name,
+ release_event->file, release_event->line);
+
+ switch (st->state) {
+ case LOCK_STATE_UNLOCKED:
+ break;
+ case LOCK_STATE_LOCKED:
+ st->state = LOCK_STATE_UNLOCKED;
+ hold_time = timestamp - st->prev_event_time;
+
+ if (timestamp < st->prev_event_time) {
+ /* unfortunately, this can happen... */
+ goto end;
+ }
+
+ if (st->wait_time_min > hold_time)
+ st->wait_time_min = hold_time;
+ if (st->wait_time_max < hold_time)
+ st->wait_time_max = hold_time;
+ st->wait_time_total += hold_time;
+
+ st->nr_release++;
+ break;
+ default:
+ BUG_ON(1);
+ break;
+ }
+
+end:
+ st->prev_event_time = timestamp;
+}
+
+/* lock oriented handlers */
+/* TODO: handlers for CPU oriented, thread oriented */
+static struct trace_lock_handler prof_lock_ops = {
+ .acquire_event = prof_lock_acquire_event,
+ .acquired_event = prof_lock_acquired_event,
+ .contended_event = prof_lock_contended_event,
+ .release_event = prof_lock_release_event,
+};
+
+static struct trace_lock_handler *trace_handler;
+
+static void
+process_lock_acquire_event(void *data,
+ struct event *event __used,
+ int cpu __used,
+ u64 timestamp __used,
+ struct thread *thread __used)
+{
+ struct trace_acquire_event acquire_event;
+ u64 tmp; /* this is required for casting... */
+
+ tmp = raw_field_value(event, "lockdep_addr", data);
+ acquire_event.addr = (void *)tmp;
+ acquire_event.name = (char *)raw_field_ptr(event, "name", data);
+ acquire_event.file = (char *)raw_field_ptr(event, "file", data);
+ acquire_event.line =
+ (unsigned int)raw_field_value(event, "line", data);
+
+ if (trace_handler->acquire_event) {
+ trace_handler->acquire_event(&acquire_event,
+ event, cpu, timestamp, thread);
+ }
+}
+
+static void
+process_lock_acquired_event(void *data,
+ struct event *event __used,
+ int cpu __used,
+ u64 timestamp __used,
+ struct thread *thread __used)
+{
+ struct trace_acquired_event acquired_event;
+ u64 tmp; /* this is required for casting... */
+
+ tmp = raw_field_value(event, "lockdep_addr", data);
+ acquired_event.addr = (void *)tmp;
+ acquired_event.name = (char *)raw_field_ptr(event, "name", data);
+ acquired_event.file = (char *)raw_field_ptr(event, "file", data);
+ acquired_event.line =
+ (unsigned int)raw_field_value(event, "line", data);
+
+ if (trace_handler->acquired_event) {
+ trace_handler->acquired_event(&acquired_event,
+ event, cpu, timestamp, thread);
+ }
+}
+
+static void
+process_lock_contended_event(void *data,
+ struct event *event __used,
+ int cpu __used,
+ u64 timestamp __used,
+ struct thread *thread __used)
+{
+ struct trace_contended_event contended_event;
+ u64 tmp; /* this is required for casting... */
+
+ tmp = raw_field_value(event, "lockdep_addr", data);
+ contended_event.addr = (void *)tmp;
+ contended_event.name = (char *)raw_field_ptr(event, "name", data);
+ contended_event.file = (char *)raw_field_ptr(event, "file", data);
+ contended_event.line =
+ (unsigned int)raw_field_value(event, "line", data);
+
+ if (trace_handler->contended_event) {
+ trace_handler->contended_event(&contended_event,
+ event, cpu, timestamp, thread);
+ }
+}
+
+static void
+process_lock_release_event(void *data,
+ struct event *event __used,
+ int cpu __used,
+ u64 timestamp __used,
+ struct thread *thread __used)
+{
+ struct trace_release_event release_event;
+ u64 tmp; /* this is required for casting... */
+
+ tmp = raw_field_value(event, "lockdep_addr", data);
+ release_event.addr = (void *)tmp;
+ release_event.name = (char *)raw_field_ptr(event, "name", data);
+ release_event.file = (char *)raw_field_ptr(event, "file", data);
+ release_event.line =
+ (unsigned int)raw_field_value(event, "line", data);
+
+ if (trace_handler->release_event) {
+ trace_handler->release_event(&release_event,
+ event, cpu, timestamp, thread);
+ }
+}
+
+static void
+process_raw_event(void *data, int cpu,
+ u64 timestamp, struct thread *thread)
+{
+ struct event *event;
+ int type;
+
+ type = trace_parse_common_type(data);
+ event = trace_find_event(type);
+
+ if (!strcmp(event->name, "lock_acquire"))
+ process_lock_acquire_event(data, event, cpu, timestamp, thread);
+ if (!strcmp(event->name, "lock_acquired"))
+ process_lock_acquired_event(data, event, cpu, timestamp, thread);
+ if (!strcmp(event->name, "lock_contended"))
+ process_lock_contended_event(data, event, cpu, timestamp, thread);
+ if (!strcmp(event->name, "lock_release"))
+ process_lock_release_event(data, event, cpu, timestamp, thread);
+}
+
+static int process_sample_event(event_t *event, struct perf_session *session)
+{
+ struct thread *thread;
+ struct sample_data data;
+
+ bzero(&data, sizeof(struct sample_data));
+ event__parse_sample(event, session->sample_type, &data);
+ thread = perf_session__findnew(session, data.pid);
+
+ dump_printf("(IP, %d): %d/%d: %p period: %llu\n",
+ event->header.misc,
+ data.pid, data.tid, (void *)data.ip, data.period);
+
+ if (thread == NULL) {
+ pr_debug("problem processing %d event, skipping it.\n",
+ event->header.type);
+ return -1;
+ }
+
+ dump_printf(" ... thread: %s:%d\n", thread->comm, thread->pid);
+
+ if (profile_cpu != -1 && profile_cpu != (int) data.cpu)
+ return 0;
+
+ process_raw_event(data.raw_data, data.cpu, data.time, thread);
+
+ return 0;
+}
+
+/* TODO: various way to print, coloring, nano or milli sec */
+static void print_result(void)
+{
+ struct lock_stat *st;
+ char cut_name[20];
+
+ printf("%18s ", "ID");
+ printf("%20s ", "Name");
+ printf("%10s ", "acquired");
+ printf("%10s ", "contended");
+
+ printf("%15s ", "total wait (ns)");
+ printf("%15s ", "max wait (ns)");
+ printf("%15s ", "min wait (ns)");
+
+ printf("\n\n");
+
+ while ((st = pop_from_result())) {
+ bzero(cut_name, 20);
+
+ printf("%p ", st->addr);
+
+ if (strlen(st->name) < 16) {
+ /* output raw name */
+ printf("%20s ", st->name);
+ } else {
+ strncpy(cut_name, st->name, 16);
+ cut_name[16] = '.';
+ cut_name[17] = '.';
+ cut_name[18] = '.';
+ cut_name[19] = '\0';
+ /* cut off name for saving output style */
+ printf("%20s ", cut_name);
+ }
+
+ printf("%10u ", st->nr_acquired);
+ printf("%10u ", st->nr_contended);
+
+ printf("%15llu ", st->wait_time_total);
+ printf("%15llu ", st->wait_time_max);
+ printf("%15llu ", st->wait_time_min == ULLONG_MAX ?
+ 0 : st->wait_time_min);
+ printf("\n");
+ }
+}
+
+static void dump_map(void)
+{
+ unsigned int i;
+ struct lock_stat *st;
+
+ for (i = 0; i < LOCKHASH_SIZE; i++) {
+ list_for_each_entry(st, &lockhash_table[i], hash_entry) {
+ printf("%p: %s (src: %s, line: %u)\n",
+ st->addr, st->name, st->file, st->line);
+ }
+ }
+}
+
+static struct perf_event_ops eops = {
+ .sample = process_sample_event,
+ .comm = event__process_comm,
+};
+
+static struct perf_session *session;
+
+static int read_events(void)
+{
+ session = perf_session__new(input_name, O_RDONLY, 0);
+ if (!session)
+ die("Initializing perf session failed\n");
+
+ return perf_session__process_events(session, &eops);
+}
+
+static void sort_result(void)
+{
+ unsigned int i;
+ struct lock_stat *st;
+
+ for (i = 0; i < LOCKHASH_SIZE; i++) {
+ list_for_each_entry(st, &lockhash_table[i], hash_entry) {
+ insert_to_result(st, compare);
+ }
+ }
+}
+
+static void __cmd_prof(void)
+{
+ setup_pager();
+ select_key();
+ read_events();
+ sort_result();
+ print_result();
+}
+
+static const char * const prof_usage[] = {
+ "perf sched prof [<options>]",
+ NULL
+};
+
+static const struct option prof_options[] = {
+ OPT_STRING('k', "key", &sort_key, "acquired",
+ "key for sorting"),
+ /* TODO: type */
+ OPT_END()
+};
+
+static const char * const lock_usage[] = {
+ "perf lock [<options>] {record|trace|prof}",
+ NULL
+};
+
+static const struct option lock_options[] = {
+ OPT_STRING('i', "input", &input_name, "file",
+ "input file name"),
+ OPT_BOOLEAN('v', "verbose", &verbose,
+ "be more verbose (show symbol address, etc)"),
+ OPT_BOOLEAN('D', "dump-raw-trace", &dump_trace,
+ "dump raw trace in ASCII"),
+ OPT_END()
+};
+
+static const char *record_args[] = {
+ "record",
+ "-a",
+ "-R",
+ "-M",
+ "-f",
+ "-m", "1024",
+ "-c", "1",
+ "-e", "lock:lock_acquire:r",
+ "-e", "lock:lock_acquired:r",
+ "-e", "lock:lock_contended:r",
+ "-e", "lock:lock_release:r",
+};
+
+static int __cmd_record(int argc, const char **argv)
+{
+ unsigned int rec_argc, i, j;
+ const char **rec_argv;
+
+ rec_argc = ARRAY_SIZE(record_args) + argc - 1;
+ rec_argv = calloc(rec_argc + 1, sizeof(char *));
+
+ for (i = 0; i < ARRAY_SIZE(record_args); i++)
+ rec_argv[i] = strdup(record_args[i]);
+
+ for (j = 1; j < (unsigned int)argc; j++, i++)
+ rec_argv[i] = argv[j];
+
+ BUG_ON(i != rec_argc);
+
+ return cmd_record(i, rec_argv, NULL);
+}
+
+int cmd_lock(int argc, const char **argv, const char *prefix __used)
+{
+ unsigned int i;
+
+ symbol__init();
+ for (i = 0; i < LOCKHASH_SIZE; i++)
+ INIT_LIST_HEAD(lockhash_table + i);
+
+ argc = parse_options(argc, argv, lock_options, lock_usage,
+ PARSE_OPT_STOP_AT_NON_OPTION);
+ if (!argc)
+ usage_with_options(lock_usage, lock_options);
+
+ if (!strncmp(argv[0], "rec", 3)) {
+ return __cmd_record(argc, argv);
+ } else if (!strncmp(argv[0], "prof", 4)) {
+ trace_handler = &prof_lock_ops;
+ if (argc) {
+ argc = parse_options(argc, argv,
+ prof_options, prof_usage, 0);
+ if (argc)
+ usage_with_options(prof_usage, prof_options);
+ }
+ __cmd_prof();
+ } else if (!strcmp(argv[0], "trace")) {
+ /* Aliased to 'perf trace' */
+ return cmd_trace(argc, argv, prefix);
+ } else if (!strcmp(argv[0], "map")) {
+ /* recycling prof_lock_ops */
+ trace_handler = &prof_lock_ops;
+ setup_pager();
+ read_events();
+ dump_map();
+ } else {
+ usage_with_options(lock_usage, lock_options);
+ }
+
+ return 0;
+}
diff --git a/tools/perf/builtin.h b/tools/perf/builtin.h
index dee97cf..10fe49e 100644
--- a/tools/perf/builtin.h
+++ b/tools/perf/builtin.h
@@ -31,5 +31,6 @@ extern int cmd_trace(int argc, const char **argv, const char *prefix);
extern int cmd_version(int argc, const char **argv, const char *prefix);
extern int cmd_probe(int argc, const char **argv, const char *prefix);
extern int cmd_kmem(int argc, const char **argv, const char *prefix);
+extern int cmd_lock(int argc, const char **argv, const char *prefix);

#endif
diff --git a/tools/perf/perf.c b/tools/perf/perf.c
index 05c861c..d5a29e6 100644
--- a/tools/perf/perf.c
+++ b/tools/perf/perf.c
@@ -302,6 +302,7 @@ static void handle_internal_command(int argc, const char **argv)
{ "sched", cmd_sched, 0 },
{ "probe", cmd_probe, 0 },
{ "kmem", cmd_kmem, 0 },
+ { "lock", cmd_lock, 0 },
};
unsigned int i;
static const char ext[] = STRIP_EXTENSION;
--
1.6.5.2

2010-01-22 13:40:40

by Hitoshi Mitake

Subject: [PATCH 10/12] perf lock: Enhance information of lock trace events

Now lock trace events carry the address, __FILE__, and __LINE__ of their lockdep_map.
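
With these fields, the TP_printk() output of e.g. lock_acquire changes
roughly as follows (the address, lock name, and source location below are
made up for illustration):

    before: lock_acquire: read rcu_read_lock
    after:  lock_acquire: 0xffff8800b6304348 read rcu_read_lock kernel/rcupdate.c:68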

Signed-off-by: Hitoshi Mitake <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Paul Mackerras <[email protected]>
Cc: Frederic Weisbecker <[email protected]>
---
include/trace/events/lock.h | 49 ++++++++++++++++++++++++++++++++++--------
1 files changed, 39 insertions(+), 10 deletions(-)

diff --git a/include/trace/events/lock.h b/include/trace/events/lock.h
index a870ba1..3eef226 100644
--- a/include/trace/events/lock.h
+++ b/include/trace/events/lock.h
@@ -20,16 +20,23 @@ TRACE_EVENT(lock_acquire,
TP_STRUCT__entry(
__field(unsigned int, flags)
__string(name, lock->name)
+ __field(void *, lockdep_addr)
+ __string(file, lock->file)
+ __field(unsigned int, line)
),

TP_fast_assign(
__entry->flags = (trylock ? 1 : 0) | (read ? 2 : 0);
__assign_str(name, lock->name);
+ __entry->lockdep_addr = lock;
+ __assign_str(file, lock->file);
+ __entry->line = lock->line;
),

- TP_printk("%s%s%s", (__entry->flags & 1) ? "try " : "",
+ TP_printk("%p %s%s%s %s:%u", __entry->lockdep_addr,
+ (__entry->flags & 1) ? "try " : "",
(__entry->flags & 2) ? "read " : "",
- __get_str(name))
+ __get_str(name), __get_str(file), __entry->line)
);

TRACE_EVENT(lock_release,
@@ -40,13 +47,21 @@ TRACE_EVENT(lock_release,

TP_STRUCT__entry(
__string(name, lock->name)
+ __field(void *, lockdep_addr)
+ __string(file, lock->file)
+ __field(unsigned int, line)
),

TP_fast_assign(
__assign_str(name, lock->name);
+ __entry->lockdep_addr = lock;
+ __assign_str(file, lock->file);
+ __entry->line = lock->line;
),

- TP_printk("%s", __get_str(name))
+ TP_printk("%p %s %s:%u",
+ __entry->lockdep_addr, __get_str(name),
+ __get_str(file), __entry->line)
);

#ifdef CONFIG_LOCK_STAT
@@ -59,13 +74,21 @@ TRACE_EVENT(lock_contended,

TP_STRUCT__entry(
__string(name, lock->name)
+ __field(void *, lockdep_addr)
+ __string(file, lock->file)
+ __field(unsigned int, line)
),

TP_fast_assign(
__assign_str(name, lock->name);
+ __entry->lockdep_addr = lock;
+ __assign_str(file, lock->file);
+ __entry->line = lock->line;
),

- TP_printk("%s", __get_str(name))
+ TP_printk("%p %s %s:%u",
+ __entry->lockdep_addr, __get_str(name),
+ __get_str(file), __entry->line)
);

TRACE_EVENT(lock_acquired,
@@ -75,16 +98,22 @@ TRACE_EVENT(lock_acquired,

TP_STRUCT__entry(
__string(name, lock->name)
- __field(unsigned long, wait_usec)
- __field(unsigned long, wait_nsec_rem)
+ __field(s64, wait_nsec)
+ __field(void *, lockdep_addr)
+ __string(file, lock->file)
+ __field(unsigned int, line)
),
+
TP_fast_assign(
__assign_str(name, lock->name);
- __entry->wait_nsec_rem = do_div(waittime, NSEC_PER_USEC);
- __entry->wait_usec = (unsigned long) waittime;
+ __entry->wait_nsec = waittime;
+ __entry->lockdep_addr = lock;
+ __assign_str(file, lock->file);
+ __entry->line = lock->line;
),
- TP_printk("%s (%lu.%03lu us)", __get_str(name), __entry->wait_usec,
- __entry->wait_nsec_rem)
+ TP_printk("%p %s %s:%u (%llu ns)", __entry->lockdep_addr,
+ __get_str(name), __get_str(file), __entry->line,
+ __entry->wait_nsec)
);

#endif
--
1.6.5.2

2010-01-22 13:40:45

by Hitoshi Mitake

Subject: [PATCH 08/12] lockdep: Add file and line to initialize sequence of mutex

Like the other patches in this series, this patch adds
__FILE__ and __LINE__ to the lockdep_map of mutex.

Signed-off-by: Hitoshi Mitake <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Paul Mackerras <[email protected]>
Cc: Frederic Weisbecker <[email protected]>
---
include/linux/mutex-debug.h | 2 +-
include/linux/mutex.h | 12 +++++++++---
kernel/mutex-debug.c | 5 +++--
kernel/mutex-debug.h | 3 ++-
kernel/mutex.c | 5 +++--
kernel/mutex.h | 2 +-
6 files changed, 19 insertions(+), 10 deletions(-)

diff --git a/include/linux/mutex-debug.h b/include/linux/mutex-debug.h
index 731d77d..f86bf4e 100644
--- a/include/linux/mutex-debug.h
+++ b/include/linux/mutex-debug.h
@@ -15,7 +15,7 @@
do { \
static struct lock_class_key __key; \
\
- __mutex_init((mutex), #mutex, &__key); \
+ __mutex_init((mutex), #mutex, &__key, __FILE__, __LINE__); \
} while (0)

extern void mutex_destroy(struct mutex *lock);
diff --git a/include/linux/mutex.h b/include/linux/mutex.h
index 878cab4..ce9082a 100644
--- a/include/linux/mutex.h
+++ b/include/linux/mutex.h
@@ -82,14 +82,19 @@ struct mutex_waiter {
do { \
static struct lock_class_key __key; \
\
- __mutex_init((mutex), #mutex, &__key); \
+ __mutex_init((mutex), #mutex, &__key, \
+ __FILE__, __LINE__); \
} while (0)
# define mutex_destroy(mutex) do { } while (0)
#endif

#ifdef CONFIG_DEBUG_LOCK_ALLOC
# define __DEP_MAP_MUTEX_INITIALIZER(lockname) \
- , .dep_map = { .name = #lockname }
+ , .dep_map = { \
+ .file = __FILE__, \
+ .line = __LINE__, \
+ .name = #lockname, \
+ }
#else
# define __DEP_MAP_MUTEX_INITIALIZER(lockname)
#endif
@@ -105,7 +110,8 @@ do { \
struct mutex mutexname = __MUTEX_INITIALIZER(mutexname)

extern void __mutex_init(struct mutex *lock, const char *name,
- struct lock_class_key *key);
+ struct lock_class_key *key,
+ const char *file, unsigned int line);

/**
* mutex_is_locked - is the mutex locked
diff --git a/kernel/mutex-debug.c b/kernel/mutex-debug.c
index ec815a9..4a1321b 100644
--- a/kernel/mutex-debug.c
+++ b/kernel/mutex-debug.c
@@ -81,14 +81,15 @@ void debug_mutex_unlock(struct mutex *lock)
}

void debug_mutex_init(struct mutex *lock, const char *name,
- struct lock_class_key *key)
+ struct lock_class_key *key,
+ const char *file, unsigned int line)
{
#ifdef CONFIG_DEBUG_LOCK_ALLOC
/*
* Make sure we are not reinitializing a held lock:
*/
debug_check_no_locks_freed((void *)lock, sizeof(*lock));
- lockdep_init_map(&lock->dep_map, name, key, 0);
+ __lockdep_init_map(&lock->dep_map, name, key, 0, file, line);
#endif
lock->magic = lock;
}
diff --git a/kernel/mutex-debug.h b/kernel/mutex-debug.h
index 57d527a..154a909 100644
--- a/kernel/mutex-debug.h
+++ b/kernel/mutex-debug.h
@@ -25,7 +25,8 @@ extern void mutex_remove_waiter(struct mutex *lock, struct mutex_waiter *waiter,
struct thread_info *ti);
extern void debug_mutex_unlock(struct mutex *lock);
extern void debug_mutex_init(struct mutex *lock, const char *name,
- struct lock_class_key *key);
+ struct lock_class_key *key,
+ const char *file, unsigned int line);

static inline void mutex_set_owner(struct mutex *lock)
{
diff --git a/kernel/mutex.c b/kernel/mutex.c
index 632f04c..80e3876 100644
--- a/kernel/mutex.c
+++ b/kernel/mutex.c
@@ -46,14 +46,15 @@
* It is not allowed to initialize an already locked mutex.
*/
void
-__mutex_init(struct mutex *lock, const char *name, struct lock_class_key *key)
+__mutex_init(struct mutex *lock, const char *name, struct lock_class_key *key,
+ const char *file, unsigned int line)
{
atomic_set(&lock->count, 1);
spin_lock_init(&lock->wait_lock);
INIT_LIST_HEAD(&lock->wait_list);
mutex_clear_owner(lock);

- debug_mutex_init(lock, name, key);
+ debug_mutex_init(lock, name, key, file, line);
}

EXPORT_SYMBOL(__mutex_init);
diff --git a/kernel/mutex.h b/kernel/mutex.h
index 67578ca..81914e6 100644
--- a/kernel/mutex.h
+++ b/kernel/mutex.h
@@ -40,7 +40,7 @@ static inline void mutex_clear_owner(struct mutex *lock)
#define debug_mutex_free_waiter(waiter) do { } while (0)
#define debug_mutex_add_waiter(lock, waiter, ti) do { } while (0)
#define debug_mutex_unlock(lock) do { } while (0)
-#define debug_mutex_init(lock, name, key) do { } while (0)
+#define debug_mutex_init(lock, name, key, file, line) do { } while (0)

static inline void
debug_mutex_lock_common(struct mutex *lock, struct mutex_waiter *waiter)
--
1.6.5.2

2010-01-22 13:40:59

by Hitoshi Mitake

Subject: [PATCH 12/12] Revert "perf record: Intercept all events"

This reverts commit f5a2c3dce03621b55f84496f58adc2d1a87ca16f.

This patch is required to make "perf lock rec" work.
Commit f5a2c3dce0 changed write_event() in builtin-record.c,
and the changed write_event() sometimes doesn't stop with perf lock rec.

I'm researching what makes write_event() loop infinitely,
and noticed that there are some events with event_t->header.size == 0.
But the details of the problem,
such as what kind of events these are, aren't clear...
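
The failure mode is visible in the loop being removed below: it advances
the cursor by ev->header.size each iteration, so a zero-sized event means
the cursor never moves. A reduced sketch of the hazard (not the exact code):

    event_t *ev = buf;
    size_t processed = ev->header.size;

    do {
            /* ... per-event work ... */
            ev = ((void *)ev) + ev->header.size; /* advances by 0...    */
            processed += ev->header.size;        /* ...and adds 0...    */
    } while (processed < size);                  /* ...never terminates */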

If you know something related to this problem,
could you tell me, Arnaldo?

Signed-off-by: Hitoshi Mitake <[email protected]>
Cc: Frédéric Weisbecker <[email protected]>
Cc: Mike Galbraith <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Paul Mackerras <[email protected]>
---
tools/perf/builtin-record.c | 28 ++++++++++------------------
1 files changed, 10 insertions(+), 18 deletions(-)

diff --git a/tools/perf/builtin-record.c b/tools/perf/builtin-record.c
index 7bb9ca1..614fa9a 100644
--- a/tools/perf/builtin-record.c
+++ b/tools/perf/builtin-record.c
@@ -113,24 +113,16 @@ static void write_output(void *buf, size_t size)

static void write_event(event_t *buf, size_t size)
{
- size_t processed_size = buf->header.size;
- event_t *ev = buf;
-
- do {
- /*
- * Add it to the list of DSOs, so that when we finish this
- * record session we can pick the available build-ids.
- */
- if (ev->header.type == PERF_RECORD_MMAP) {
- struct list_head *head = &dsos__user;
- if (ev->header.misc == 1)
- head = &dsos__kernel;
- __dsos__findnew(head, ev->mmap.filename);
- }
-
- ev = ((void *)ev) + ev->header.size;
- processed_size += ev->header.size;
- } while (processed_size < size);
+ /*
+ * Add it to the list of DSOs, so that when we finish this
+ * record session we can pick the available build-ids.
+ */
+ if (buf->header.type == PERF_RECORD_MMAP) {
+ struct list_head *head = &dsos__user;
+ if (buf->mmap.header.misc == 1)
+ head = &dsos__kernel;
+ __dsos__findnew(head, buf->mmap.filename);
+ }

write_output(buf, size);
}
--
1.6.5.2

2010-01-22 13:41:25

by Hitoshi Mitake

Subject: [PATCH 09/12] lockdep: Fix the way to initialize class_mutex for information of file and line

This patch adds __FILE__ and __LINE__ to the arguments passed to
__mutex_init() called in __class_register().
This affects the lockdep_map of class_mutex in struct class_private,
and these will be used to make lock trace events more human-friendly.

Signed-off-by: Hitoshi Mitake <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Paul Mackerras <[email protected]>
Cc: Frederic Weisbecker <[email protected]>
Acked-by: Greg Kroah-Hartman <[email protected]>
---
drivers/base/class.c | 3 ++-
1 files changed, 2 insertions(+), 1 deletions(-)

diff --git a/drivers/base/class.c b/drivers/base/class.c
index 161746d..cc0630f 100644
--- a/drivers/base/class.c
+++ b/drivers/base/class.c
@@ -162,7 +162,8 @@ int __class_register(struct class *cls, struct lock_class_key *key)
klist_init(&cp->class_devices, klist_class_dev_get, klist_class_dev_put);
INIT_LIST_HEAD(&cp->class_interfaces);
kset_init(&cp->class_dirs);
- __mutex_init(&cp->class_mutex, "struct class mutex", key);
+ __mutex_init(&cp->class_mutex, "struct class mutex", key,
+ __FILE__, __LINE__);
error = kobject_set_name(&cp->class_subsys.kobj, "%s", cls->name);
if (error) {
kfree(cp);
--
1.6.5.2

2010-01-22 13:41:32

by Hitoshi Mitake

Subject: [PATCH 07/12] lockdep: Add file and line to initialize sequence of rwsem (x86)

This patch adds __FILE__ and __LINE__ to the lockdep_map of rw_semaphore.
It only affects the x86 architecture-dependent rwsem.

If this patch is accepted, I'll prepare and send patches that add
__FILE__ and __LINE__ to the rw_semaphore of the other architectures
that provide an architecture-dependent rwsem.h.

Signed-off-by: Hitoshi Mitake <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Paul Mackerras <[email protected]>
Cc: Frederic Weisbecker <[email protected]>
Cc: Thomas Gleixner <[email protected]>
---
arch/x86/include/asm/rwsem.h | 9 +++++++--
1 files changed, 7 insertions(+), 2 deletions(-)

diff --git a/arch/x86/include/asm/rwsem.h b/arch/x86/include/asm/rwsem.h
index ca7517d..24cbe3a 100644
--- a/arch/x86/include/asm/rwsem.h
+++ b/arch/x86/include/asm/rwsem.h
@@ -74,7 +74,12 @@ struct rw_semaphore {
};

#ifdef CONFIG_DEBUG_LOCK_ALLOC
-# define __RWSEM_DEP_MAP_INIT(lockname) , .dep_map = { .name = #lockname }
+# define __RWSEM_DEP_MAP_INIT(lockname) \
+ , .dep_map = { \
+ .file = __FILE__, \
+ .line = __LINE__, \
+ .name = #lockname, \
+ }
#else
# define __RWSEM_DEP_MAP_INIT(lockname)
#endif
@@ -96,7 +101,7 @@ extern void __init_rwsem(struct rw_semaphore *sem, const char *name,
do { \
static struct lock_class_key __key; \
\
- __init_rwsem((sem), #sem, &__key); \
+ __init_rwsem((sem), #sem, &__key, __FILE__, __LINE__); \
} while (0)

/*
--
1.6.5.2

2010-01-22 13:42:10

by Hitoshi Mitake

Subject: [PATCH 04/12] lockdep: Add file and line to initialize sequence of spinlock

spinlock has two ways to be initialized:
1: the DEFINE_{SPIN,RW}LOCK macros for statically defined locks
2: the {spin_,rw}lock_init() functions for locks on dynamically allocated memory

This patch modifies both initialization sequences
to add __FILE__ and __LINE__ to the lockdep_map.

Signed-off-by: Hitoshi Mitake <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Paul Mackerras <[email protected]>
Cc: Frederic Weisbecker <[email protected]>
---
include/linux/spinlock.h | 6 ++++--
include/linux/spinlock_types.h | 6 +++++-
lib/spinlock_debug.c | 6 ++++--
3 files changed, 13 insertions(+), 5 deletions(-)

diff --git a/include/linux/spinlock.h b/include/linux/spinlock.h
index 8608821..f8b8363 100644
--- a/include/linux/spinlock.h
+++ b/include/linux/spinlock.h
@@ -90,12 +90,14 @@

#ifdef CONFIG_DEBUG_SPINLOCK
extern void __raw_spin_lock_init(raw_spinlock_t *lock, const char *name,
- struct lock_class_key *key);
+ struct lock_class_key *key,
+ const char *file, unsigned int line);
# define raw_spin_lock_init(lock) \
do { \
static struct lock_class_key __key; \
\
- __raw_spin_lock_init((lock), #lock, &__key); \
+ __raw_spin_lock_init((lock), #lock, &__key, \
+ __FILE__, __LINE__); \
} while (0)

#else
diff --git a/include/linux/spinlock_types.h b/include/linux/spinlock_types.h
index 851b778..6f2d558 100644
--- a/include/linux/spinlock_types.h
+++ b/include/linux/spinlock_types.h
@@ -36,7 +36,11 @@ typedef struct raw_spinlock {
#define SPINLOCK_OWNER_INIT ((void *)-1L)

#ifdef CONFIG_DEBUG_LOCK_ALLOC
-# define SPIN_DEP_MAP_INIT(lockname) .dep_map = { .name = #lockname }
+# define SPIN_DEP_MAP_INIT(lockname) .dep_map = { \
+ .name = #lockname, \
+ .file = __FILE__, \
+ .line = __LINE__, \
+ }
#else
# define SPIN_DEP_MAP_INIT(lockname)
#endif
diff --git a/lib/spinlock_debug.c b/lib/spinlock_debug.c
index 4755b98..81fa789 100644
--- a/lib/spinlock_debug.c
+++ b/lib/spinlock_debug.c
@@ -14,14 +14,16 @@
#include <linux/module.h>

void __raw_spin_lock_init(raw_spinlock_t *lock, const char *name,
- struct lock_class_key *key)
+ struct lock_class_key *key,
+ const char *file, unsigned int line)
{
#ifdef CONFIG_DEBUG_LOCK_ALLOC
/*
* Make sure we are not reinitializing a held lock:
*/
debug_check_no_locks_freed((void *)lock, sizeof(*lock));
- lockdep_init_map(&lock->dep_map, name, key, 0);
+ __lockdep_init_map(&lock->dep_map, name, key, 0,
+ file, line);
#endif
lock->raw_lock = (arch_spinlock_t)__ARCH_SPIN_LOCK_UNLOCKED;
lock->magic = SPINLOCK_MAGIC;
--
1.6.5.2

2010-01-22 16:43:51

by Jason Baron

Subject: Re: [PATCH 00/12] perf lock: New subcommand "perf lock", for analyzing lock statistics

On Fri, Jan 22, 2010 at 10:39:01PM +0900, Hitoshi Mitake wrote:
> Adding new subcommand "perf lock" to perf.
>

looks like a really interesting patch! Does this patch mean that we can
potentially remove CONFIG_LOCK_STAT, now that the functionality is
available via 'perf'? Another desire for me has been to be able to run
lockstat on a production box. This has not been possible before without
incurring the performance penalty of lockdep. However, with the 'perf'
tool's ability to be turned on and off, perhaps there is (or we can build)
a mode for lockdep that has lower overhead for this use case?

thanks,

-Jason

2010-01-22 16:49:00

by Peter Zijlstra

Subject: Re: [PATCH 00/12] perf lock: New subcommand "perf lock", for analyzing lock statistics

On Fri, 2010-01-22 at 11:42 -0500, Jason Baron wrote:
> On Fri, Jan 22, 2010 at 10:39:01PM +0900, Hitoshi Mitake wrote:
> > Adding new subcommand "perf lock" to perf.
> >
>
> looks like a really interesting patch! Does this patch mean that we can
> potentially remove CONFIG_LOCK_STAT, now that the functionality is
> available via 'perf'? Another desire for me has been to be able to run
> lockstat on a production box. This has not been possible before without
> incurring the performance penalty of lockdep. However, with the 'perf'
> tool's ability to be turned on and off, perhaps there is (or we can build)
> a mode for lockdep that has lower overhead for this use case?

boot with: prove_locking=0

This patch still needs all the hooks and bits from lockdep; no way you
want all that in a production kernel.

2010-01-22 16:53:13

by Peter Zijlstra

Subject: Re: [PATCH 00/12] perf lock: New subcommand "perf lock", for analyzing lock statistics

On Fri, 2010-01-22 at 17:48 +0100, Peter Zijlstra wrote:
> On Fri, 2010-01-22 at 11:42 -0500, Jason Baron wrote:
> > On Fri, Jan 22, 2010 at 10:39:01PM +0900, Hitoshi Mitake wrote:
> > > Adding new subcommand "perf lock" to perf.
> > >
> >
> > looks like a really interesting patch! Does this patch mean that we can
> > potentially remove CONFIG_LOCK_STAT, now that the functionality is
> > available via 'perf'? Another desire for me has been to be able to run
> > lockstat on a production box. This has not been possible before without
> > incurring the performance penalty of lockdep. However, with the 'perf'
> > tool's ability to be turned on and off, perhaps there is (or we can build)
> > a mode for lockdep that has lower overhead for this use case?
>
> boot with: prove_locking=0
>
> This patch still needs all the hooks and bits from lockdep; no way you
> want all that in a production kernel.

Also, I suspect lockstat is still a lot faster, since it doesn't need
to transfer gobs of data but builds the histogram in-kernel.

Having the ability to do in-kernel histograms with perf is something
I've thought about but not yet managed to think of a nice way of doing
it.

2010-01-23 06:57:55

by Ingo Molnar

Subject: Re: [PATCH 00/12] perf lock: New subcommand "perf lock", for analyzing lock statistics


* Peter Zijlstra <[email protected]> wrote:

> On Fri, 2010-01-22 at 17:48 +0100, Peter Zijlstra wrote:
> > On Fri, 2010-01-22 at 11:42 -0500, Jason Baron wrote:
> > > On Fri, Jan 22, 2010 at 10:39:01PM +0900, Hitoshi Mitake wrote:
> > > > Adding new subcommand "perf lock" to perf.
> > > >
> > >
> > > looks like a really interesting patch! Does this patch mean that we can
> > > potentially remove CONFIG_LOCK_STAT, now that the functionality is
> > > available via 'perf'? Another desire for me has been to be able to run
> > > lockstat on a production box. This has not been possible before without
> > > incurring the performance penalty of lockdep. However, with the 'perf'
> > > tool's ability to be turned on and off, perhaps there is (or we can build)
> > > a mode for lockdep that has lower overhead for this use case?
> >
> > boot with: prove_locking=0
> >
> > This patch still needs all the hooks and bits from lockdep; no way you
> > want all that in a production kernel.
>
> Also, I suspect lockstat is still a lot faster, since it doesn't need to
> transfer gobs of data but builds the histogram in-kernel.
>
> Having the ability to do in-kernel histograms with perf is something I've
> thought about but not yet managed to think of a nice way of doing it.

Yeah, that has come up in the past and it would be excellent to have. That
would allow the removal of the special-purpose lockstat /proc output.

Ingo

2010-01-25 02:24:23

by Arnaldo Carvalho de Melo

Subject: Re: [PATCH 12/12] Revert "perf record: Intercept all events"

On Fri, Jan 22, 2010 at 10:39:13PM +0900, Hitoshi Mitake wrote:
> This reverts commit f5a2c3dce03621b55f84496f58adc2d1a87ca16f.
>
> This patch is required to make "perf lock rec" work.
> Commit f5a2c3dce0 changed write_event() in builtin-record.c,
> and the changed write_event() sometimes doesn't stop with perf lock rec.
>
> I'm researching what makes write_event() loop infinitely,
> and noticed that there are some events with event_t->header.size == 0.
> But the details of the problem,
> such as what kind of events these are, aren't clear...
>
> If you know something related to this problem,
> could you tell me, Arnaldo?

Well, this will have to wait for somebody to remove the need for
intercepting those events; reverting this patch fixes your tool while
breaking others that then won't catch all the events.

I'll get 'perf regtest' out with some initial tests, then try to get a
proposal for injecting the buildid, if found in a DSO, via
PERF_RECORD_MMAP; let's see how this goes...

Best Regards,

- Arnaldo

BTW: I took longer to send a response to this question addressed to me
because I wasn't on the CC list :-)

2010-01-25 14:23:35

by Hitoshi Mitake

[permalink] [raw]
Subject: Re: [PATCH 00/12] perf lock: New subcommand "perf lock", for analyzing lock statistics

Hi Jason, thanks for your reply!

On Sat, Jan 23, 2010 at 01:42, Jason Baron <[email protected]> wrote:
> On Fri, Jan 22, 2010 at 10:39:01PM +0900, Hitoshi Mitake wrote:
>> Adding new subcommand "perf lock" to perf.
>>
>
> looks like a really intresting patch! Does this patch mean that we can
> potentially remove CONFIG_LOCK_STAT, now that the functionality is
> available via 'perf'? Another desire for me has been to able to run
> lockstat on production box. This has not been possible before without
> incurring the performance penalty of lockdep. however, with 'perf' tools
> ability to be turned on an off, perhaps there is (or we can build) a
> mode for lockdep that is lower overhead for this usecase?
>

As Peter said, this patch series depends on lockdep.
But checking dependencies and tracing lock events are
essentially different things, so I think dividing them is possible.

But perf depends on debugfs.
I don't know production boxes well;
do your production boxes turn on debugfs?
It seems that debugfs is not something for a production box...

And I'm interested in in-kernel histograms of locks.
Is there a requirement for them?
I have some ideas (not completely worked out yet).

Thanks,
Hitoshi

2010-01-25 14:47:42

by Hitoshi Mitake

[permalink] [raw]
Subject: Re: [PATCH 12/12] Revert "perf record: Intercept all events"

On Mon, Jan 25, 2010 at 11:23, Arnaldo Carvalho de Melo
<[email protected]> wrote:
> Em Fri, Jan 22, 2010 at 10:39:13PM +0900, Hitoshi Mitake escreveu:
>> This reverts commit f5a2c3dce03621b55f84496f58adc2d1a87ca16f.
>>
>> This patch is required for making "perf lock rec" work.
>> The commit f5a2c3dce0 changes write_event() of builtin-record.c .
>> And changed write_event() sometimes doesn't stop with perf lock rec.
>>
>> I'm researching what makes write_event() loop infinity,
>> and noticed that there are some events with event_t->header.size == 0.
>> But the detail of the problem,
>> like kind of these events, isn't clear...
>>
>> If you know something related to this problem,
>> could you tell me, Arnaldo?
>
> Well, this will have to wait for somebody to remove the need for
> intercepting those events, reverting this patch fixes your tool while
> breaking others that then won't catch all the events.

Yes, this patch is too self-serving and only a temporary solution.
I have to reconsider and modify 'perf lock'.

>
> I'll get 'perf regtest' out with some initial tests then try to get some
> proposal for injecting the buildid, if found in a DSO, via
> PERF_RECORD_MMAP, lets see how this goes...

What does "DSO" mean? Sorry, I'm not good at English...

>
> Best Regards,
>
> - Arnaldo
>
> BTW: I took longer to send a response to this question addressed to me
> because I wasn't on the CC list :-)
>
>

Oh, sorry... I don't know why I didn't add you to Cc or To :(
It was completely my mistake, and thanks for your reply!

Thanks
Hitoshi

2010-01-25 14:48:23

by Peter Zijlstra

[permalink] [raw]
Subject: Re: [PATCH 00/12] perf lock: New subcommand "perf lock", for analyzing lock statistics

On Mon, 2010-01-25 at 23:15 +0900, Hitoshi Mitake wrote:
>
> As Peter told, this patch series depends on lockdep.
> But, checking dependency and tracing lock events are
> essentially different things. So I think dividing these is possible.

Still you need quite a lot of instrumentation at the actual lock sites,
not something you'd want to have on your production machine.

> But, perf depends on debugfs.
> I don't know about production box well,
> does your production box turns on debugfs?
> It seems that debugfs is not a thing for production box...

debugfs isn't a problem to have.

> And I'm interesting in in-kernel histogram of locks.
> Is there requirement of it?
> I have some ideas (not done yet, completely).

No real concrete ideas yet... One possibility is to add a (range,
bucket_size) tuple to perf_event_attr and store the histogram in the
mmap() area instead of samples.

One difficulty is how to specify what to make that histogram over.

Anyway, it's all open still...
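
A rough sketch of how such a (range, bucket_size) tuple could be
expressed, purely as an illustration: the struct, the field names and
the helper below are all hypothetical, not an existing perf ABI.

#include <stdint.h>

/* Hypothetical histogram configuration: the kernel would bucket
 * sampled values in [min, max) and write per-bucket counts into the
 * mmap() area instead of emitting individual samples. */
struct perf_hist_attr {
	uint64_t min;		/* lowest value covered */
	uint64_t max;		/* values >= max land in the last bucket */
	uint64_t bucket_size;	/* width of each bucket */
};

/* Map a sampled value (e.g. lock wait time in ns) to a bucket index. */
static uint64_t hist_bucket(const struct perf_hist_attr *h, uint64_t val)
{
	if (val < h->min)
		return 0;
	if (val >= h->max)
		val = h->max - 1;
	return (val - h->min) / h->bucket_size;
}

The open question Peter raises, what quantity to histogram over, is
exactly what the min/max/bucket_size fields above cannot answer by
themselves.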

2010-01-25 14:50:29

by Peter Zijlstra

[permalink] [raw]
Subject: Re: [PATCH 12/12] Revert "perf record: Intercept all events"

On Mon, 2010-01-25 at 23:47 +0900, Hitoshi Mitake wrote:
>
> What does "DSO" mean? Sorry, I'm not good at English...

Dynamic Shared Object, basically dynamic executables and .so files.

2010-01-25 14:58:00

by Arnaldo Carvalho de Melo

[permalink] [raw]
Subject: Re: [PATCH 12/12] Revert "perf record: Intercept all events"

Em Mon, Jan 25, 2010 at 11:47:40PM +0900, Hitoshi Mitake escreveu:
> On Mon, Jan 25, 2010 at 11:23, Arnaldo Carvalho de Melo
> <[email protected]> wrote:
> > Em Fri, Jan 22, 2010 at 10:39:13PM +0900, Hitoshi Mitake escreveu:
> >> This reverts commit f5a2c3dce03621b55f84496f58adc2d1a87ca16f.
> >>
> >> This patch is required for making "perf lock rec" work.
> >> The commit f5a2c3dce0 changes write_event() of builtin-record.c .
> >> And changed write_event() sometimes doesn't stop with perf lock rec.
> >>
> >> I'm researching what makes write_event() loop infinity,
> >> and noticed that there are some events with event_t->header.size == 0.
> >> But the detail of the problem,
> >> like kind of these events, isn't clear...
> >>
> >> If you know something related to this problem,
> >> could you tell me, Arnaldo?
> >
> > Well, this will have to wait for somebody to remove the need for
> > intercepting those events, reverting this patch fixes your tool while
> > breaking others that then won't catch all the events.
>
> Yes, this patch is too egoistic thing and temporary solution.
> I have to consider and modify 'perf lock'.

Hey, don't get me wrong, the situation is fragile; either way something
will get broken, and that isn't your fault. It's just that we need some
sensible and non-racy way to inject the buildids at 'perf record' time.

The way I did it long ago, intercepting events in 'perf record' to
build a DSO list and then, at 'perf record' exit, inserting a table
into the perf.data file header, looks too intrusive now, so we need
some other way that doesn't have this problem and isn't racy.

> > I'll get 'perf regtest' out with some initial tests then try to get some
> > proposal for injecting the buildid, if found in a DSO, via
> > PERF_RECORD_MMAP, lets see how this goes...
>
> What does "DSO" mean? Sorry, I'm not good at English...

As Peter said, anything that is in an executable MMAP.

> >
> > Best Regards,
> >
> > - Arnaldo
> >
> > BTW: I took longer to send a response to this question addressed to me
> > because I wasn't on the CC list :-)
>
> Oh, sorry... I wonder why I didn't add you to Cc or To :(
> It is completely my mistake, and thanks for your reply!

:-) Best Regards,

- Arnaldo

2010-01-26 04:28:07

by Greg KH

[permalink] [raw]
Subject: Re: [PATCH 00/12] perf lock: New subcommand "perf lock", for analyzing lock statistics

On Mon, Jan 25, 2010 at 11:15:33PM +0900, Hitoshi Mitake wrote:
> But, perf depends on debugfs.
> I don't know about production box well,
> does your production box turns on debugfs?
> It seems that debugfs is not a thing for production box...

Not true, it is enabled in many "enterprise" distros these days.

thanks,

greg k-h

2010-01-26 07:04:43

by Hitoshi Mitake

[permalink] [raw]
Subject: Re: [PATCH 00/12] perf lock: New subcommand "perf lock", for analyzing lock statistics

On Mon, Jan 25, 2010 at 23:48, Peter Zijlstra <[email protected]> wrote:
> On Mon, 2010-01-25 at 23:15 +0900, Hitoshi Mitake wrote:
>>
>> As Peter told, this patch series depends on lockdep.
>> But, checking dependency and tracing lock events are
>> essentially different things. So I think dividing these is possible.
>
> Still you need quite a lot of instrumentation at the actual lock sites,
> not something you'd want to have on your production machine.
>
>> But, perf depends on debugfs.
>> I don't know about production box well,
>> does your production box turns on debugfs?
>> It seems that debugfs is not a thing for production box...
>
> debugfs isn't a problem to have.

I didn't know that, thanks Peter and Greg!

>
>> And I'm interesting in in-kernel histogram of locks.
>> Is there requirement of it?
>> I have some ideas (not done yet, completely).
>
> No real concrete ideas yet.. One possibility is to add a range,
> bucket_size tuple to perf_event_attr and store the histrogram in the
> mmap() area instead of samples.
>
> One difficulty is how to specify what to make that histogram over.
>
> Anyway, its all open still..
>
>
>

I think my previous patch will be helpful for this purpose:
http://lkml.org/lkml/2009/12/13/16
# This patch is too dangerous and too incomplete, please don't use it!

I think distributing numerical IDs to lock instances makes things easier.
Distributing IDs means that the kernel can know where the lock instances are.
Updating the ID information should be triggered by:

1) booting
Locks defined by DEFINE_SPINLOCK(), DEFINE_RWLOCK(), etc. can be
collected into a specific section and initialized there,
like event_trace_init() does in kernel/trace/trace_events.c
(a rough sketch of this follows below).

2) loading/unloading modules
Locks defined by DEFINE_SPINLOCK(), DEFINE_RWLOCK(), etc. can be in .ko files,
so hooking module loading and unloading will be required.

3) calling spin_lock_init(), rwlock_init(), etc.

It seems that the difficulty of drawing a histogram comes from the
running kernel's lack of knowledge of where the lock instances are,
doesn't it?
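
To make 1) concrete, here is a minimal sketch of the section trick,
modeled on how ftrace collects its events; DEFINE_SPINLOCK_TRACKED,
struct lock_entry and the section name are all hypothetical:

struct lock_entry {
	void		*addr;	/* address of the lock instance */
	const char	*name;	/* stringified variable name */
	unsigned int	id;	/* numerical ID handed out at boot */
};

/* Define a spinlock and drop a registry entry for it into a
 * dedicated section, so boot code can enumerate it later. */
#define DEFINE_SPINLOCK_TRACKED(x)					\
	DEFINE_SPINLOCK(x);						\
	static struct lock_entry __lock_entry_##x			\
	__attribute__((used, section("__lock_registry"))) = {		\
		.addr = &x,						\
		.name = #x,						\
	}

/* Boundary symbols the linker provides for the custom section. */
extern struct lock_entry __start___lock_registry[];
extern struct lock_entry __stop___lock_registry[];

static int __init lock_registry_init(void)
{
	struct lock_entry *e;
	unsigned int id = 0;

	/* Walk all statically defined locks, the way event_trace_init()
	 * walks the ftrace events section, and assign each one an ID. */
	for (e = __start___lock_registry; e < __stop___lock_registry; e++)
		e->id = id++;
	return 0;
}
core_initcall(lock_registry_init);

Module locks (case 2) would need the same walk over each module's
section at load time, and spin_lock_init() & co. (case 3) would
register their locks dynamically.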

Thanks,
Hitoshi

2010-01-26 07:23:53

by Hitoshi Mitake

[permalink] [raw]
Subject: Re: [PATCH 12/12] Revert "perf record: Intercept all events"

On Mon, Jan 25, 2010 at 23:57, Arnaldo Carvalho de Melo
<[email protected]> wrote:
> Em Mon, Jan 25, 2010 at 11:47:40PM +0900, Hitoshi Mitake escreveu:
>> On Mon, Jan 25, 2010 at 11:23, Arnaldo Carvalho de Melo
>> <[email protected]> wrote:
>> > Em Fri, Jan 22, 2010 at 10:39:13PM +0900, Hitoshi Mitake escreveu:
>> >> This reverts commit f5a2c3dce03621b55f84496f58adc2d1a87ca16f.
>> >>
>> >> This patch is required for making "perf lock rec" work.
>> >> The commit f5a2c3dce0 changes write_event() of builtin-record.c .
>> >> And changed write_event() sometimes doesn't stop with perf lock rec.
>> >>
>> >> I'm researching what makes write_event() loop infinity,
>> >> and noticed that there are some events with event_t->header.size == 0.
>> >> But the detail of the problem,
>> >> like kind of these events, isn't clear...
>> >>
>> >> If you know something related to this problem,
>> >> could you tell me, Arnaldo?
>> >
>> > Well, this will have to wait for somebody to remove the need for
>> > intercepting those events, reverting this patch fixes your tool while
>> > breaking others that then won't catch all the events.
>>
>> Yes, this patch is too egoistic thing and temporary solution.
>> I have to consider and modify 'perf lock'.
>
> Hey, don't get me wrong, the situation is fragile, either way something
> will get broken and that isn't your fault, its just that we need some
> sensible and non racy way to inject the buildids at 'perf record' time.

Thanks for correcting my misunderstanding :)

>
> The way I did it, long ago, intercepting events in 'perf record' to
> build a DSO list to then at 'perf record' exit to insert a table at the
> perf.data file header looks too intrusive now, so we need some other way
> that doesn't have this problem and its not racy.

I see. I'll look for some other way, too.
And if you find one, could you tell me?

>
>> > I'll get 'perf regtest' out with some initial tests then try to get some
>> > proposal for injecting the buildid, if found in a DSO, via
>> > PERF_RECORD_MMAP, lets see how this goes...
>>
>> What does "DSO" mean? Sorry, I'm not good at English...
>
> As Peter said, anything that that is on an executable MMAP.

I understood :) Thanks, Peter and Arnaldo.


Thanks,
Hitoshi

2010-01-28 06:51:30

by Hitoshi Mitake

[permalink] [raw]
Subject: Re: [PATCH 11/12] perf lock: New subcommand "perf lock", for analyzing lock statistics

On 2010年01月22日 22:39, Hitoshi Mitake wrote:

Hi Ingo,
I have a trivial but difficult problem with this 11th patch.

> +static void
> +process_lock_acquire_event(void *data,
> + struct event *event __used,
> + int cpu __used,
> + u64 timestamp __used,
> + struct thread *thread __used)
> +{
> + struct trace_acquire_event acquire_event;
> + u64 tmp; /* this is required for casting... */
> +
> + tmp = raw_field_value(event, "lockdep_addr", data);
> + acquire_event.addr = (void *)tmp; # <- type of addr is void *

In the above function, the statement:
raw_field_value(event, "lockdep_addr", data);
reads a member of the event whose type is void *.

But a direct assignment like:
acquire_event.addr = (void *)raw_field_value(event, "lockdep_addr",
data);
produces a warning, and warnings are treated as errors in perf's build
environment. GCC says "cast from function call of type ‘long long
unsigned int’ to non-matching type ‘void *’".
# Why is assigning tmp to acquire_event.addr allowed? It's mysterious...

So I declared "u64 tmp" as a temporary place for casting.
But then this tmp causes a compile error on x86_32,
because it is u64; tmp would have to be u32 on x86_32.

I have to switch the type of tmp between u64 and u32 based on the
environment. Do you know a good preprocessor flag or some other way
to avoid this? I searched but found nothing...
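
One common idiom that sidesteps the width problem, sketched here as a
suggestion rather than the fix that was eventually chosen, is to cast
through unsigned long, which is pointer-sized on both x86_32 and
x86_64 Linux:

	/* unsigned long is 32-bit on x86_32 and 64-bit on x86_64, so
	 * narrowing the u64 through it matches the pointer width and
	 * avoids the non-matching-size cast warning. */
	acquire_event.addr = (void *)(unsigned long)
		raw_field_value(event, "lockdep_addr", data);

(Later in the thread a memcpy()-based variant is chosen instead; see
below.)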

Thanks,
Hitoshi

2010-01-28 14:29:04

by Hitoshi Mitake

[permalink] [raw]
Subject: [PATCH] perf: Add util/include/linux/config.h for including generated autoconf.h of kernel

I'd like to add util/include/linux/config.h;
this header file includes autoconf.h, which is generated
during the build phase of the kernel.

This makes building perf depend on building the kernel.
It's a little dirty, but perf is tightly tied
to its kernel, so I don't think that is a serious problem.

My original purpose in making this patch is
knowing the size of void *.
If config.h is available,
CONFIG_ARCH_PHYS_ADDR_T_64BIT will be provided to the perf sources,
and we can know the size of void * from this config.

What do you think?

Signed-off-by: Hitoshi Mitake <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Paul Mackerras <[email protected]>
Cc: Frederic Weisbecker <[email protected]>
---
tools/perf/Makefile | 1 +
tools/perf/util/include/linux/config.h | 6 ++++++
2 files changed, 7 insertions(+), 0 deletions(-)
create mode 100644 tools/perf/util/include/linux/config.h

diff --git a/tools/perf/Makefile b/tools/perf/Makefile
index 9b173e6..54c7e87 100644
--- a/tools/perf/Makefile
+++ b/tools/perf/Makefile
@@ -334,6 +334,7 @@ LIB_H += ../../include/linux/perf_event.h
LIB_H += ../../include/linux/rbtree.h
LIB_H += ../../include/linux/list.h
LIB_H += ../../include/linux/stringify.h
+LIB_H += ../../include/generated/autoconf.h
LIB_H += util/include/linux/bitmap.h
LIB_H += util/include/linux/bitops.h
LIB_H += util/include/linux/compiler.h
diff --git a/tools/perf/util/include/linux/config.h b/tools/perf/util/include/linux/config.h
new file mode 100644
index 0000000..e3f2976
--- /dev/null
+++ b/tools/perf/util/include/linux/config.h
@@ -0,0 +1,6 @@
+#ifndef PERF_CONFIG_H
+#define PERF_CONFIG_H
+
+#include "../../../../include/generated/autoconf.h"
+
+#endif
--
1.6.5.2

2010-01-28 14:44:51

by Peter Zijlstra

[permalink] [raw]
Subject: Re: [PATCH] perf: Add util/include/linux/config.h for including generated autoconf.h of kernel

On Thu, 2010-01-28 at 23:28 +0900, Hitoshi Mitake wrote:
> I'd like to add util/include/linux/config.h,
> this header file includes autoconf.h which is generated
> by build phase of kernel.
>
> This makes building of perf depend on building of kernel.
> It's a little dirty. But perf has tight relation
> of its kernel, so I don't think that it is a serious problem.
>
> My original purpose of making this patch is
> knowing size of void *.
> If config.h is available,
> CONFIG_ARCH_PHYS_ADDR_T_64BIT will be provided for source of perf,
> we can know size of void * by this config.
>
> How do you think?

I think sizeof(void *).
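
Reading between the lines, Peter's point seems to be that the tool can
just test the pointer width directly, with no kernel config involved;
a trivial standalone illustration:

#include <stdio.h>

int main(void)
{
	/* sizeof(void *) is 4 on x86_32 and 8 on x86_64: the compiler
	 * already knows the pointer width, no autoconf.h needed. */
	if (sizeof(void *) == 8)
		printf("64-bit pointers\n");
	else
		printf("32-bit pointers\n");
	return 0;
}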

2010-01-28 22:27:31

by Paul Mackerras

[permalink] [raw]
Subject: Re: [PATCH] perf: Add util/include/linux/config.h for including generated autoconf.h of kernel

On Thu, Jan 28, 2010 at 11:28:48PM +0900, Hitoshi Mitake wrote:

> I'd like to add util/include/linux/config.h,
> this header file includes autoconf.h which is generated
> by build phase of kernel.
>
> This makes building of perf depend on building of kernel.
> It's a little dirty. But perf has tight relation
> of its kernel, so I don't think that it is a serious problem.
>
> My original purpose of making this patch is
> knowing size of void *.
> If config.h is available,
> CONFIG_ARCH_PHYS_ADDR_T_64BIT will be provided for source of perf,
> we can know size of void * by this config.
>
> How do you think?

How will this work if I built the kernel in a separate object
directory (e.g., using "make O=../builddir vmlinux")? I think that
trying to include generated files will fail in that case.

In general I think that tying perf to the kernel config is a bad idea.

Paul.

2010-01-29 14:34:08

by Jens Axboe

[permalink] [raw]
Subject: Re: [PATCH 00/12] perf lock: New subcommand "perf lock", for analyzing lock statistics

On Fri, Jan 22 2010, Hitoshi Mitake wrote:
> Adding new subcommand "perf lock" to perf.
>
> I made this patch series on
> latest perf/core of tip (ef12a141306c90336a3a10d40213ecd98624d274),
> so please apply this series to perf/core.

[snip]

I wanted to give this a go today, since I think it's pretty nifty and a
lot better than using /proc/lock_stat. But it basically spirals the
system into death [1]. How big a system did you test this on?

[1] Got this: [ 117.097918] hrtimer: interrupt took 35093901 ns

--
Jens Axboe

2010-01-30 11:45:24

by Hitoshi Mitake

[permalink] [raw]
Subject: Re: [PATCH] perf: Add util/include/linux/config.h for including generated autoconf.h of kernel

(2010年01月28日 23:44), Peter Zijlstra wrote:
> On Thu, 2010-01-28 at 23:28 +0900, Hitoshi Mitake wrote:
>
>> I'd like to add util/include/linux/config.h,
>> this header file includes autoconf.h which is generated
>> by build phase of kernel.
>>
>> This makes building of perf depend on building of kernel.
>> It's a little dirty. But perf has tight relation
>> of its kernel, so I don't think that it is a serious problem.
>>
>> My original purpose of making this patch is
>> knowing size of void *.
>> If config.h is available,
>> CONFIG_ARCH_PHYS_ADDR_T_64BIT will be provided for source of perf,
>> we can know size of void * by this config.
>>
>> How do you think?
>>
> I think sizeof(void *).
>
>
>

I was talking about a preprocessor-level method.
But thanks for your advice,
I found a way to use memcpy()!
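
A minimal sketch of what the memcpy() route might look like; this is a
reconstruction for illustration, not the patch that was actually sent:

	struct trace_acquire_event acquire_event;
	u64 raw = raw_field_value(event, "lockdep_addr", data);

	/* Copy only sizeof(void *) bytes out of the u64. On a
	 * little-endian target (x86) the low-order bytes are exactly
	 * what a 32-bit pointer needs, and on 64-bit targets the whole
	 * value is copied, so no width-dependent cast is required. */
	memcpy(&acquire_event.addr, &raw, sizeof(void *));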

Thanks
Hitoshi

2010-01-30 11:51:21

by Hitoshi Mitake

[permalink] [raw]
Subject: Re: [PATCH] perf: Add util/include/linux/config.h for including generated autoconf.h of kernel

(2010年01月29日 07:27), Paul Mackerras wrote:
> On Thu, Jan 28, 2010 at 11:28:48PM +0900, Hitoshi Mitake wrote:
>
>
>> I'd like to add util/include/linux/config.h,
>> this header file includes autoconf.h which is generated
>> by build phase of kernel.
>>
>> This makes building of perf depend on building of kernel.
>> It's a little dirty. But perf has tight relation
>> of its kernel, so I don't think that it is a serious problem.
>>
>> My original purpose of making this patch is
>> knowing size of void *.
>> If config.h is available,
>> CONFIG_ARCH_PHYS_ADDR_T_64BIT will be provided for source of perf,
>> we can know size of void * by this config.
>>
>> How do you think?
>>
> How will this work if I built the kernel in a separate object
> directory (e.g., using "make O=../builddir vmlinux")? I think that
> trying to include generated files will fail in that case.
>
> In general I think that tying perf to the kernel config is a bad idea.
>
> Paul.
>
>

Ah, sorry, I forgot about the case of a
separate build directory.

And I found a way to use memcpy(),
and sent a patch series based on it.

Thanks
Hitoshi

2010-01-30 12:00:17

by Hitoshi Mitake

[permalink] [raw]
Subject: Re: [PATCH 00/12] perf lock: New subcommand "perf lock", for analyzing lock statistics

(2010年01月29日 23:34), Jens Axboe wrote:
> On Fri, Jan 22 2010, Hitoshi Mitake wrote:
>
>> Adding new subcommand "perf lock" to perf.
>>
>> I made this patch series on
>> latest perf/core of tip (ef12a141306c90336a3a10d40213ecd98624d274),
>> so please apply this series to perf/core.
>>
> [snip]
>
> I wanted to give this a go today, since I think it's pretty nifty and a
> lot better than using /proc/lock_stat. But it basically spirals the
> system into death [1]. How big a system did you test this on?
>
> [1] Got this: [ 117.097918] hrtimer: interrupt took 35093901 ns
>
>

I tested this on a Core i7 965 + 3GB DRAM machine.
The test program was mainly "perf bench sched messaging".

Could you tell me the details of your test situation?

Thanks,
Hitoshi

2010-01-31 20:44:12

by Jens Axboe

[permalink] [raw]
Subject: Re: [PATCH 00/12] perf lock: New subcommand "perf lock", for analyzing lock statistics

On Sat, Jan 30 2010, Hitoshi Mitake wrote:
> (2010年01月29日 23:34), Jens Axboe wrote:
>> On Fri, Jan 22 2010, Hitoshi Mitake wrote:
>>
>>> Adding new subcommand "perf lock" to perf.
>>>
>>> I made this patch series on
>>> latest perf/core of tip (ef12a141306c90336a3a10d40213ecd98624d274),
>>> so please apply this series to perf/core.
>>>
>> [snip]
>>
>> I wanted to give this a go today, since I think it's pretty nifty and a
>> lot better than using /proc/lock_stat. But it basically spirals the
>> system into death [1]. How big a system did you test this on?
>>
>> [1] Got this: [ 117.097918] hrtimer: interrupt took 35093901 ns
>>
>>
>
> I tested this on Core i7 965 + 3GB DRAM machine.
> Test program is mainly "perf bench sched messaging".
>
> Could you tell me the detail of your test situation?

I tried to run it on a 64 thread box, on a fio job that was driving 80
disks. It was just a quick test, but after ~20 seconds it had not even
gotten started yet; it was still stuck setting up the jobs and
traversing sysfs to find disk stats, etc. I can try something
lighter to see if it's the cpu count or the tough job that was making
it spiral into (near) death.

--
Jens Axboe

2010-01-31 21:07:14

by Frederic Weisbecker

[permalink] [raw]
Subject: Re: [PATCH 00/12] perf lock: New subcommand "perf lock", for analyzing lock statistics

On Sun, Jan 31, 2010 at 09:44:08PM +0100, Jens Axboe wrote:
> On Sat, Jan 30 2010, Hitoshi Mitake wrote:
> > (2010年01月29日 23:34), Jens Axboe wrote:
> >> On Fri, Jan 22 2010, Hitoshi Mitake wrote:
> >>
> >>> Adding new subcommand "perf lock" to perf.
> >>>
> >>> I made this patch series on
> >>> latest perf/core of tip (ef12a141306c90336a3a10d40213ecd98624d274),
> >>> so please apply this series to perf/core.
> >>>
> >> [snip]
> >>
> >> I wanted to give this a go today, since I think it's pretty nifty and a
> >> lot better than using /proc/lock_stat. But it basically spirals the
> >> system into death [1]. How big a system did you test this on?
> >>
> >> [1] Got this: [ 117.097918] hrtimer: interrupt took 35093901 ns
> >>
> >>
> >
> > I tested this on Core i7 965 + 3GB DRAM machine.
> > Test program is mainly "perf bench sched messaging".
> >
> > Could you tell me the detail of your test situation?
>
> I tried to run it on a 64 thread box, on a fio job that was driving 80
> disks. It was just a quick test, but after ~20 seconds it had not even
> gotten started yet, it was still stuck in setting up the jobs and
> traversing sysfs for finding disk stats, etc. I can try something
> lighter to see if it's the cpu count or the tough job that was making it
> spiral into (near) death.


So you're experiencing a severe slowdown but it's not a lockup, right?

I think there are various reasons for that.

- The first is that lock events are high-frequency events, probably the
highest frequency of all the events we have, and this amount of course
is going to scale with the number of cpus.

- They are suboptimally implemented. We are copying various strings
in every lock event (acquire(d), release, contended) that could be
sent once in a lock_init event. Among these strings are the name of
the lock and the __FILE__ __LINE__ of its declaration.
Ingo applied Hitoshi's patches to tip:/perf/core a few hours ago
without the __FILE__ __LINE__ info. We are thinking about other
ways to get this info.
The name is going to move into a new lock_init event; I'm currently
working on it (a rough sketch of such an event follows after this
list).

- We are using the -M option from perf tools, which multiplexes the
event buffers, meaning every cpu commits its lock events into the
same buffer. Couple that with the two reasons above, and it scales
about as badly as possible.
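
Roughly what such a lock_init event might look like; this is a sketch
of the idea, not the actual patch (the acquire/release/contended events
would then carry only the lock address, and the tool would join on it):

TRACE_EVENT(lock_init,

	TP_PROTO(struct lockdep_map *lock),

	TP_ARGS(lock),

	TP_STRUCT__entry(
		__field(void *,	lockdep_addr)
		__string(name,	lock->name)
	),

	TP_fast_assign(
		__entry->lockdep_addr = lock;
		__assign_str(name, lock->name);
	),

	TP_printk("%p %s", __entry->lockdep_addr, __get_str(name))
);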

Given that, and the fact that you have a 64-thread box, I'm not very
surprised at such a global slowdown.

But we are working toward this and will try to make it better in this
regard.

Thanks.

2010-01-31 21:11:09

by Jens Axboe

[permalink] [raw]
Subject: Re: [PATCH 00/12] perf lock: New subcommand "perf lock", for analyzing lock statistics

On Sun, Jan 31 2010, Frederic Weisbecker wrote:
> On Sun, Jan 31, 2010 at 09:44:08PM +0100, Jens Axboe wrote:
> > On Sat, Jan 30 2010, Hitoshi Mitake wrote:
> > > (2010年01月29日 23:34), Jens Axboe wrote:
> > >> On Fri, Jan 22 2010, Hitoshi Mitake wrote:
> > >>
> > >>> Adding new subcommand "perf lock" to perf.
> > >>>
> > >>> I made this patch series on
> > >>> latest perf/core of tip (ef12a141306c90336a3a10d40213ecd98624d274),
> > >>> so please apply this series to perf/core.
> > >>>
> > >> [snip]
> > >>
> > >> I wanted to give this a go today, since I think it's pretty nifty and a
> > >> lot better than using /proc/lock_stat. But it basically spirals the
> > >> system into death [1]. How big a system did you test this on?
> > >>
> > >> [1] Got this: [ 117.097918] hrtimer: interrupt took 35093901 ns
> > >>
> > >>
> > >
> > > I tested this on Core i7 965 + 3GB DRAM machine.
> > > Test program is mainly "perf bench sched messaging".
> > >
> > > Could you tell me the detail of your test situation?
> >
> > I tried to run it on a 64 thread box, on a fio job that was driving 80
> > disks. It was just a quick test, but after ~20 seconds it had not even
> > gotten started yet, it was still stuck in setting up the jobs and
> > traversing sysfs for finding disk stats, etc. I can try something
> > lighter to see if it's the cpu count or the tough job that was making it
> > spiral into (near) death.
>
>
> So you're experiencing a severe slowdown but it's not a lockup, right?

Right, I was eventually able to ctrl-z out of the test and kill the app.
So it wasn't hung, just completely livelocked. Any attempt to log in via
ssh made no progress, and even key presses took seconds.

> I think there are various reasons for that.
>
> - The first is that lock events are high freq events, probably the highest
> frequency of all events we have, and this amount of course is going to
> scale with the number of cpus.
>
> - They are suboptimally implemented. We are copying various strings
> in every lock events (acquire(d), release, contended) that could be
> sent once in a lock_init event. Among these strings are the name of
> the lock and the __FILE__ __LINE__ of its declaration.
> Ingo has applied Hitoshi's patches in tip:/perf/core few hours ago
> without the __FILE__ __LINE__ infos. We are thinking about other
> ways to get these infos.
> The name is going to move in a new lock_init event, I'm currently
> working at it.
>
> - We are using the -M option from perf tools which multiplexes every
> event buffers, which means every cpu commit their lock events in
> the same buffer. Couple that with the two above reasons, it's supposed
> to scale at worst.
>
> Given that and the fact you have a 64 threads box, I'm not very
> surprised of such global slowdown.
>
> But we are working toward this and will try to make it better in this
> regard.

OK, I'll stay tuned and watch for developments in this area. Thanks!

--
Jens Axboe

2010-02-01 09:15:56

by Peter Zijlstra

[permalink] [raw]
Subject: Re: [PATCH 00/12] perf lock: New subcommand "perf lock", for analyzing lock statistics

On Sun, 2010-01-31 at 22:07 +0100, Frederic Weisbecker wrote:
>
> - We are using the -M option from perf tools which multiplexes every
> event buffers, which means every cpu commit their lock events in
> the same buffer. Couple that with the two above reasons, it's supposed
> to scale at worst.

Why are you doing that? That seems like asking for trouble..

2010-02-01 09:23:31

by Ingo Molnar

[permalink] [raw]
Subject: Re: [PATCH 00/12] perf lock: New subcommand "perf lock", for analyzing lock statistics


* Peter Zijlstra <[email protected]> wrote:

> On Sun, 2010-01-31 at 22:07 +0100, Frederic Weisbecker wrote:
> >
> > - We are using the -M option from perf tools which multiplexes every
> > event buffers, which means every cpu commit their lock events in
> > the same buffer. Couple that with the two above reasons, it's supposed
> > to scale at worst.
>
> Why are you doing that? That seems like asking for trouble..

But as I said, I already tried with -M off and it didn't cure the slowdown.

That -M was just copied over from perf sched (where it makes sense).

Ingo

2010-02-01 13:23:13

by Jens Axboe

[permalink] [raw]
Subject: Re: [PATCH 00/12] perf lock: New subcommand "perf lock", for analyzing lock statistics

On Sun, Jan 31 2010, Jens Axboe wrote:
> > I tested this on Core i7 965 + 3GB DRAM machine.
> > Test program is mainly "perf bench sched messaging".
> >
> > Could you tell me the detail of your test situation?
>
> I tried to run it on a 64 thread box, on a fio job that was driving 80
> disks. It was just a quick test, but after ~20 seconds it had not even
> gotten started yet, it was still stuck in setting up the jobs and
> traversing sysfs for finding disk stats, etc. I can try something
> lighter to see if it's the cpu count or the tough job that was making it
> spiral into (near) death.

A simple 'perf lock rec ls' exhibits the same symptoms. It's been
running for a minute or so now, no output. Even local network pings take
500-1000ms.

--
Jens Axboe

2010-02-01 17:58:17

by Frederic Weisbecker

[permalink] [raw]
Subject: Re: [PATCH 00/12] perf lock: New subcommand "perf lock", for analyzing lock statistics

On Mon, Feb 01, 2010 at 10:22:39AM +0100, Ingo Molnar wrote:
>
> * Peter Zijlstra <[email protected]> wrote:
>
> > On Sun, 2010-01-31 at 22:07 +0100, Frederic Weisbecker wrote:
> > >
> > > - We are using the -M option from perf tools which multiplexes every
> > > event buffers, which means every cpu commit their lock events in
> > > the same buffer. Couple that with the two above reasons, it's supposed
> > > to scale at worst.
> >
> > Why are you doing that? That seems like asking for trouble..
>
> But as i said i already tried with -M off and it didnt cure the slowdown.


Ah... I've tested the lock events on an Atom (two hardware threads) and
there is quite a slowdown too.

Anyway, the patches to remove the string copies from the critical lock
events are nearly ready. I just suspect there are other problems
somewhere else that cause the slowdown.


>
> that -M was just copied over from perf sched (where it makes sense).
>
> Ingo



For now it is necessary, because we need the events to be ordered by
time; otherwise the state machine in perf lock would be broken.
We can certainly reorder the events in post-processing, but lock events
tend to grow the file _very_ quickly, so I fear this is going to require
a lot of memory. I'll try to find a smart way to fix that.
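
One possible shape for such a post-processing reorder, sketched with
the kernel-style list helpers that perf already carries (the include
path is approximate); queue_event() and struct ordered_event are
hypothetical names, not existing perf code:

#include "../../include/linux/list.h"

struct ordered_event {
	u64			timestamp;
	struct list_head	list;
	/* decoded event payload would live here */
};

static LIST_HEAD(events);

/* Insert a newly read event so the list stays sorted by timestamp;
 * events from different per-cpu buffers arrive slightly out of order. */
static void queue_event(struct ordered_event *ev)
{
	struct ordered_event *pos;

	/* Walk backwards: a new event is usually near the tail, so the
	 * common case is O(1) rather than a full list scan. */
	list_for_each_entry_reverse(pos, &events, list) {
		if (pos->timestamp <= ev->timestamp) {
			list_add(&ev->list, &pos->list);
			return;
		}
	}
	list_add(&ev->list, &events);
}

The state machine would then consume events from the head of the list
once every per-cpu buffer has advanced past a given timestamp, which
bounds the memory to the reordering window instead of the whole file.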