2022-02-09 08:16:05

by Namhyung Kim

Subject: [RFC 00/12] locking: Separate lock tracepoints from lockdep/lock_stat (v1)

Hello,

There have been some requests for low-overhead kernel lock contention
monitoring. The kernel has CONFIG_LOCK_STAT to provide such an
infrastructure, either via /proc/lock_stat or via the tracepoints directly.

However, it's not lightweight and is hard to use in production. So
I'm trying to separate out the tracepoints and use them as a base to
build a new monitoring system.

As lockdep and lock_stat provide good hooks in the lock functions,
it'd be natural to reuse them. Actually I tried to use lockdep as is
but with the functionality disabled at runtime (debug_locks = 0,
lock_stat = 0). But it still has unacceptable overhead, and the
lockdep data structures also increase the memory footprint
unnecessarily.

So I'm proposing a separate tracepoint-only configuration that keeps
the lockdep_map with only the minimal information needed for the
tracepoints (for now, just the name). The existing lockdep hooks can
then be used for the tracepoints.
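
With CONFIG_LOCK_TRACEPOINTS and lockdep itself disabled, the idea is
that the map boils down to something minimal, roughly like this (a
sketch of the intent only - the actual definition is in the series):

struct lockdep_map {
	const char	*name;
};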

Patches 01-06 are preparation for the work. In a few places, the
kernel calls the lockdep annotations explicitly to deal with
limitations in the lockdep implementation. In my understanding, those
annotations are not needed to analyze lock contention.

To make matters worse, they rely on compiler optimization (or macro
magic) to get rid of the annotations and their arguments when lockdep
is not configured.

But that's no longer true once the lock tracepoints are added, and it
would cause build errors. So I added #ifdef guards for LOCKDEP in the
code to prevent such errors.

In patch 07 I mechanically changed most of the code that depends on
CONFIG_LOCKDEP or CONFIG_DEBUG_LOCK_ALLOC to use CONFIG_LOCK_INFO.
This paves the way for the code to be shared between lockdep and the
tracepoints. Mostly, it makes sure that locks are initialized with a
proper name, as in patches 08 and 09.

I intentionally left some places unchanged - for example, timer and
workqueue still depend on LOCKDEP explicitly since they use some
lockdep annotations to work around the "held lock freed" warnings.
The ocfs2 code directly accesses lockdep_map.key, so I didn't touch it
for now. And RCU was left out because it generates too much overhead
due to rcu_read_lock(). Maybe I need to revisit some of them later.

I added CONFIG_LOCK_TRACEPOINTS in patch 10 to make this optional.
I found that it adds 2~3% overhead when I run `perf bench sched
messaging`, even when the tracepoints are disabled. The benchmark
creates a lot of processes and makes them communicate via socket pairs
(or pipes). I measured that around 15% of lock acquisitions result in
contention, mostly on spinlocks (alc->lock and u->lock).

I ran perf record + report on the workload and it showed that 50% of
the CPU cycles are spent in the spinlock slow path, so the result is
highly affected by that path. The LOCK_CONTENDED() macro transforms
the spinlock code (and others) to try a trylock first and then fall
back to the real lock function if it fails. Thus it adds more (atomic)
operations and cache line bouncing in the contended case:

#define LOCK_CONTENDED(_lock, try, lock)			\
do {								\
	if (!try(_lock)) {					\
		lock_contended(&(_lock)->dep_map, _RET_IP_);	\
		lock(_lock);					\
	}							\
	lock_acquired(&(_lock)->dep_map, _RET_IP_);		\
} while (0)

If I modify the macro not to use the trylock and to call the real
lock function directly (so the lock_contended tracepoint would always
be called, if enabled), the overhead goes down to 0.x% when the
tracepoints are disabled.

I don't have a good solution as long as we use the LOCK_CONTENDED()
macro to separate the contended locking path. Maybe we can make it
call some (generic) slow-path lock function directly after the trylock
fails, as sketched below. Or move the lockdep annotations into each
lock function body and get rid of the LOCK_CONTENDED() macro entirely.
Ideas?
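
Something like this, roughly (just to illustrate the first idea - the
lock_slow entry points are hypothetical and would have to be provided
per lock type so the failed fast path isn't retried):

#define LOCK_CONTENDED(_lock, try, lock_slow)			\
do {								\
	if (!try(_lock)) {					\
		lock_contended(&(_lock)->dep_map, _RET_IP_);	\
		lock_slow(_lock);				\
	}							\
	lock_acquired(&(_lock)->dep_map, _RET_IP_);		\
} while (0)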

Actually, patch 11 handles the same issue in the mutex code. The fast
version of the mutex trylock was attempted only when LOCKDEP is not
enabled, so it affects mutex lock performance in the uncontended case
too. So I partially reverted the change from patch 07 to use the fast
functions with the lock tracepoints as well. Maybe we can use them
with LOCKDEP too?
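
For reference, the fast path in question looks roughly like this in
kernel/locking/mutex.c when the lockdep hooks are compiled out (a
single cmpxchg of the owner field before falling back to the slow
path):

void __sched mutex_lock(struct mutex *lock)
{
	might_sleep();

	if (!__mutex_trylock_fast(lock))
		__mutex_lock_slowpath(lock);
}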

The last patch 12 might be controversial: I'd like to move the
lock_acquired annotation into the if (!try) block of the
LOCK_CONTENDED macro so that it's only called when there is
contention.
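
That would make the macro look roughly like this (my reading of the
intent; the exact change is in the patch itself):

#define LOCK_CONTENDED(_lock, try, lock)			\
do {								\
	if (!try(_lock)) {					\
		lock_contended(&(_lock)->dep_map, _RET_IP_);	\
		lock(_lock);					\
		lock_acquired(&(_lock)->dep_map, _RET_IP_);	\
	}							\
} while (0)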

Eventually I'm mostly interested in the contended locks only, and I
want to reduce the overhead in the fast path. With that move, it'd be
easy to track contended locks, including timing, using the two
tracepoints.

It'd change the lock hold time calculation in lock_stat for the fast
path, but I assume the time difference between lock_acquire and
lock_acquired is small when the lock is not contended, so I think we
can use the timestamp from lock_acquire. If that's not acceptable, we
might consider adding a new tracepoint to track the timing of
contended locks.
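
For what it's worth, a tracepoint consumer could then derive the wait
time of a contended lock from the two events alone, along these lines
(only a sketch - the per-task/per-lock bookkeeping helpers here are
hypothetical, not something provided by this series):

	/* probe for lock_contended(lock, ip) */
	store_ts(current, lock, local_clock());

	/* probe for lock_acquired(lock, ip) */
	wait_time = local_clock() - fetch_ts(current, lock);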

This series is based on the current tip/locking/core, and you can get
it from the 'locking/tracepoint-v1' branch in my tree at:

git://git.kernel.org/pub/scm/linux/kernel/git/namhyung/linux-perf.git


Thanks,
Namhyung


Cc: Thomas Gleixner <[email protected]>
Cc: Steven Rostedt <[email protected]>
Cc: Tejun Heo <[email protected]>
Cc: Byungchul Park <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]


Namhyung Kim (12):
locking: Pass correct outer wait type info
cgroup: rstat: Make cgroup_rstat_cpu_lock name readable
timer: Protect lockdep functions with #ifdef
workqueue: Protect lockdep functions with #ifdef
drm/i915: Protect lockdep functions with #ifdef
btrfs: change lockdep class size check using ks->names
locking: Introduce CONFIG_LOCK_INFO
locking/mutex: Init name properly w/ CONFIG_LOCK_INFO
locking: Add more static lockdep init macros
locking: Add CONFIG_LOCK_TRACEPOINTS option
locking/mutex: Revive fast functions for LOCK_TRACEPOINTS
locking: Move lock_acquired() from the fast path

drivers/gpu/drm/drm_connector.c | 7 +-
drivers/gpu/drm/i915/i915_sw_fence.h | 2 +-
drivers/gpu/drm/i915/intel_wakeref.c | 3 +
drivers/gpu/drm/i915/selftests/lib_sw_fence.h | 2 +-
.../net/wireless/intel/iwlwifi/iwl-trans.c | 4 +-
.../net/wireless/intel/iwlwifi/iwl-trans.h | 2 +-
drivers/tty/tty_ldsem.c | 2 +-
fs/btrfs/disk-io.c | 4 +-
fs/btrfs/disk-io.h | 2 +-
fs/cifs/connect.c | 2 +-
fs/kernfs/file.c | 2 +-
include/linux/completion.h | 2 +-
include/linux/jbd2.h | 2 +-
include/linux/kernfs.h | 2 +-
include/linux/kthread.h | 2 +-
include/linux/local_lock_internal.h | 18 +-
include/linux/lockdep.h | 170 ++++++++++++++++--
include/linux/lockdep_types.h | 8 +-
include/linux/mmu_notifier.h | 2 +-
include/linux/mutex.h | 12 +-
include/linux/percpu-rwsem.h | 4 +-
include/linux/regmap.h | 4 +-
include/linux/rtmutex.h | 14 +-
include/linux/rwlock_api_smp.h | 4 +-
include/linux/rwlock_rt.h | 4 +-
include/linux/rwlock_types.h | 11 +-
include/linux/rwsem.h | 14 +-
include/linux/seqlock.h | 4 +-
include/linux/spinlock_api_smp.h | 4 +-
include/linux/spinlock_rt.h | 4 +-
include/linux/spinlock_types.h | 4 +-
include/linux/spinlock_types_raw.h | 28 ++-
include/linux/swait.h | 2 +-
include/linux/tty_ldisc.h | 2 +-
include/linux/wait.h | 2 +-
include/linux/ww_mutex.h | 6 +-
include/media/v4l2-ctrls.h | 2 +-
include/net/sock.h | 2 +-
include/trace/events/lock.h | 4 +-
kernel/cgroup/rstat.c | 7 +-
kernel/locking/Makefile | 1 +
kernel/locking/lockdep.c | 40 ++++-
kernel/locking/mutex-debug.c | 2 +-
kernel/locking/mutex.c | 22 ++-
kernel/locking/mutex.h | 7 +
kernel/locking/percpu-rwsem.c | 2 +-
kernel/locking/rtmutex_api.c | 10 +-
kernel/locking/rwsem.c | 4 +-
kernel/locking/spinlock.c | 2 +-
kernel/locking/spinlock_debug.c | 4 +-
kernel/locking/spinlock_rt.c | 8 +-
kernel/locking/ww_rt_mutex.c | 2 +-
kernel/printk/printk.c | 14 +-
kernel/rcu/update.c | 27 +--
kernel/time/timer.c | 8 +-
kernel/workqueue.c | 13 ++
lib/Kconfig.debug | 14 ++
mm/memcontrol.c | 7 +-
mm/mmu_notifier.c | 2 +-
net/core/dev.c | 2 +-
net/sunrpc/svcsock.c | 2 +-
net/sunrpc/xprtsock.c | 2 +-
62 files changed, 391 insertions(+), 180 deletions(-)


base-commit: 1dc01abad6544cb9d884071b626b706e37aa9601
--
2.35.0.263.gb82422642f-goog



2022-02-09 08:45:07

by Namhyung Kim

Subject: [PATCH 09/12] locking: Add more static lockdep init macros

Add the STATIC_LOCKDEP_MAP_INIT_{WAIT,TYPE} macros and use them for
various lock init code. This helps to support different
implementations of CONFIG_LOCK_INFO, such as lockdep and tracepoints.

Signed-off-by: Namhyung Kim <[email protected]>
---
drivers/gpu/drm/drm_connector.c | 5 ++---
include/linux/local_lock_internal.h | 10 ++++------
include/linux/lockdep.h | 23 +++++++++++++++++------
include/linux/mutex.h | 6 ++----
include/linux/rtmutex.h | 8 +++-----
include/linux/rwlock_types.h | 5 +----
include/linux/rwsem.h | 8 +++-----
include/linux/spinlock_types_raw.h | 24 ++++++++----------------
kernel/printk/printk.c | 10 ++++------
kernel/rcu/update.c | 27 +++++++++------------------
mm/memcontrol.c | 5 ++---
11 files changed, 55 insertions(+), 76 deletions(-)

diff --git a/drivers/gpu/drm/drm_connector.c b/drivers/gpu/drm/drm_connector.c
index 94931b32a491..7f470de2ef2b 100644
--- a/drivers/gpu/drm/drm_connector.c
+++ b/drivers/gpu/drm/drm_connector.c
@@ -677,9 +677,8 @@ const char *drm_get_connector_force_name(enum drm_connector_force force)
}

#ifdef CONFIG_LOCK_INFO
-static struct lockdep_map connector_list_iter_dep_map = {
- .name = "drm_connector_list_iter"
-};
+static struct lockdep_map connector_list_iter_dep_map =
+ STATIC_LOCKDEP_MAP_INIT("drm_connector_list_iter", NULL);
#endif

/**
diff --git a/include/linux/local_lock_internal.h b/include/linux/local_lock_internal.h
index 56f03f588aa7..dd280fcfadec 100644
--- a/include/linux/local_lock_internal.h
+++ b/include/linux/local_lock_internal.h
@@ -16,12 +16,10 @@ typedef struct {
} local_lock_t;

#ifdef CONFIG_LOCK_INFO
-# define LOCAL_LOCK_DEBUG_INIT(lockname) \
- .dep_map = { \
- .name = #lockname, \
- .wait_type_inner = LD_WAIT_CONFIG, \
- .lock_type = LD_LOCK_PERCPU, \
- }, \
+# define LOCAL_LOCK_DEBUG_INIT(lockname) \
+ .dep_map = STATIC_LOCKDEP_MAP_INIT_TYPE(#lockname, NULL, \
+ LD_WAIT_CONFIG, LD_WAIT_INV, \
+ LD_LOCK_PERCPU), \
.owner = NULL,

static inline void local_lock_acquire(local_lock_t *l)
diff --git a/include/linux/lockdep.h b/include/linux/lockdep.h
index 0cc2b338a006..38cbef7601c7 100644
--- a/include/linux/lockdep.h
+++ b/include/linux/lockdep.h
@@ -237,6 +237,19 @@ static inline void lockdep_init_map(struct lockdep_map *lock, const char *name,
#define lockdep_set_novalidate_class(lock) \
lockdep_set_class_and_name(lock, &__lockdep_no_validate__, #lock)

+/*
+ * To initialize a lockdep_map statically use this macro.
+ * Note that _name must not be NULL.
+ */
+#define STATIC_LOCKDEP_MAP_INIT(_name, _key) \
+ { .name = (_name), .key = (void *)(_key), }
+
+#define STATIC_LOCKDEP_MAP_INIT_WAIT(_name, _key, _inner) \
+ { .name = (_name), .key = (void *)(_key), .wait_type_inner = (_inner), }
+
+#define STATIC_LOCKDEP_MAP_INIT_TYPE(_name, _key, _inner, _outer, _type) \
+ { .name = (_name), .key = (void *)(_key), .wait_type_inner = (_inner), \
+ .wait_type_outer = (_outer), .lock_type = (_type), }
/*
* Compare locking classes
*/
@@ -377,6 +390,10 @@ static inline void lockdep_set_selftest_task(struct task_struct *task)

#define lockdep_set_novalidate_class(lock) do { } while (0)

+#define STATIC_LOCKDEP_MAP_INIT(_name, _key) { }
+#define STATIC_LOCKDEP_MAP_INIT_WAIT(_name, _key, _inner) { }
+#define STATIC_LOCKDEP_MAP_INIT_TYPE(_name, _key, _inner, _outer, _type) { }
+
/*
* We don't define lockdep_match_class() and lockdep_match_key() for !LOCKDEP
* case since the result is not well defined and the caller should rather
@@ -432,12 +449,6 @@ enum xhlock_context_t {
};

#define lockdep_init_map_crosslock(m, n, k, s) do {} while (0)
-/*
- * To initialize a lockdep_map statically use this macro.
- * Note that _name must not be NULL.
- */
-#define STATIC_LOCKDEP_MAP_INIT(_name, _key) \
- { .name = (_name), .key = (void *)(_key), }

static inline void lockdep_invariant_state(bool force) {}
static inline void lockdep_free_task(struct task_struct *task) {}
diff --git a/include/linux/mutex.h b/include/linux/mutex.h
index 369c1abbf3d0..b2d018250a41 100644
--- a/include/linux/mutex.h
+++ b/include/linux/mutex.h
@@ -22,10 +22,8 @@

#ifdef CONFIG_LOCK_INFO
# define __DEP_MAP_MUTEX_INITIALIZER(lockname) \
- , .dep_map = { \
- .name = #lockname, \
- .wait_type_inner = LD_WAIT_SLEEP, \
- }
+ , .dep_map = STATIC_LOCKDEP_MAP_INIT_WAIT(#lockname, \
+ NULL, LD_WAIT_SLEEP)
#else
# define __DEP_MAP_MUTEX_INITIALIZER(lockname)
#endif
diff --git a/include/linux/rtmutex.h b/include/linux/rtmutex.h
index 8eafdd6dcf35..887ffcd5fc09 100644
--- a/include/linux/rtmutex.h
+++ b/include/linux/rtmutex.h
@@ -77,11 +77,9 @@ do { \
} while (0)

#ifdef CONFIG_LOCK_INFO
-#define __DEP_MAP_RT_MUTEX_INITIALIZER(mutexname) \
- .dep_map = { \
- .name = #mutexname, \
- .wait_type_inner = LD_WAIT_SLEEP, \
- }
+#define __DEP_MAP_RT_MUTEX_INITIALIZER(mutexname) \
+ .dep_map = STATIC_LOCKDEP_MAP_INIT_WAIT(#mutexname, \
+ NULL, LD_WAIT_SLEEP)
#else
#define __DEP_MAP_RT_MUTEX_INITIALIZER(mutexname)
#endif
diff --git a/include/linux/rwlock_types.h b/include/linux/rwlock_types.h
index 3e621bfd7cd0..438d8639a229 100644
--- a/include/linux/rwlock_types.h
+++ b/include/linux/rwlock_types.h
@@ -7,10 +7,7 @@

#ifdef CONFIG_LOCK_INFO
# define RW_DEP_MAP_INIT(lockname) \
- .dep_map = { \
- .name = #lockname, \
- .wait_type_inner = LD_WAIT_CONFIG, \
- }
+ .dep_map = STATIC_LOCKDEP_MAP_INIT_WAIT(#lockname, NULL, LD_WAIT_CONFIG)
#else
# define RW_DEP_MAP_INIT(lockname)
#endif
diff --git a/include/linux/rwsem.h b/include/linux/rwsem.h
index c488485861f5..39126e6d97a1 100644
--- a/include/linux/rwsem.h
+++ b/include/linux/rwsem.h
@@ -17,11 +17,9 @@
#include <linux/err.h>

#ifdef CONFIG_LOCK_INFO
-# define __RWSEM_DEP_MAP_INIT(lockname) \
- .dep_map = { \
- .name = #lockname, \
- .wait_type_inner = LD_WAIT_SLEEP, \
- },
+# define __RWSEM_DEP_MAP_INIT(lockname) \
+ .dep_map = STATIC_LOCKDEP_MAP_INIT_WAIT(#lockname, \
+ NULL, LD_WAIT_SLEEP),
#else
# define __RWSEM_DEP_MAP_INIT(lockname)
#endif
diff --git a/include/linux/spinlock_types_raw.h b/include/linux/spinlock_types_raw.h
index 564092a30cc4..006250640e76 100644
--- a/include/linux/spinlock_types_raw.h
+++ b/include/linux/spinlock_types_raw.h
@@ -27,23 +27,15 @@ typedef struct raw_spinlock {
#define SPINLOCK_OWNER_INIT ((void *)-1L)

#ifdef CONFIG_LOCK_INFO
-# define RAW_SPIN_DEP_MAP_INIT(lockname) \
- .dep_map = { \
- .name = #lockname, \
- .wait_type_inner = LD_WAIT_SPIN, \
- }
-# define SPIN_DEP_MAP_INIT(lockname) \
- .dep_map = { \
- .name = #lockname, \
- .wait_type_inner = LD_WAIT_CONFIG, \
- }
+# define RAW_SPIN_DEP_MAP_INIT(lockname) \
+ .dep_map = STATIC_LOCKDEP_MAP_INIT_WAIT(#lockname, NULL, LD_WAIT_SPIN)

-# define LOCAL_SPIN_DEP_MAP_INIT(lockname) \
- .dep_map = { \
- .name = #lockname, \
- .wait_type_inner = LD_WAIT_CONFIG, \
- .lock_type = LD_LOCK_PERCPU, \
- }
+# define SPIN_DEP_MAP_INIT(lockname) \
+ .dep_map = STATIC_LOCKDEP_MAP_INIT_WAIT(#lockname, NULL, LD_WAIT_CONFIG)
+
+# define LOCAL_SPIN_DEP_MAP_INIT(lockname) \
+ .dep_map = STATIC_LOCKDEP_MAP_INIT_TYPE(#lockname, NULL, LD_WAIT_CONFIG,\
+ LD_WAIT_INV, LD_LOCK_PERCPU)
#else
# define RAW_SPIN_DEP_MAP_INIT(lockname)
# define SPIN_DEP_MAP_INIT(lockname)
diff --git a/kernel/printk/printk.c b/kernel/printk/printk.c
index e45664e0ca30..7889df01a378 100644
--- a/kernel/printk/printk.c
+++ b/kernel/printk/printk.c
@@ -94,9 +94,8 @@ EXPORT_SYMBOL_GPL(console_drivers);
int __read_mostly suppress_printk;

#ifdef CONFIG_LOCK_INFO
-static struct lockdep_map console_lock_dep_map = {
- .name = "console_lock"
-};
+static struct lockdep_map console_lock_dep_map =
+ STATIC_LOCKDEP_MAP_INIT("console_lock", NULL);
#endif

enum devkmsg_log_bits {
@@ -1753,9 +1752,8 @@ SYSCALL_DEFINE3(syslog, int, type, char __user *, buf, int, len)
*/

#ifdef CONFIG_LOCK_INFO
-static struct lockdep_map console_owner_dep_map = {
- .name = "console_owner"
-};
+static struct lockdep_map console_owner_dep_map =
+ STATIC_LOCKDEP_MAP_INIT("console_owner", NULL);
#endif

static DEFINE_RAW_SPINLOCK(console_owner_lock);
diff --git a/kernel/rcu/update.c b/kernel/rcu/update.c
index 156892c22bb5..8202ab6ddb4c 100644
--- a/kernel/rcu/update.c
+++ b/kernel/rcu/update.c
@@ -243,30 +243,21 @@ core_initcall(rcu_set_runtime_mode);

#ifdef CONFIG_DEBUG_LOCK_ALLOC
static struct lock_class_key rcu_lock_key;
-struct lockdep_map rcu_lock_map = {
- .name = "rcu_read_lock",
- .key = &rcu_lock_key,
- .wait_type_outer = LD_WAIT_FREE,
- .wait_type_inner = LD_WAIT_CONFIG, /* PREEMPT_RT implies PREEMPT_RCU */
-};
+struct lockdep_map rcu_lock_map = /* PREEMPT_RT implies PREEMPT_RCU */
+ STATIC_LOCKDEP_MAP_INIT_TYPE("rcu_read_lock", &rcu_lock_key,
+ LD_WAIT_CONFIG, LD_WAIT_FREE, 0);
EXPORT_SYMBOL_GPL(rcu_lock_map);

static struct lock_class_key rcu_bh_lock_key;
-struct lockdep_map rcu_bh_lock_map = {
- .name = "rcu_read_lock_bh",
- .key = &rcu_bh_lock_key,
- .wait_type_outer = LD_WAIT_FREE,
- .wait_type_inner = LD_WAIT_CONFIG, /* PREEMPT_RT makes BH preemptible. */
-};
+struct lockdep_map rcu_bh_lock_map = /* PREEMPT_RT makes BH preemptable. */
+ STATIC_LOCKDEP_MAP_INIT_TYPE("rcu_read_lock_bh", &rcu_bh_lock_key,
+ LD_WAIT_CONFIG, LD_WAIT_FREE, 0);
EXPORT_SYMBOL_GPL(rcu_bh_lock_map);

static struct lock_class_key rcu_sched_lock_key;
-struct lockdep_map rcu_sched_lock_map = {
- .name = "rcu_read_lock_sched",
- .key = &rcu_sched_lock_key,
- .wait_type_outer = LD_WAIT_FREE,
- .wait_type_inner = LD_WAIT_SPIN,
-};
+struct lockdep_map rcu_sched_lock_map =
+ STATIC_LOCKDEP_MAP_INIT_TYPE("rcu_read_lock_sched", &rcu_sched_lock_key,
+ LD_WAIT_SPIN, LD_WAIT_FREE, 0);
EXPORT_SYMBOL_GPL(rcu_sched_lock_map);

// Tell lockdep when RCU callbacks are being invoked.
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index d4ecfdd5eb8f..a561a6c66b2f 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -1672,9 +1672,8 @@ static int mem_cgroup_soft_reclaim(struct mem_cgroup *root_memcg,
}

#ifdef CONFIG_LOCK_INFO
-static struct lockdep_map memcg_oom_lock_dep_map = {
- .name = "memcg_oom_lock",
-};
+static struct lockdep_map memcg_oom_lock_dep_map =
+ STATIC_LOCKDEP_MAP_INIT("memcg_oom_lock", NULL);
#endif

static DEFINE_SPINLOCK(memcg_oom_lock);
--
2.35.0.263.gb82422642f-goog


2022-02-09 09:44:25

by Namhyung Kim

Subject: [PATCH 03/12] timer: Protect lockdep functions with #ifdef

With the upcoming lock tracepoints config, some of the lockdep
functions would be defined without CONFIG_LOCKDEP actually being
enabled. The existing code assumes those functions will be removed by
the preprocessor, but that's no longer the case. Let's protect the
code with explicit #ifdefs.

Cc: Thomas Gleixner <[email protected]>
Signed-off-by: Namhyung Kim <[email protected]>
---
kernel/time/timer.c | 8 +++++++-
1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/kernel/time/timer.c b/kernel/time/timer.c
index 85f1021ad459..4af95dbf6435 100644
--- a/kernel/time/timer.c
+++ b/kernel/time/timer.c
@@ -794,7 +794,10 @@ static void do_init_timer(struct timer_list *timer,
if (WARN_ON_ONCE(flags & ~TIMER_INIT_FLAGS))
flags &= TIMER_INIT_FLAGS;
timer->flags = flags | raw_smp_processor_id();
+
+#ifdef CONFIG_LOCKDEP
lockdep_init_map(&timer->lockdep_map, name, key, 0);
+#endif
}

/**
@@ -1409,19 +1412,22 @@ static void call_timer_fn(struct timer_list *timer,
struct lockdep_map lockdep_map;

lockdep_copy_map(&lockdep_map, &timer->lockdep_map);
-#endif
+
/*
* Couple the lock chain with the lock chain at
* del_timer_sync() by acquiring the lock_map around the fn()
* call here and in del_timer_sync().
*/
lock_map_acquire(&lockdep_map);
+#endif

trace_timer_expire_entry(timer, baseclk);
fn(timer);
trace_timer_expire_exit(timer);

+#ifdef CONFIG_LOCKDEP
lock_map_release(&lockdep_map);
+#endif

if (count != preempt_count()) {
WARN_ONCE(1, "timer: %pS preempt leak: %08x -> %08x\n",
--
2.35.0.263.gb82422642f-goog


2022-02-09 10:07:09

by Namhyung Kim

Subject: [PATCH 02/12] cgroup: rstat: Make cgroup_rstat_cpu_lock name readable

raw_spin_lock_init() uses its argument to name the lockdep map. But
passing the per_cpu_ptr() macro directly produces a very, very long
name, since it's expanded like below:

({ do { const void *__vpp_verify = (typeof((&cgroup_rstat_cpu_lock) ...

Let's fix it by passing a local variable instead. With this change,
the name now looks like:

cgrp_rstat_cpu_lock

Cc: Tejun Heo <[email protected]>
Cc: Zefan Li <[email protected]>
Cc: Johannes Weiner <[email protected]>
Cc: [email protected]
Signed-off-by: Namhyung Kim <[email protected]>
---
kernel/cgroup/rstat.c | 7 +++++--
1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/kernel/cgroup/rstat.c b/kernel/cgroup/rstat.c
index 9d331ba44870..d1845f1196c9 100644
--- a/kernel/cgroup/rstat.c
+++ b/kernel/cgroup/rstat.c
@@ -286,9 +286,12 @@ void cgroup_rstat_exit(struct cgroup *cgrp)
void __init cgroup_rstat_boot(void)
{
int cpu;
+ raw_spinlock_t *cgrp_rstat_cpu_lock;

- for_each_possible_cpu(cpu)
- raw_spin_lock_init(per_cpu_ptr(&cgroup_rstat_cpu_lock, cpu));
+ for_each_possible_cpu(cpu) {
+ cgrp_rstat_cpu_lock = per_cpu_ptr(&cgroup_rstat_cpu_lock, cpu);
+ raw_spin_lock_init(cgrp_rstat_cpu_lock);
+ }
}

/*
--
2.35.0.263.gb82422642f-goog


2022-02-09 11:09:19

by Namhyung Kim

Subject: [PATCH 04/12] workqueue: Protect lockdep functions with #ifdef

With the upcoming lock tracepoints config, some of the lockdep
functions would be defined without CONFIG_LOCKDEP actually being
enabled. The existing code assumes those functions will be removed by
the preprocessor, but that's no longer the case. Let's protect the
code with explicit #ifdefs.

Cc: Tejun Heo <[email protected]>
Cc: Lai Jiangshan <[email protected]>
Signed-off-by: Namhyung Kim <[email protected]>
---
kernel/workqueue.c | 13 +++++++++++++
1 file changed, 13 insertions(+)

diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 33f1106b4f99..405e27385f74 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -2279,8 +2279,11 @@ __acquires(&pool->lock)

raw_spin_unlock_irq(&pool->lock);

+#ifdef CONFIG_LOCKDEP
lock_map_acquire(&pwq->wq->lockdep_map);
lock_map_acquire(&lockdep_map);
+#endif
+
/*
* Strictly speaking we should mark the invariant state without holding
* any locks, that is, before these two lock_map_acquire()'s.
@@ -2310,8 +2313,11 @@ __acquires(&pool->lock)
* point will only record its address.
*/
trace_workqueue_execute_end(work, worker->current_func);
+
+#ifdef CONFIG_LOCKDEP
lock_map_release(&lockdep_map);
lock_map_release(&pwq->wq->lockdep_map);
+#endif

if (unlikely(in_atomic() || lockdep_depth(current) > 0)) {
pr_err("BUG: workqueue leaked lock or atomic: %s/0x%08x/%d\n"
@@ -2824,8 +2830,10 @@ void flush_workqueue(struct workqueue_struct *wq)
if (WARN_ON(!wq_online))
return;

+#ifdef CONFIG_LOCKDEP
lock_map_acquire(&wq->lockdep_map);
lock_map_release(&wq->lockdep_map);
+#endif

mutex_lock(&wq->mutex);

@@ -3052,6 +3060,7 @@ static bool start_flush_work(struct work_struct *work, struct wq_barrier *barr,
insert_wq_barrier(pwq, barr, work, worker);
raw_spin_unlock_irq(&pool->lock);

+#ifdef CONFIG_LOCKDEP
/*
* Force a lock recursion deadlock when using flush_work() inside a
* single-threaded or rescuer equipped workqueue.
@@ -3066,6 +3075,8 @@ static bool start_flush_work(struct work_struct *work, struct wq_barrier *barr,
lock_map_acquire(&pwq->wq->lockdep_map);
lock_map_release(&pwq->wq->lockdep_map);
}
+#endif
+
rcu_read_unlock();
return true;
already_gone:
@@ -3084,10 +3095,12 @@ static bool __flush_work(struct work_struct *work, bool from_cancel)
if (WARN_ON(!work->func))
return false;

+#ifdef CONFIG_LOCKDEP
if (!from_cancel) {
lock_map_acquire(&work->lockdep_map);
lock_map_release(&work->lockdep_map);
}
+#endif

if (start_flush_work(work, &barr, from_cancel)) {
wait_for_completion(&barr.done);
--
2.35.0.263.gb82422642f-goog


2022-02-09 11:30:07

by Namhyung Kim

Subject: [PATCH 01/12] locking: Pass correct outer wait type info

In lockdep_init_map_waits(), the given outer argument was not passed
to lockdep_init_map_type(); LD_WAIT_INV was used unconditionally
instead. It looks like a copy-and-paste bug, so let's fix it.

Signed-off-by: Namhyung Kim <[email protected]>
---
include/linux/lockdep.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/include/linux/lockdep.h b/include/linux/lockdep.h
index 467b94257105..0cc2b338a006 100644
--- a/include/linux/lockdep.h
+++ b/include/linux/lockdep.h
@@ -192,7 +192,7 @@ static inline void
lockdep_init_map_waits(struct lockdep_map *lock, const char *name,
struct lock_class_key *key, int subclass, u8 inner, u8 outer)
{
- lockdep_init_map_type(lock, name, key, subclass, inner, LD_WAIT_INV, LD_LOCK_NORMAL);
+ lockdep_init_map_type(lock, name, key, subclass, inner, outer, LD_LOCK_NORMAL);
}

static inline void
--
2.35.0.263.gb82422642f-goog


2022-02-09 11:30:14

by Namhyung Kim

Subject: [PATCH 07/12] locking: Introduce CONFIG_LOCK_INFO

This is preparatory work to separate the tracepoints from lockdep.
I'd like to keep minimal lock information (the name, for now) in the
lockdep_map structure to be used by the tracepoints.

To make the work easier, CONFIG_LOCK_INFO is added to indicate that
the lock info needs to be saved, and existing code that uses the lock
information is converted to depend on it rather than on CONFIG_LOCKDEP
and/or CONFIG_DEBUG_LOCK_ALLOC directly. Future users of the lock
information should select it too.

Signed-off-by: Namhyung Kim <[email protected]>
---
drivers/gpu/drm/drm_connector.c | 2 +-
drivers/gpu/drm/i915/i915_sw_fence.h | 2 +-
drivers/gpu/drm/i915/selftests/lib_sw_fence.h | 2 +-
drivers/net/wireless/intel/iwlwifi/iwl-trans.c | 4 ++--
drivers/net/wireless/intel/iwlwifi/iwl-trans.h | 2 +-
drivers/tty/tty_ldsem.c | 2 +-
fs/btrfs/disk-io.c | 2 +-
fs/btrfs/disk-io.h | 2 +-
fs/cifs/connect.c | 2 +-
fs/kernfs/file.c | 2 +-
include/linux/completion.h | 2 +-
include/linux/jbd2.h | 2 +-
include/linux/kernfs.h | 2 +-
include/linux/kthread.h | 2 +-
include/linux/local_lock_internal.h | 8 ++++----
include/linux/mmu_notifier.h | 2 +-
include/linux/mutex.h | 8 ++++----
include/linux/percpu-rwsem.h | 4 ++--
include/linux/regmap.h | 4 ++--
include/linux/rtmutex.h | 6 +++---
include/linux/rwlock_api_smp.h | 4 ++--
include/linux/rwlock_rt.h | 4 ++--
include/linux/rwlock_types.h | 6 +++---
include/linux/rwsem.h | 6 +++---
include/linux/seqlock.h | 4 ++--
include/linux/spinlock_api_smp.h | 4 ++--
include/linux/spinlock_rt.h | 4 ++--
include/linux/spinlock_types.h | 4 ++--
include/linux/spinlock_types_raw.h | 4 ++--
include/linux/swait.h | 2 +-
include/linux/tty_ldisc.h | 2 +-
include/linux/wait.h | 2 +-
include/linux/ww_mutex.h | 6 +++---
include/media/v4l2-ctrls.h | 2 +-
include/net/sock.h | 2 +-
kernel/locking/mutex-debug.c | 2 +-
kernel/locking/mutex.c | 16 ++++++++--------
kernel/locking/percpu-rwsem.c | 2 +-
kernel/locking/rtmutex_api.c | 10 +++++-----
kernel/locking/rwsem.c | 4 ++--
kernel/locking/spinlock.c | 2 +-
kernel/locking/spinlock_debug.c | 4 ++--
kernel/locking/spinlock_rt.c | 8 ++++----
kernel/locking/ww_rt_mutex.c | 2 +-
kernel/printk/printk.c | 4 ++--
lib/Kconfig.debug | 5 +++++
mm/memcontrol.c | 2 +-
mm/mmu_notifier.c | 2 +-
net/core/dev.c | 2 +-
net/sunrpc/svcsock.c | 2 +-
net/sunrpc/xprtsock.c | 2 +-
51 files changed, 96 insertions(+), 91 deletions(-)

diff --git a/drivers/gpu/drm/drm_connector.c b/drivers/gpu/drm/drm_connector.c
index a50c82bc2b2f..94931b32a491 100644
--- a/drivers/gpu/drm/drm_connector.c
+++ b/drivers/gpu/drm/drm_connector.c
@@ -676,7 +676,7 @@ const char *drm_get_connector_force_name(enum drm_connector_force force)
}
}

-#ifdef CONFIG_LOCKDEP
+#ifdef CONFIG_LOCK_INFO
static struct lockdep_map connector_list_iter_dep_map = {
.name = "drm_connector_list_iter"
};
diff --git a/drivers/gpu/drm/i915/i915_sw_fence.h b/drivers/gpu/drm/i915/i915_sw_fence.h
index a7c603bc1b01..8c05d161a069 100644
--- a/drivers/gpu/drm/i915/i915_sw_fence.h
+++ b/drivers/gpu/drm/i915/i915_sw_fence.h
@@ -43,7 +43,7 @@ void __i915_sw_fence_init(struct i915_sw_fence *fence,
i915_sw_fence_notify_t fn,
const char *name,
struct lock_class_key *key);
-#ifdef CONFIG_LOCKDEP
+#ifdef CONFIG_LOCK_INFO
#define i915_sw_fence_init(fence, fn) \
do { \
static struct lock_class_key __key; \
diff --git a/drivers/gpu/drm/i915/selftests/lib_sw_fence.h b/drivers/gpu/drm/i915/selftests/lib_sw_fence.h
index e54d6bc23dc3..ad7de5187830 100644
--- a/drivers/gpu/drm/i915/selftests/lib_sw_fence.h
+++ b/drivers/gpu/drm/i915/selftests/lib_sw_fence.h
@@ -12,7 +12,7 @@

#include "../i915_sw_fence.h"

-#ifdef CONFIG_LOCKDEP
+#ifdef CONFIG_LOCK_INFO
#define onstack_fence_init(fence) \
do { \
static struct lock_class_key __key; \
diff --git a/drivers/net/wireless/intel/iwlwifi/iwl-trans.c b/drivers/net/wireless/intel/iwlwifi/iwl-trans.c
index 9236f9106826..c0e7038a939f 100644
--- a/drivers/net/wireless/intel/iwlwifi/iwl-trans.c
+++ b/drivers/net/wireless/intel/iwlwifi/iwl-trans.c
@@ -21,7 +21,7 @@ struct iwl_trans *iwl_trans_alloc(unsigned int priv_size,
const struct iwl_cfg_trans_params *cfg_trans)
{
struct iwl_trans *trans;
-#ifdef CONFIG_LOCKDEP
+#ifdef CONFIG_LOCK_INFO
static struct lock_class_key __key;
#endif

@@ -31,7 +31,7 @@ struct iwl_trans *iwl_trans_alloc(unsigned int priv_size,

trans->trans_cfg = cfg_trans;

-#ifdef CONFIG_LOCKDEP
+#ifdef CONFIG_LOCK_INFO
lockdep_init_map(&trans->sync_cmd_lockdep_map, "sync_cmd_lockdep_map",
&__key, 0);
#endif
diff --git a/drivers/net/wireless/intel/iwlwifi/iwl-trans.h b/drivers/net/wireless/intel/iwlwifi/iwl-trans.h
index 1bcaa3598785..47ef1e852d85 100644
--- a/drivers/net/wireless/intel/iwlwifi/iwl-trans.h
+++ b/drivers/net/wireless/intel/iwlwifi/iwl-trans.h
@@ -1011,7 +1011,7 @@ struct iwl_trans {

struct dentry *dbgfs_dir;

-#ifdef CONFIG_LOCKDEP
+#ifdef CONFIG_LOCK_INFO
struct lockdep_map sync_cmd_lockdep_map;
#endif

diff --git a/drivers/tty/tty_ldsem.c b/drivers/tty/tty_ldsem.c
index 3be428c16260..87c44f3f0c27 100644
--- a/drivers/tty/tty_ldsem.c
+++ b/drivers/tty/tty_ldsem.c
@@ -57,7 +57,7 @@ struct ldsem_waiter {
void __init_ldsem(struct ld_semaphore *sem, const char *name,
struct lock_class_key *key)
{
-#ifdef CONFIG_DEBUG_LOCK_ALLOC
+#ifdef CONFIG_LOCK_INFO
/*
* Make sure we are not reinitializing a held semaphore:
*/
diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
index be41e35bee92..2a3a257ced49 100644
--- a/fs/btrfs/disk-io.c
+++ b/fs/btrfs/disk-io.c
@@ -144,7 +144,7 @@ struct async_submit_bio {
* same as our lockdep setup here. If BTRFS_MAX_LEVEL changes, this code
* needs update as well.
*/
-#ifdef CONFIG_DEBUG_LOCK_ALLOC
+#ifdef CONFIG_LOCK_INFO
# if BTRFS_MAX_LEVEL != 8
# error
# endif
diff --git a/fs/btrfs/disk-io.h b/fs/btrfs/disk-io.h
index 5e8bef4b7563..31ff5e95cb92 100644
--- a/fs/btrfs/disk-io.h
+++ b/fs/btrfs/disk-io.h
@@ -148,7 +148,7 @@ int btrfs_init_root_free_objectid(struct btrfs_root *root);
int __init btrfs_end_io_wq_init(void);
void __cold btrfs_end_io_wq_exit(void);

-#ifdef CONFIG_DEBUG_LOCK_ALLOC
+#ifdef CONFIG_LOCK_INFO
void btrfs_set_buffer_lockdep_class(u64 objectid,
struct extent_buffer *eb, int level);
#else
diff --git a/fs/cifs/connect.c b/fs/cifs/connect.c
index 11a22a30ee14..b117027019b2 100644
--- a/fs/cifs/connect.c
+++ b/fs/cifs/connect.c
@@ -2573,7 +2573,7 @@ cifs_match_super(struct super_block *sb, void *data)
return rc;
}

-#ifdef CONFIG_DEBUG_LOCK_ALLOC
+#ifdef CONFIG_LOCK_INFO
static struct lock_class_key cifs_key[2];
static struct lock_class_key cifs_slock_key[2];

diff --git a/fs/kernfs/file.c b/fs/kernfs/file.c
index 9414a7a60a9f..5cb58ec61ba8 100644
--- a/fs/kernfs/file.c
+++ b/fs/kernfs/file.c
@@ -994,7 +994,7 @@ struct kernfs_node *__kernfs_create_file(struct kernfs_node *parent,
kn->ns = ns;
kn->priv = priv;

-#ifdef CONFIG_DEBUG_LOCK_ALLOC
+#ifdef CONFIG_LOCK_INFO
if (key) {
lockdep_init_map(&kn->dep_map, "kn->active", key, 0);
kn->flags |= KERNFS_LOCKDEP;
diff --git a/include/linux/completion.h b/include/linux/completion.h
index 51d9ab079629..b6c408e62291 100644
--- a/include/linux/completion.h
+++ b/include/linux/completion.h
@@ -64,7 +64,7 @@ static inline void complete_release(struct completion *x) {}
* This macro declares and initializes a completion structure on the kernel
* stack.
*/
-#ifdef CONFIG_LOCKDEP
+#ifdef CONFIG_LOCK_INFO
# define DECLARE_COMPLETION_ONSTACK(work) \
struct completion work = COMPLETION_INITIALIZER_ONSTACK(work)
# define DECLARE_COMPLETION_ONSTACK_MAP(work, map) \
diff --git a/include/linux/jbd2.h b/include/linux/jbd2.h
index fd933c45281a..990a87ff8ab0 100644
--- a/include/linux/jbd2.h
+++ b/include/linux/jbd2.h
@@ -1275,7 +1275,7 @@ struct journal_s
*/
__u32 j_csum_seed;

-#ifdef CONFIG_DEBUG_LOCK_ALLOC
+#ifdef CONFIG_LOCK_INFO
/**
* @j_trans_commit_map:
*
diff --git a/include/linux/kernfs.h b/include/linux/kernfs.h
index 861c4f0f8a29..48c5c02395cf 100644
--- a/include/linux/kernfs.h
+++ b/include/linux/kernfs.h
@@ -131,7 +131,7 @@ struct kernfs_elem_attr {
struct kernfs_node {
atomic_t count;
atomic_t active;
-#ifdef CONFIG_DEBUG_LOCK_ALLOC
+#ifdef CONFIG_LOCK_INFO
struct lockdep_map dep_map;
#endif
/*
diff --git a/include/linux/kthread.h b/include/linux/kthread.h
index 3df4ea04716f..d0d5ca007c7a 100644
--- a/include/linux/kthread.h
+++ b/include/linux/kthread.h
@@ -172,7 +172,7 @@ struct kthread_delayed_work {
* kthread_worker.lock needs its own lockdep class key when defined on
* stack with lockdep enabled. Use the following macros in such cases.
*/
-#ifdef CONFIG_LOCKDEP
+#ifdef CONFIG_LOCK_INFO
# define KTHREAD_WORKER_INIT_ONSTACK(worker) \
({ kthread_init_worker(&worker); worker; })
# define DEFINE_KTHREAD_WORKER_ONSTACK(worker) \
diff --git a/include/linux/local_lock_internal.h b/include/linux/local_lock_internal.h
index 975e33b793a7..56f03f588aa7 100644
--- a/include/linux/local_lock_internal.h
+++ b/include/linux/local_lock_internal.h
@@ -9,13 +9,13 @@
#ifndef CONFIG_PREEMPT_RT

typedef struct {
-#ifdef CONFIG_DEBUG_LOCK_ALLOC
+#ifdef CONFIG_LOCK_INFO
struct lockdep_map dep_map;
struct task_struct *owner;
#endif
} local_lock_t;

-#ifdef CONFIG_DEBUG_LOCK_ALLOC
+#ifdef CONFIG_LOCK_INFO
# define LOCAL_LOCK_DEBUG_INIT(lockname) \
.dep_map = { \
.name = #lockname, \
@@ -42,12 +42,12 @@ static inline void local_lock_debug_init(local_lock_t *l)
{
l->owner = NULL;
}
-#else /* CONFIG_DEBUG_LOCK_ALLOC */
+#else /* CONFIG_LOCK_INFO */
# define LOCAL_LOCK_DEBUG_INIT(lockname)
static inline void local_lock_acquire(local_lock_t *l) { }
static inline void local_lock_release(local_lock_t *l) { }
static inline void local_lock_debug_init(local_lock_t *l) { }
-#endif /* !CONFIG_DEBUG_LOCK_ALLOC */
+#endif /* !CONFIG_LOCK_INFO */

#define INIT_LOCAL_LOCK(lockname) { LOCAL_LOCK_DEBUG_INIT(lockname) }

diff --git a/include/linux/mmu_notifier.h b/include/linux/mmu_notifier.h
index 45fc2c81e370..ff88b1f5b173 100644
--- a/include/linux/mmu_notifier.h
+++ b/include/linux/mmu_notifier.h
@@ -264,7 +264,7 @@ struct mmu_interval_notifier {

#ifdef CONFIG_MMU_NOTIFIER

-#ifdef CONFIG_LOCKDEP
+#ifdef CONFIG_LOCK_INFO
extern struct lockdep_map __mmu_notifier_invalidate_range_start_map;
#endif

diff --git a/include/linux/mutex.h b/include/linux/mutex.h
index 8f226d460f51..369c1abbf3d0 100644
--- a/include/linux/mutex.h
+++ b/include/linux/mutex.h
@@ -20,7 +20,7 @@
#include <linux/osq_lock.h>
#include <linux/debug_locks.h>

-#ifdef CONFIG_DEBUG_LOCK_ALLOC
+#ifdef CONFIG_LOCK_INFO
# define __DEP_MAP_MUTEX_INITIALIZER(lockname) \
, .dep_map = { \
.name = #lockname, \
@@ -70,7 +70,7 @@ struct mutex {
#ifdef CONFIG_DEBUG_MUTEXES
void *magic;
#endif
-#ifdef CONFIG_DEBUG_LOCK_ALLOC
+#ifdef CONFIG_LOCK_INFO
struct lockdep_map dep_map;
#endif
};
@@ -134,7 +134,7 @@ extern bool mutex_is_locked(struct mutex *lock);

struct mutex {
struct rt_mutex_base rtmutex;
-#ifdef CONFIG_DEBUG_LOCK_ALLOC
+#ifdef CONFIG_LOCK_INFO
struct lockdep_map dep_map;
#endif
};
@@ -174,7 +174,7 @@ do { \
* See kernel/locking/mutex.c for detailed documentation of these APIs.
* Also see Documentation/locking/mutex-design.rst.
*/
-#ifdef CONFIG_DEBUG_LOCK_ALLOC
+#ifdef CONFIG_LOCK_INFO
extern void mutex_lock_nested(struct mutex *lock, unsigned int subclass);
extern void _mutex_lock_nest_lock(struct mutex *lock, struct lockdep_map *nest_lock);

diff --git a/include/linux/percpu-rwsem.h b/include/linux/percpu-rwsem.h
index 5fda40f97fe9..9d2427579d9a 100644
--- a/include/linux/percpu-rwsem.h
+++ b/include/linux/percpu-rwsem.h
@@ -15,12 +15,12 @@ struct percpu_rw_semaphore {
struct rcuwait writer;
wait_queue_head_t waiters;
atomic_t block;
-#ifdef CONFIG_DEBUG_LOCK_ALLOC
+#ifdef CONFIG_LOCK_INFO
struct lockdep_map dep_map;
#endif
};

-#ifdef CONFIG_DEBUG_LOCK_ALLOC
+#ifdef CONFIG_LOCK_INFO
#define __PERCPU_RWSEM_DEP_MAP_INIT(lockname) .dep_map = { .name = #lockname },
#else
#define __PERCPU_RWSEM_DEP_MAP_INIT(lockname)
diff --git a/include/linux/regmap.h b/include/linux/regmap.h
index 22652e5fbc38..174493a0512a 100644
--- a/include/linux/regmap.h
+++ b/include/linux/regmap.h
@@ -666,12 +666,12 @@ struct regmap *__devm_regmap_init_spi_avmm(struct spi_device *spi,
const char *lock_name);
/*
* Wrapper for regmap_init macros to include a unique lockdep key and name
- * for each call. No-op if CONFIG_LOCKDEP is not set.
+ * for each call. No-op if CONFIG_LOCK_INFO is not set.
*
* @fn: Real function to call (in the form __[*_]regmap_init[_*])
* @name: Config variable name (#config in the calling macro)
**/
-#ifdef CONFIG_LOCKDEP
+#ifdef CONFIG_LOCK_INFO
#define __regmap_lockdep_wrapper(fn, name, ...) \
( \
({ \
diff --git a/include/linux/rtmutex.h b/include/linux/rtmutex.h
index 7d049883a08a..8eafdd6dcf35 100644
--- a/include/linux/rtmutex.h
+++ b/include/linux/rtmutex.h
@@ -56,7 +56,7 @@ extern void rt_mutex_base_init(struct rt_mutex_base *rtb);
*/
struct rt_mutex {
struct rt_mutex_base rtmutex;
-#ifdef CONFIG_DEBUG_LOCK_ALLOC
+#ifdef CONFIG_LOCK_INFO
struct lockdep_map dep_map;
#endif
};
@@ -76,7 +76,7 @@ do { \
__rt_mutex_init(mutex, __func__, &__key); \
} while (0)

-#ifdef CONFIG_DEBUG_LOCK_ALLOC
+#ifdef CONFIG_LOCK_INFO
#define __DEP_MAP_RT_MUTEX_INITIALIZER(mutexname) \
.dep_map = { \
.name = #mutexname, \
@@ -97,7 +97,7 @@ do { \

extern void __rt_mutex_init(struct rt_mutex *lock, const char *name, struct lock_class_key *key);

-#ifdef CONFIG_DEBUG_LOCK_ALLOC
+#ifdef CONFIG_LOCK_INFO
extern void rt_mutex_lock_nested(struct rt_mutex *lock, unsigned int subclass);
extern void _rt_mutex_lock_nest_lock(struct rt_mutex *lock, struct lockdep_map *nest_lock);
#define rt_mutex_lock(lock) rt_mutex_lock_nested(lock, 0)
diff --git a/include/linux/rwlock_api_smp.h b/include/linux/rwlock_api_smp.h
index dceb0a59b692..7fb42c921669 100644
--- a/include/linux/rwlock_api_smp.h
+++ b/include/linux/rwlock_api_smp.h
@@ -142,7 +142,7 @@ static inline int __raw_write_trylock(rwlock_t *lock)
* even on CONFIG_PREEMPT, because lockdep assumes that interrupts are
* not re-enabled during lock-acquire (which the preempt-spin-ops do):
*/
-#if !defined(CONFIG_GENERIC_LOCKBREAK) || defined(CONFIG_DEBUG_LOCK_ALLOC)
+#if !defined(CONFIG_GENERIC_LOCKBREAK) || defined(CONFIG_LOCK_INFO)

static inline void __raw_read_lock(rwlock_t *lock)
{
@@ -217,7 +217,7 @@ static inline void __raw_write_lock_nested(rwlock_t *lock, int subclass)
LOCK_CONTENDED(lock, do_raw_write_trylock, do_raw_write_lock);
}

-#endif /* !CONFIG_GENERIC_LOCKBREAK || CONFIG_DEBUG_LOCK_ALLOC */
+#endif /* !CONFIG_GENERIC_LOCKBREAK || CONFIG_LOCK_INFO */

static inline void __raw_write_unlock(rwlock_t *lock)
{
diff --git a/include/linux/rwlock_rt.h b/include/linux/rwlock_rt.h
index 8544ff05e594..140c418c51b5 100644
--- a/include/linux/rwlock_rt.h
+++ b/include/linux/rwlock_rt.h
@@ -6,7 +6,7 @@
#error Do not #include directly. Use <linux/spinlock.h>.
#endif

-#ifdef CONFIG_DEBUG_LOCK_ALLOC
+#ifdef CONFIG_LOCK_INFO
extern void __rt_rwlock_init(rwlock_t *rwlock, const char *name,
struct lock_class_key *key);
#else
@@ -84,7 +84,7 @@ static __always_inline void write_lock(rwlock_t *rwlock)
rt_write_lock(rwlock);
}

-#ifdef CONFIG_DEBUG_LOCK_ALLOC
+#ifdef CONFIG_LOCK_INFO
static __always_inline void write_lock_nested(rwlock_t *rwlock, int subclass)
{
rt_write_lock_nested(rwlock, subclass);
diff --git a/include/linux/rwlock_types.h b/include/linux/rwlock_types.h
index 1948442e7750..3e621bfd7cd0 100644
--- a/include/linux/rwlock_types.h
+++ b/include/linux/rwlock_types.h
@@ -5,7 +5,7 @@
# error "Do not include directly, include spinlock_types.h"
#endif

-#ifdef CONFIG_DEBUG_LOCK_ALLOC
+#ifdef CONFIG_LOCK_INFO
# define RW_DEP_MAP_INIT(lockname) \
.dep_map = { \
.name = #lockname, \
@@ -28,7 +28,7 @@ typedef struct {
unsigned int magic, owner_cpu;
void *owner;
#endif
-#ifdef CONFIG_DEBUG_LOCK_ALLOC
+#ifdef CONFIG_LOCK_INFO
struct lockdep_map dep_map;
#endif
} rwlock_t;
@@ -57,7 +57,7 @@ typedef struct {
typedef struct {
struct rwbase_rt rwbase;
atomic_t readers;
-#ifdef CONFIG_DEBUG_LOCK_ALLOC
+#ifdef CONFIG_LOCK_INFO
struct lockdep_map dep_map;
#endif
} rwlock_t;
diff --git a/include/linux/rwsem.h b/include/linux/rwsem.h
index f9348769e558..c488485861f5 100644
--- a/include/linux/rwsem.h
+++ b/include/linux/rwsem.h
@@ -16,7 +16,7 @@
#include <linux/atomic.h>
#include <linux/err.h>

-#ifdef CONFIG_DEBUG_LOCK_ALLOC
+#ifdef CONFIG_LOCK_INFO
# define __RWSEM_DEP_MAP_INIT(lockname) \
.dep_map = { \
.name = #lockname, \
@@ -60,7 +60,7 @@ struct rw_semaphore {
#ifdef CONFIG_DEBUG_RWSEMS
void *magic;
#endif
-#ifdef CONFIG_DEBUG_LOCK_ALLOC
+#ifdef CONFIG_LOCK_INFO
struct lockdep_map dep_map;
#endif
};
@@ -127,7 +127,7 @@ static inline int rwsem_is_contended(struct rw_semaphore *sem)

struct rw_semaphore {
struct rwbase_rt rwbase;
-#ifdef CONFIG_DEBUG_LOCK_ALLOC
+#ifdef CONFIG_LOCK_INFO
struct lockdep_map dep_map;
#endif
};
diff --git a/include/linux/seqlock.h b/include/linux/seqlock.h
index 37ded6b8fee6..c673f807965e 100644
--- a/include/linux/seqlock.h
+++ b/include/linux/seqlock.h
@@ -64,7 +64,7 @@
*/
typedef struct seqcount {
unsigned sequence;
-#ifdef CONFIG_DEBUG_LOCK_ALLOC
+#ifdef CONFIG_LOCK_INFO
struct lockdep_map dep_map;
#endif
} seqcount_t;
@@ -79,7 +79,7 @@ static inline void __seqcount_init(seqcount_t *s, const char *name,
s->sequence = 0;
}

-#ifdef CONFIG_DEBUG_LOCK_ALLOC
+#ifdef CONFIG_LOCK_INFO

# define SEQCOUNT_DEP_MAP_INIT(lockname) \
.dep_map = { .name = #lockname }
diff --git a/include/linux/spinlock_api_smp.h b/include/linux/spinlock_api_smp.h
index 51fa0dab68c4..94e5ddbcc2d1 100644
--- a/include/linux/spinlock_api_smp.h
+++ b/include/linux/spinlock_api_smp.h
@@ -99,7 +99,7 @@ static inline int __raw_spin_trylock(raw_spinlock_t *lock)
* even on CONFIG_PREEMPTION, because lockdep assumes that interrupts are
* not re-enabled during lock-acquire (which the preempt-spin-ops do):
*/
-#if !defined(CONFIG_GENERIC_LOCKBREAK) || defined(CONFIG_DEBUG_LOCK_ALLOC)
+#if !defined(CONFIG_GENERIC_LOCKBREAK) || defined(CONFIG_LOCK_INFO)

static inline unsigned long __raw_spin_lock_irqsave(raw_spinlock_t *lock)
{
@@ -134,7 +134,7 @@ static inline void __raw_spin_lock(raw_spinlock_t *lock)
LOCK_CONTENDED(lock, do_raw_spin_trylock, do_raw_spin_lock);
}

-#endif /* !CONFIG_GENERIC_LOCKBREAK || CONFIG_DEBUG_LOCK_ALLOC */
+#endif /* !CONFIG_GENERIC_LOCKBREAK || CONFIG_LOCK_INFO */

static inline void __raw_spin_unlock(raw_spinlock_t *lock)
{
diff --git a/include/linux/spinlock_rt.h b/include/linux/spinlock_rt.h
index 835aedaf68ac..2605668e0fdd 100644
--- a/include/linux/spinlock_rt.h
+++ b/include/linux/spinlock_rt.h
@@ -6,7 +6,7 @@
#error Do not include directly. Use spinlock.h
#endif

-#ifdef CONFIG_DEBUG_LOCK_ALLOC
+#ifdef CONFIG_LOCK_INFO
extern void __rt_spin_lock_init(spinlock_t *lock, const char *name,
struct lock_class_key *key, bool percpu);
#else
@@ -45,7 +45,7 @@ static __always_inline void spin_lock(spinlock_t *lock)
rt_spin_lock(lock);
}

-#ifdef CONFIG_LOCKDEP
+#ifdef CONFIG_LOCK_INFO
# define __spin_lock_nested(lock, subclass) \
rt_spin_lock_nested(lock, subclass)

diff --git a/include/linux/spinlock_types.h b/include/linux/spinlock_types.h
index 2dfa35ffec76..fb9e778a0ee5 100644
--- a/include/linux/spinlock_types.h
+++ b/include/linux/spinlock_types.h
@@ -18,7 +18,7 @@ typedef struct spinlock {
union {
struct raw_spinlock rlock;

-#ifdef CONFIG_DEBUG_LOCK_ALLOC
+#ifdef CONFIG_LOCK_INFO
# define LOCK_PADSIZE (offsetof(struct raw_spinlock, dep_map))
struct {
u8 __padding[LOCK_PADSIZE];
@@ -49,7 +49,7 @@ typedef struct spinlock {

typedef struct spinlock {
struct rt_mutex_base lock;
-#ifdef CONFIG_DEBUG_LOCK_ALLOC
+#ifdef CONFIG_LOCK_INFO
struct lockdep_map dep_map;
#endif
} spinlock_t;
diff --git a/include/linux/spinlock_types_raw.h b/include/linux/spinlock_types_raw.h
index 91cb36b65a17..564092a30cc4 100644
--- a/include/linux/spinlock_types_raw.h
+++ b/include/linux/spinlock_types_raw.h
@@ -17,7 +17,7 @@ typedef struct raw_spinlock {
unsigned int magic, owner_cpu;
void *owner;
#endif
-#ifdef CONFIG_DEBUG_LOCK_ALLOC
+#ifdef CONFIG_LOCK_INFO
struct lockdep_map dep_map;
#endif
} raw_spinlock_t;
@@ -26,7 +26,7 @@ typedef struct raw_spinlock {

#define SPINLOCK_OWNER_INIT ((void *)-1L)

-#ifdef CONFIG_DEBUG_LOCK_ALLOC
+#ifdef CONFIG_LOCK_INFO
# define RAW_SPIN_DEP_MAP_INIT(lockname) \
.dep_map = { \
.name = #lockname, \
diff --git a/include/linux/swait.h b/include/linux/swait.h
index 6a8c22b8c2a5..643c9fe68d63 100644
--- a/include/linux/swait.h
+++ b/include/linux/swait.h
@@ -75,7 +75,7 @@ extern void __init_swait_queue_head(struct swait_queue_head *q, const char *name
__init_swait_queue_head((q), #q, &__key); \
} while (0)

-#ifdef CONFIG_LOCKDEP
+#ifdef CONFIG_LOCK_INFO
# define __SWAIT_QUEUE_HEAD_INIT_ONSTACK(name) \
({ init_swait_queue_head(&name); name; })
# define DECLARE_SWAIT_QUEUE_HEAD_ONSTACK(name) \
diff --git a/include/linux/tty_ldisc.h b/include/linux/tty_ldisc.h
index e85002b56752..5af6fb3649ab 100644
--- a/include/linux/tty_ldisc.h
+++ b/include/linux/tty_ldisc.h
@@ -20,7 +20,7 @@ struct ld_semaphore {
unsigned int wait_readers;
struct list_head read_wait;
struct list_head write_wait;
-#ifdef CONFIG_DEBUG_LOCK_ALLOC
+#ifdef CONFIG_LOCK_INFO
struct lockdep_map dep_map;
#endif
};
diff --git a/include/linux/wait.h b/include/linux/wait.h
index 851e07da2583..aa811a05b070 100644
--- a/include/linux/wait.h
+++ b/include/linux/wait.h
@@ -70,7 +70,7 @@ extern void __init_waitqueue_head(struct wait_queue_head *wq_head, const char *n
__init_waitqueue_head((wq_head), #wq_head, &__key); \
} while (0)

-#ifdef CONFIG_LOCKDEP
+#ifdef CONFIG_LOCK_INFO
# define __WAIT_QUEUE_HEAD_INIT_ONSTACK(name) \
({ init_waitqueue_head(&name); name; })
# define DECLARE_WAIT_QUEUE_HEAD_ONSTACK(name) \
diff --git a/include/linux/ww_mutex.h b/include/linux/ww_mutex.h
index bb763085479a..12a6ad18176d 100644
--- a/include/linux/ww_mutex.h
+++ b/include/linux/ww_mutex.h
@@ -63,7 +63,7 @@ struct ww_acquire_ctx {
struct ww_class *ww_class;
void *contending_lock;
#endif
-#ifdef CONFIG_DEBUG_LOCK_ALLOC
+#ifdef CONFIG_LOCK_INFO
struct lockdep_map dep_map;
#endif
#ifdef CONFIG_DEBUG_WW_MUTEX_SLOWPATH
@@ -142,7 +142,7 @@ static inline void ww_acquire_init(struct ww_acquire_ctx *ctx,
ctx->done_acquire = 0;
ctx->contending_lock = NULL;
#endif
-#ifdef CONFIG_DEBUG_LOCK_ALLOC
+#ifdef CONFIG_LOCK_INFO
debug_check_no_locks_freed((void *)ctx, sizeof(*ctx));
lockdep_init_map(&ctx->dep_map, ww_class->acquire_name,
&ww_class->acquire_key, 0);
@@ -184,7 +184,7 @@ static inline void ww_acquire_done(struct ww_acquire_ctx *ctx)
*/
static inline void ww_acquire_fini(struct ww_acquire_ctx *ctx)
{
-#ifdef CONFIG_DEBUG_LOCK_ALLOC
+#ifdef CONFIG_LOCK_INFO
mutex_release(&ctx->dep_map, _THIS_IP_);
#endif
#ifdef DEBUG_WW_MUTEXES
diff --git a/include/media/v4l2-ctrls.h b/include/media/v4l2-ctrls.h
index b3ce438f1329..f2c30a4fe203 100644
--- a/include/media/v4l2-ctrls.h
+++ b/include/media/v4l2-ctrls.h
@@ -489,7 +489,7 @@ int v4l2_ctrl_handler_init_class(struct v4l2_ctrl_handler *hdl,
unsigned int nr_of_controls_hint,
struct lock_class_key *key, const char *name);

-#ifdef CONFIG_LOCKDEP
+#ifdef CONFIG_LOCK_INFO

/**
* v4l2_ctrl_handler_init - helper function to create a static struct
diff --git a/include/net/sock.h b/include/net/sock.h
index ff9b508d9c5f..e88c7de283ed 100644
--- a/include/net/sock.h
+++ b/include/net/sock.h
@@ -103,7 +103,7 @@ typedef struct {
* the slock as a lock variant (in addition to
* the slock itself):
*/
-#ifdef CONFIG_DEBUG_LOCK_ALLOC
+#ifdef CONFIG_LOCK_INFO
struct lockdep_map dep_map;
#endif
} socket_lock_t;
diff --git a/kernel/locking/mutex-debug.c b/kernel/locking/mutex-debug.c
index bc8abb8549d2..33fc3c06b714 100644
--- a/kernel/locking/mutex-debug.c
+++ b/kernel/locking/mutex-debug.c
@@ -79,7 +79,7 @@ void debug_mutex_unlock(struct mutex *lock)
void debug_mutex_init(struct mutex *lock, const char *name,
struct lock_class_key *key)
{
-#ifdef CONFIG_DEBUG_LOCK_ALLOC
+#ifdef CONFIG_LOCK_INFO
/*
* Make sure we are not reinitializing a held lock:
*/
diff --git a/kernel/locking/mutex.c b/kernel/locking/mutex.c
index 5e3585950ec8..8733b96ce20a 100644
--- a/kernel/locking/mutex.c
+++ b/kernel/locking/mutex.c
@@ -149,7 +149,7 @@ static inline bool __mutex_trylock(struct mutex *lock)
return !__mutex_trylock_common(lock, false);
}

-#ifndef CONFIG_DEBUG_LOCK_ALLOC
+#ifndef CONFIG_LOCK_INFO
/*
* Lockdep annotations are contained to the slow paths for simplicity.
* There is nothing that would stop spreading the lockdep annotations outwards
@@ -245,7 +245,7 @@ static void __mutex_handoff(struct mutex *lock, struct task_struct *task)
}
}

-#ifndef CONFIG_DEBUG_LOCK_ALLOC
+#ifndef CONFIG_LOCK_INFO
/*
* We split the mutex lock/unlock logic into separate fastpath and
* slowpath functions, to reduce the register pressure on the fastpath.
@@ -533,7 +533,7 @@ static noinline void __sched __mutex_unlock_slowpath(struct mutex *lock, unsigne
*/
void __sched mutex_unlock(struct mutex *lock)
{
-#ifndef CONFIG_DEBUG_LOCK_ALLOC
+#ifndef CONFIG_LOCK_INFO
if (__mutex_unlock_fast(lock))
return;
#endif
@@ -591,7 +591,7 @@ __mutex_lock_common(struct mutex *lock, unsigned int state, unsigned int subclas
if (ww_ctx->acquired == 0)
ww_ctx->wounded = 0;

-#ifdef CONFIG_DEBUG_LOCK_ALLOC
+#ifdef CONFIG_LOCK_INFO
nest_lock = &ww_ctx->dep_map;
#endif
}
@@ -778,7 +778,7 @@ int ww_mutex_trylock(struct ww_mutex *ww, struct ww_acquire_ctx *ww_ctx)
}
EXPORT_SYMBOL(ww_mutex_trylock);

-#ifdef CONFIG_DEBUG_LOCK_ALLOC
+#ifdef CONFIG_LOCK_INFO
void __sched
mutex_lock_nested(struct mutex *lock, unsigned int subclass)
{
@@ -937,7 +937,7 @@ static noinline void __sched __mutex_unlock_slowpath(struct mutex *lock, unsigne
wake_up_q(&wake_q);
}

-#ifndef CONFIG_DEBUG_LOCK_ALLOC
+#ifndef CONFIG_LOCK_INFO
/*
* Here come the less common (and hence less performance-critical) APIs:
* mutex_lock_interruptible() and mutex_trylock().
@@ -1078,7 +1078,7 @@ int __sched mutex_trylock(struct mutex *lock)
}
EXPORT_SYMBOL(mutex_trylock);

-#ifndef CONFIG_DEBUG_LOCK_ALLOC
+#ifndef CONFIG_LOCK_INFO
int __sched
ww_mutex_lock(struct ww_mutex *lock, struct ww_acquire_ctx *ctx)
{
@@ -1109,7 +1109,7 @@ ww_mutex_lock_interruptible(struct ww_mutex *lock, struct ww_acquire_ctx *ctx)
}
EXPORT_SYMBOL(ww_mutex_lock_interruptible);

-#endif /* !CONFIG_DEBUG_LOCK_ALLOC */
+#endif /* !CONFIG_LOCK_INFO */
#endif /* !CONFIG_PREEMPT_RT */

/**
diff --git a/kernel/locking/percpu-rwsem.c b/kernel/locking/percpu-rwsem.c
index 70a32a576f3f..98ff434a5f95 100644
--- a/kernel/locking/percpu-rwsem.c
+++ b/kernel/locking/percpu-rwsem.c
@@ -20,7 +20,7 @@ int __percpu_init_rwsem(struct percpu_rw_semaphore *sem,
rcuwait_init(&sem->writer);
init_waitqueue_head(&sem->waiters);
atomic_set(&sem->block, 0);
-#ifdef CONFIG_DEBUG_LOCK_ALLOC
+#ifdef CONFIG_LOCK_INFO
debug_check_no_locks_freed((void *)sem, sizeof(*sem));
lockdep_init_map(&sem->dep_map, name, key, 0);
#endif
diff --git a/kernel/locking/rtmutex_api.c b/kernel/locking/rtmutex_api.c
index 900220941caa..ce08dabf4f93 100644
--- a/kernel/locking/rtmutex_api.c
+++ b/kernel/locking/rtmutex_api.c
@@ -40,7 +40,7 @@ void rt_mutex_base_init(struct rt_mutex_base *rtb)
}
EXPORT_SYMBOL(rt_mutex_base_init);

-#ifdef CONFIG_DEBUG_LOCK_ALLOC
+#ifdef CONFIG_LOCK_INFO
/**
* rt_mutex_lock_nested - lock a rt_mutex
*
@@ -59,7 +59,7 @@ void __sched _rt_mutex_lock_nest_lock(struct rt_mutex *lock, struct lockdep_map
}
EXPORT_SYMBOL_GPL(_rt_mutex_lock_nest_lock);

-#else /* !CONFIG_DEBUG_LOCK_ALLOC */
+#else /* !CONFIG_LOCK_INFO */

/**
* rt_mutex_lock - lock a rt_mutex
@@ -517,7 +517,7 @@ static __always_inline int __mutex_lock_common(struct mutex *lock,
return ret;
}

-#ifdef CONFIG_DEBUG_LOCK_ALLOC
+#ifdef CONFIG_LOCK_INFO
void __sched mutex_lock_nested(struct mutex *lock, unsigned int subclass)
{
__mutex_lock_common(lock, TASK_UNINTERRUPTIBLE, subclass, NULL, _RET_IP_);
@@ -557,7 +557,7 @@ void __sched mutex_lock_io_nested(struct mutex *lock, unsigned int subclass)
}
EXPORT_SYMBOL_GPL(mutex_lock_io_nested);

-#else /* CONFIG_DEBUG_LOCK_ALLOC */
+#else /* CONFIG_LOCK_INFO */

void __sched mutex_lock(struct mutex *lock)
{
@@ -585,7 +585,7 @@ void __sched mutex_lock_io(struct mutex *lock)
io_schedule_finish(token);
}
EXPORT_SYMBOL(mutex_lock_io);
-#endif /* !CONFIG_DEBUG_LOCK_ALLOC */
+#endif /* !CONFIG_LOCK_INFO */

int __sched mutex_trylock(struct mutex *lock)
{
diff --git a/kernel/locking/rwsem.c b/kernel/locking/rwsem.c
index 69aba4abe104..8da694940165 100644
--- a/kernel/locking/rwsem.c
+++ b/kernel/locking/rwsem.c
@@ -304,7 +304,7 @@ rwsem_owner_flags(struct rw_semaphore *sem, unsigned long *pflags)
void __init_rwsem(struct rw_semaphore *sem, const char *name,
struct lock_class_key *key)
{
-#ifdef CONFIG_DEBUG_LOCK_ALLOC
+#ifdef CONFIG_LOCK_INFO
/*
* Make sure we are not reinitializing a held semaphore:
*/
@@ -1378,7 +1378,7 @@ void __init_rwsem(struct rw_semaphore *sem, const char *name,
{
init_rwbase_rt(&(sem)->rwbase);

-#ifdef CONFIG_DEBUG_LOCK_ALLOC
+#ifdef CONFIG_LOCK_INFO
debug_check_no_locks_freed((void *)sem, sizeof(*sem));
lockdep_init_map_wait(&sem->dep_map, name, key, 0, LD_WAIT_SLEEP);
#endif
diff --git a/kernel/locking/spinlock.c b/kernel/locking/spinlock.c
index 7f49baaa4979..e814ca0b76c3 100644
--- a/kernel/locking/spinlock.c
+++ b/kernel/locking/spinlock.c
@@ -34,7 +34,7 @@ EXPORT_PER_CPU_SYMBOL(__mmiowb_state);
* even on CONFIG_PREEMPT, because lockdep assumes that interrupts are
* not re-enabled during lock-acquire (which the preempt-spin-ops do):
*/
-#if !defined(CONFIG_GENERIC_LOCKBREAK) || defined(CONFIG_DEBUG_LOCK_ALLOC)
+#if !defined(CONFIG_GENERIC_LOCKBREAK) || defined(CONFIG_LOCK_INFO)
/*
* The __lock_function inlines are taken from
* spinlock : include/linux/spinlock_api_smp.h
diff --git a/kernel/locking/spinlock_debug.c b/kernel/locking/spinlock_debug.c
index 14235671a1a7..011f66515693 100644
--- a/kernel/locking/spinlock_debug.c
+++ b/kernel/locking/spinlock_debug.c
@@ -16,7 +16,7 @@
void __raw_spin_lock_init(raw_spinlock_t *lock, const char *name,
struct lock_class_key *key, short inner)
{
-#ifdef CONFIG_DEBUG_LOCK_ALLOC
+#ifdef CONFIG_LOCK_INFO
/*
* Make sure we are not reinitializing a held lock:
*/
@@ -35,7 +35,7 @@ EXPORT_SYMBOL(__raw_spin_lock_init);
void __rwlock_init(rwlock_t *lock, const char *name,
struct lock_class_key *key)
{
-#ifdef CONFIG_DEBUG_LOCK_ALLOC
+#ifdef CONFIG_LOCK_INFO
/*
* Make sure we are not reinitializing a held lock:
*/
diff --git a/kernel/locking/spinlock_rt.c b/kernel/locking/spinlock_rt.c
index 48a19ed8486d..22cd3eb36c98 100644
--- a/kernel/locking/spinlock_rt.c
+++ b/kernel/locking/spinlock_rt.c
@@ -56,7 +56,7 @@ void __sched rt_spin_lock(spinlock_t *lock)
}
EXPORT_SYMBOL(rt_spin_lock);

-#ifdef CONFIG_DEBUG_LOCK_ALLOC
+#ifdef CONFIG_LOCK_INFO
void __sched rt_spin_lock_nested(spinlock_t *lock, int subclass)
{
spin_acquire(&lock->dep_map, subclass, 0, _RET_IP_);
@@ -129,7 +129,7 @@ int __sched rt_spin_trylock_bh(spinlock_t *lock)
}
EXPORT_SYMBOL(rt_spin_trylock_bh);

-#ifdef CONFIG_DEBUG_LOCK_ALLOC
+#ifdef CONFIG_LOCK_INFO
void __rt_spin_lock_init(spinlock_t *lock, const char *name,
struct lock_class_key *key, bool percpu)
{
@@ -239,7 +239,7 @@ void __sched rt_write_lock(rwlock_t *rwlock)
}
EXPORT_SYMBOL(rt_write_lock);

-#ifdef CONFIG_DEBUG_LOCK_ALLOC
+#ifdef CONFIG_LOCK_INFO
void __sched rt_write_lock_nested(rwlock_t *rwlock, int subclass)
{
rtlock_might_resched();
@@ -269,7 +269,7 @@ void __sched rt_write_unlock(rwlock_t *rwlock)
}
EXPORT_SYMBOL(rt_write_unlock);

-#ifdef CONFIG_DEBUG_LOCK_ALLOC
+#ifdef CONFIG_LOCK_INFO
void __rt_rwlock_init(rwlock_t *rwlock, const char *name,
struct lock_class_key *key)
{
diff --git a/kernel/locking/ww_rt_mutex.c b/kernel/locking/ww_rt_mutex.c
index d1473c624105..aecb2e4e5f07 100644
--- a/kernel/locking/ww_rt_mutex.c
+++ b/kernel/locking/ww_rt_mutex.c
@@ -56,7 +56,7 @@ __ww_rt_mutex_lock(struct ww_mutex *lock, struct ww_acquire_ctx *ww_ctx,
if (ww_ctx->acquired == 0)
ww_ctx->wounded = 0;

-#ifdef CONFIG_DEBUG_LOCK_ALLOC
+#ifdef CONFIG_LOCK_INFO
nest_lock = &ww_ctx->dep_map;
#endif
}
diff --git a/kernel/printk/printk.c b/kernel/printk/printk.c
index 82abfaf3c2aa..e45664e0ca30 100644
--- a/kernel/printk/printk.c
+++ b/kernel/printk/printk.c
@@ -93,7 +93,7 @@ EXPORT_SYMBOL_GPL(console_drivers);
*/
int __read_mostly suppress_printk;

-#ifdef CONFIG_LOCKDEP
+#ifdef CONFIG_LOCK_INFO
static struct lockdep_map console_lock_dep_map = {
.name = "console_lock"
};
@@ -1752,7 +1752,7 @@ SYSCALL_DEFINE3(syslog, int, type, char __user *, buf, int, len)
* They allow to pass console_lock to another printk() call using a busy wait.
*/

-#ifdef CONFIG_LOCKDEP
+#ifdef CONFIG_LOCK_INFO
static struct lockdep_map console_owner_dep_map = {
.name = "console_owner"
};
diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
index 14b89aa37c5c..5f64ffe23c35 100644
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -1394,6 +1394,7 @@ config LOCKDEP
select STACKTRACE
select KALLSYMS
select KALLSYMS_ALL
+ select LOCK_INFO

config LOCKDEP_SMALL
bool
@@ -1447,6 +1448,10 @@ config DEBUG_LOCKDEP
additional runtime checks to debug itself, at the price
of more runtime overhead.

+config LOCK_INFO
+ bool
+ default n
+
config DEBUG_ATOMIC_SLEEP
bool "Sleep inside atomic section checking"
select PREEMPT_COUNT
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 09d342c7cbd0..d4ecfdd5eb8f 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -1671,7 +1671,7 @@ static int mem_cgroup_soft_reclaim(struct mem_cgroup *root_memcg,
return total;
}

-#ifdef CONFIG_LOCKDEP
+#ifdef CONFIG_LOCK_INFO
static struct lockdep_map memcg_oom_lock_dep_map = {
.name = "memcg_oom_lock",
};
diff --git a/mm/mmu_notifier.c b/mm/mmu_notifier.c
index 459d195d2ff6..26da67834cba 100644
--- a/mm/mmu_notifier.c
+++ b/mm/mmu_notifier.c
@@ -22,7 +22,7 @@
/* global SRCU for all MMs */
DEFINE_STATIC_SRCU(srcu);

-#ifdef CONFIG_LOCKDEP
+#ifdef CONFIG_LOCK_INFO
struct lockdep_map __mmu_notifier_invalidate_range_start_map = {
.name = "mmu_notifier_invalidate_range_start"
};
diff --git a/net/core/dev.c b/net/core/dev.c
index 1baab07820f6..7548a6c606ca 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -406,7 +406,7 @@ static RAW_NOTIFIER_HEAD(netdev_chain);
DEFINE_PER_CPU_ALIGNED(struct softnet_data, softnet_data);
EXPORT_PER_CPU_SYMBOL(softnet_data);

-#ifdef CONFIG_LOCKDEP
+#ifdef CONFIG_LOCK_INFO
/*
* register_netdevice() inits txq->_xmit_lock and sets lockdep class
* according to dev->type
diff --git a/net/sunrpc/svcsock.c b/net/sunrpc/svcsock.c
index 478f857cdaed..14d87c2d0df1 100644
--- a/net/sunrpc/svcsock.c
+++ b/net/sunrpc/svcsock.c
@@ -74,7 +74,7 @@ static void svc_sock_free(struct svc_xprt *);
static struct svc_xprt *svc_create_socket(struct svc_serv *, int,
struct net *, struct sockaddr *,
int, int);
-#ifdef CONFIG_DEBUG_LOCK_ALLOC
+#ifdef CONFIG_LOCK_INFO
static struct lock_class_key svc_key[2];
static struct lock_class_key svc_slock_key[2];

diff --git a/net/sunrpc/xprtsock.c b/net/sunrpc/xprtsock.c
index d8ee06a9650a..cd66a6608ba2 100644
--- a/net/sunrpc/xprtsock.c
+++ b/net/sunrpc/xprtsock.c
@@ -1719,7 +1719,7 @@ static void xs_local_set_port(struct rpc_xprt *xprt, unsigned short port)
{
}

-#ifdef CONFIG_DEBUG_LOCK_ALLOC
+#ifdef CONFIG_LOCK_INFO
static struct lock_class_key xs_key[3];
static struct lock_class_key xs_slock_key[3];

--
2.35.0.263.gb82422642f-goog


2022-02-09 11:45:05

by Namhyung Kim

[permalink] [raw]
Subject: [PATCH 12/12] locking: Move lock_acquired() from the fast path

The lock_acquired() function is used by CONFIG_LOCK_STAT to track wait
time for contended locks. So it's meaningful only if the given lock
is in the slow path (contended). Let's move the call into the if
block so that we can skip it in the fast path. This also moves the
tracepoint so that it is called only after lock_contended().

It might rarely affect the bounce_acquired stats (if the lock is acquired
on a different cpu than the one that called lock_acquire()), but I'm not
sure that is even possible in uncontended cases. Otherwise, this should
cause no functional change for LOCKDEP and LOCK_STAT.

Userspace tools that use the tracepoint might see the difference, but
I think most of them can handle the missing lock_acquired() in the
non-contended case properly, as that is already what happens when a
trylock function grabs a lock. At least it seems ok for the perf
tools (the 'perf lock' command specifically).

A similar change is made in __mutex_lock_common() so that it calls
lock_acquired() only after lock_contended().

Signed-off-by: Namhyung Kim <[email protected]>
---
Documentation/locking/lockstat.rst | 4 ++--
include/linux/lockdep.h | 12 ++++++------
kernel/locking/mutex.c | 4 +---
3 files changed, 9 insertions(+), 11 deletions(-)

diff --git a/Documentation/locking/lockstat.rst b/Documentation/locking/lockstat.rst
index 536eab8dbd99..3638ad1113c2 100644
--- a/Documentation/locking/lockstat.rst
+++ b/Documentation/locking/lockstat.rst
@@ -28,11 +28,11 @@ The graph below shows the relation between the lock functions and the various
| __contended
| |
| <wait>
+ | |
+ | __acquired
| _______/
|/
|
- __acquired
- |
.
<hold>
.
diff --git a/include/linux/lockdep.h b/include/linux/lockdep.h
index 4e728d2957db..63b75ad2e17c 100644
--- a/include/linux/lockdep.h
+++ b/include/linux/lockdep.h
@@ -559,8 +559,8 @@ do { \
if (!try(_lock)) { \
lock_contended(&(_lock)->dep_map, _RET_IP_); \
lock(_lock); \
+ lock_acquired(&(_lock)->dep_map, _RET_IP_); \
} \
- lock_acquired(&(_lock)->dep_map, _RET_IP_); \
} while (0)

#define LOCK_CONTENDED_RETURN(_lock, try, lock) \
@@ -569,9 +569,9 @@ do { \
if (!try(_lock)) { \
lock_contended(&(_lock)->dep_map, _RET_IP_); \
____err = lock(_lock); \
+ if (!____err) \
+ lock_acquired(&(_lock)->dep_map, _RET_IP_); \
} \
- if (!____err) \
- lock_acquired(&(_lock)->dep_map, _RET_IP_); \
____err; \
})

@@ -600,8 +600,8 @@ do { \
if (!try(_lock)) { \
lock_contended(&(_lock)->dep_map, _RET_IP_); \
lock(_lock); \
+ lock_acquired(&(_lock)->dep_map, _RET_IP_); \
} \
- lock_acquired(&(_lock)->dep_map, _RET_IP_); \
} while (0)

#define LOCK_CONTENDED_RETURN(_lock, try, lock) \
@@ -610,9 +610,9 @@ do { \
if (!try(_lock)) { \
lock_contended(&(_lock)->dep_map, _RET_IP_); \
____err = lock(_lock); \
+ if (!____err) \
+ lock_acquired(&(_lock)->dep_map, _RET_IP_); \
} \
- if (!____err) \
- lock_acquired(&(_lock)->dep_map, _RET_IP_); \
____err; \
})

diff --git a/kernel/locking/mutex.c b/kernel/locking/mutex.c
index f8bc4ae312a0..e67b5a16440b 100644
--- a/kernel/locking/mutex.c
+++ b/kernel/locking/mutex.c
@@ -605,8 +605,6 @@ __mutex_lock_common(struct mutex *lock, unsigned int state, unsigned int subclas

if (__mutex_trylock(lock) ||
mutex_optimistic_spin(lock, ww_ctx, NULL)) {
- /* got the lock, yay! */
- lock_acquired(&lock->dep_map, ip);
if (ww_ctx)
ww_mutex_set_context_fastpath(ww, ww_ctx);
preempt_enable();
@@ -708,10 +706,10 @@ __mutex_lock_common(struct mutex *lock, unsigned int state, unsigned int subclas

debug_mutex_free_waiter(&waiter);

-skip_wait:
/* got the lock - cleanup and rejoice! */
lock_acquired(&lock->dep_map, ip);

+skip_wait:
if (ww_ctx)
ww_mutex_lock_acquired(ww, ww_ctx);

--
2.35.0.263.gb82422642f-goog


2022-02-09 13:55:15

by Peter Zijlstra

[permalink] [raw]
Subject: Re: [RFC 00/12] locking: Separate lock tracepoints from lockdep/lock_stat (v1)

On Tue, Feb 08, 2022 at 10:41:56AM -0800, Namhyung Kim wrote:

> Eventually I'm mostly interested in the contended locks only and I
> want to reduce the overhead in the fast path. By moving that, it'd be
> easy to track contended locks with timing by using two tracepoints.

So why not put in two new tracepoints and call it a day?

Why muck about with all that lockdep stuff just to preserve the name
(and in the process continue to blow up data structures etc..). This
leaves distros in a bind, will they enable this config and provide
tracepoints while bloating the data structures and destroying things
like lockref (which relies on sizeof(spinlock_t)), or not provide this
at all.

Yes, the name is convenient, but it's just not worth it IMO. It makes
the whole proposition too much of a trade-off.

Would it not be possible to reconstruct enough useful information from
the lock callsite?

2022-02-09 18:57:44

by Waiman Long

[permalink] [raw]
Subject: Re: [RFC 00/12] locking: Separate lock tracepoints from lockdep/lock_stat (v1)

On 2/9/22 04:09, Peter Zijlstra wrote:
> On Tue, Feb 08, 2022 at 10:41:56AM -0800, Namhyung Kim wrote:
>
>> Eventually I'm mostly interested in the contended locks only and I
>> want to reduce the overhead in the fast path. By moving that, it'd be
>> easy to track contended locks with timing by using two tracepoints.
> So why not put in two new tracepoints and call it a day?
>
> Why muck about with all that lockdep stuff just to preserve the name
> (and in the process continue to blow up data structures etc..). This
> leaves distros in a bind, will they enable this config and provide
> tracepoints while bloating the data structures and destroying things
> like lockref (which relies on sizeof(spinlock_t)), or not provide this
> at all.
>
> Yes, the name is convenient, but it's just not worth it IMO. It makes
> the whole proposition too much of a trade-off.
>
> Would it not be possible to reconstruct enough useful information from
> the lock callsite?
>
I second that as I don't want to see the size of a spinlock exceed 4
bytes in a production system.

Instead of storing additional information (e.g. the lock name) directly in
the lock itself, maybe we can store it elsewhere and use the lock
address as the key to locate it in a hash table. We can certainly extend
the various lock init functions to do that. It will be trickier for
statically initialized locks, but we can probably find a way to do that too.
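
For illustration, a minimal sketch of such a table using the kernel's
<linux/hashtable.h> API. The lock_info structure and helper names below are
made up for this example, and a real implementation would need RCU or
similar so that lookups can run from tracing context:

#include <linux/hashtable.h>
#include <linux/slab.h>
#include <linux/spinlock.h>

struct lock_info {
        struct hlist_node hnode;
        void *lock_addr;
        const char *name;
};

static DEFINE_HASHTABLE(lock_info_hash, 10);
static DEFINE_SPINLOCK(lock_info_lock);

/* called from the various lock init functions with the name they get today */
static void lock_info_register(void *lock, const char *name)
{
        struct lock_info *li = kzalloc(sizeof(*li), GFP_KERNEL);

        if (!li)
                return;
        li->lock_addr = lock;
        li->name = name;
        spin_lock(&lock_info_lock);
        hash_add(lock_info_hash, &li->hnode, (unsigned long)lock);
        spin_unlock(&lock_info_lock);
}

/* look up the name for a lock address, e.g. from a tracer */
static const char *lock_info_name(void *lock)
{
        struct lock_info *li;

        hash_for_each_possible(lock_info_hash, li, hnode, (unsigned long)lock) {
                if (li->lock_addr == lock)
                        return li->name;
        }
        return NULL;
}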

Cheers,
Longman



2022-02-09 19:55:02

by Mathieu Desnoyers

[permalink] [raw]
Subject: Re: [RFC 00/12] locking: Separate lock tracepoints from lockdep/lock_stat (v1)

----- On Feb 9, 2022, at 1:19 PM, Waiman Long [email protected] wrote:

> On 2/9/22 04:09, Peter Zijlstra wrote:
>> On Tue, Feb 08, 2022 at 10:41:56AM -0800, Namhyung Kim wrote:
>>
>>> Eventually I'm mostly interested in the contended locks only and I
>>> want to reduce the overhead in the fast path. By moving that, it'd be
>>> easy to track contended locks with timing by using two tracepoints.
>> So why not put in two new tracepoints and call it a day?
>>
>> Why muck about with all that lockdep stuff just to preserve the name
>> (and in the process continue to blow up data structures etc..). This
>> leaves distros in a bind, will they enable this config and provide
>> tracepoints while bloating the data structures and destroying things
>> like lockref (which relies on sizeof(spinlock_t)), or not provide this
>> at all.
>>
>> Yes, the name is convenient, but it's just not worth it IMO. It makes
>> the whole proposition too much of a trade-off.
>>
>> Would it not be possible to reconstruct enough useful information from
>> the lock callsite?
>>
> I second that as I don't want to see the size of a spinlock exceeds 4
> bytes in a production system.
>
> Instead of storing additional information (e.g. lock name) directly into
> the lock itself. Maybe we can store it elsewhere and use the lock
> address as the key to locate it in a hash table. We can certainly extend
> the various lock init functions to do that. It will be trickier for
> statically initialized locks, but we can probably find a way to do that too.

If we go down that route, it would be nice if we can support a few different
use-cases for various tracers out there.

One use-case (a) requires the ability to query the lock name using its address as the key.
For this a hash table is a good fit. This would allow tracers like ftrace to
output lock names in their human-readable output, which is formatted within the kernel.

Another use-case (b) is to be able to "dump" the lock { name, address } tuples
into the trace stream (we call this statedump events in lttng), and do the
translation from address to name at post-processing. This simply requires
that this information is available for iteration for both the core kernel
and module locks, so the tracer can dump this information on trace start
and module load.

Use-case (b) is very similar to what is done for the kernel tracepoints. Based
on this, implementing the init code that iterates on those sections and populates
a hash table for use-case (a) should be easy enough.
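
As a rough sketch of use-case (b), assuming a registry like the hash table
discussed above (trace_lock_info() is a hypothetical event, not an existing
one), a tracer could emit one { name, address } event per known lock when
tracing starts:

static void lock_info_statedump(void)
{
        struct lock_info *li;
        int bkt;

        hash_for_each(lock_info_hash, bkt, li, hnode)
                trace_lock_info(li->lock_addr, li->name);
}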

Thanks,

Mathieu

--
Mathieu Desnoyers
EfficiOS Inc.
http://www.efficios.com

2022-02-09 20:44:55

by Mathieu Desnoyers

[permalink] [raw]
Subject: Re: [RFC 00/12] locking: Separate lock tracepoints from lockdep/lock_stat (v1)

----- On Feb 9, 2022, at 2:22 PM, Namhyung Kim [email protected] wrote:

> Hello,
>
> On Wed, Feb 9, 2022 at 11:02 AM Waiman Long <[email protected]> wrote:
>>
>> On 2/9/22 13:29, Mathieu Desnoyers wrote:
>> > ----- On Feb 9, 2022, at 1:19 PM, Waiman Long [email protected] wrote:
>> >
>> >> On 2/9/22 04:09, Peter Zijlstra wrote:
>> >>> On Tue, Feb 08, 2022 at 10:41:56AM -0800, Namhyung Kim wrote:
>> >>>
>> >>>> Eventually I'm mostly interested in the contended locks only and I
>> >>>> want to reduce the overhead in the fast path. By moving that, it'd be
>> >>>> easy to track contended locks with timing by using two tracepoints.
>> >>> So why not put in two new tracepoints and call it a day?
>> >>>
>> >>> Why muck about with all that lockdep stuff just to preserve the name
>> >>> (and in the process continue to blow up data structures etc..). This
>> >>> leaves distros in a bind, will they enable this config and provide
>> >>> tracepoints while bloating the data structures and destroying things
>> >>> like lockref (which relies on sizeof(spinlock_t)), or not provide this
>> >>> at all.
>> >>>
>> >>> Yes, the name is convenient, but it's just not worth it IMO. It makes
>> >>> the whole proposition too much of a trade-off.
>> >>>
>> >>> Would it not be possible to reconstruct enough useful information from
>> >>> the lock callsite?
>> >>>
>> >> I second that as I don't want to see the size of a spinlock exceeds 4
>> >> bytes in a production system.
>> >>
>> >> Instead of storing additional information (e.g. lock name) directly into
>> >> the lock itself. Maybe we can store it elsewhere and use the lock
>> >> address as the key to locate it in a hash table. We can certainly extend
>> >> the various lock init functions to do that. It will be trickier for
>> >> statically initialized locks, but we can probably find a way to do that too.
>> > If we go down that route, it would be nice if we can support a few different
>> > use-cases for various tracers out there.
>> >
>> > One use-case (a) requires the ability to query the lock name based on its
>> > address as key.
>> > For this a hash table is a good fit. This would allow tracers like ftrace to
>> > output lock names in its human-readable output which is formatted within the
>> > kernel.
>> >
>> > Another use-case (b) is to be able to "dump" the lock { name, address } tuples
>> > into the trace stream (we call this statedump events in lttng), and do the
>> > translation from address to name at post-processing. This simply requires
>> > that this information is available for iteration for both the core kernel
>> > and module locks, so the tracer can dump this information on trace start
>> > and module load.
>> >
>> > Use-case (b) is very similar to what is done for the kernel tracepoints. Based
>> > on this, implementing the init code that iterates on those sections and
>> > populates
>> > a hash table for use-case (a) should be easy enough.
>>
>> Yes, that are good use cases for this type of functionality. I do need
>> to think about how to do it for statically initialized lock first.
>
> Thank you all for the review and good suggestions.
>
> I'm also concerning dynamic allocated locks in a data structure.
> If we keep the info in a hash table, we should delete it when the
> lock is gone. I'm not sure we have a good place to hook it up all.

I was wondering about this use case as well. Can we make it mandatory to
declare the lock "class" (including the name) statically, even though the
lock per se is allocated dynamically? Then the initialization of the lock
embedded within the data structure would simply refer to the lock class
definition.

But perhaps I am missing something here.
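
Something along these lines, perhaps, where DEFINE_LOCK_CLASS_INFO() and
lock_info_register() are made-up names for the sake of the sketch:

struct lock_class_info {
        const char *name;
};

#define DEFINE_LOCK_CLASS_INFO(cls) \
        static struct lock_class_info cls = { .name = #cls }

DEFINE_LOCK_CLASS_INFO(foo_lock_class);

struct foo {
        spinlock_t lock;
        /* ... */
};

static void foo_init(struct foo *f)
{
        spin_lock_init(&f->lock);
        /* associate the dynamically placed lock with its static class */
        lock_info_register(&f->lock, foo_lock_class.name);
}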

Thanks,

Mathieu


--
Mathieu Desnoyers
EfficiOS Inc.
http://www.efficios.com

2022-02-09 20:45:40

by Namhyung Kim

[permalink] [raw]
Subject: Re: [RFC 00/12] locking: Separate lock tracepoints from lockdep/lock_stat (v1)

Hello,

On Wed, Feb 9, 2022 at 11:02 AM Waiman Long <[email protected]> wrote:
>
> On 2/9/22 13:29, Mathieu Desnoyers wrote:
> > ----- On Feb 9, 2022, at 1:19 PM, Waiman Long [email protected] wrote:
> >
> >> On 2/9/22 04:09, Peter Zijlstra wrote:
> >>> On Tue, Feb 08, 2022 at 10:41:56AM -0800, Namhyung Kim wrote:
> >>>
> >>>> Eventually I'm mostly interested in the contended locks only and I
> >>>> want to reduce the overhead in the fast path. By moving that, it'd be
> >>>> easy to track contended locks with timing by using two tracepoints.
> >>> So why not put in two new tracepoints and call it a day?
> >>>
> >>> Why muck about with all that lockdep stuff just to preserve the name
> >>> (and in the process continue to blow up data structures etc..). This
> >>> leaves distros in a bind, will they enable this config and provide
> >>> tracepoints while bloating the data structures and destroying things
> >>> like lockref (which relies on sizeof(spinlock_t)), or not provide this
> >>> at all.
> >>>
> >>> Yes, the name is convenient, but it's just not worth it IMO. It makes
> >>> the whole proposition too much of a trade-off.
> >>>
> >>> Would it not be possible to reconstruct enough useful information from
> >>> the lock callsite?
> >>>
> >> I second that as I don't want to see the size of a spinlock exceeds 4
> >> bytes in a production system.
> >>
> >> Instead of storing additional information (e.g. lock name) directly into
> >> the lock itself. Maybe we can store it elsewhere and use the lock
> >> address as the key to locate it in a hash table. We can certainly extend
> >> the various lock init functions to do that. It will be trickier for
> >> statically initialized locks, but we can probably find a way to do that too.
> > If we go down that route, it would be nice if we can support a few different
> > use-cases for various tracers out there.
> >
> > One use-case (a) requires the ability to query the lock name based on its address as key.
> > For this a hash table is a good fit. This would allow tracers like ftrace to
> > output lock names in its human-readable output which is formatted within the kernel.
> >
> > Another use-case (b) is to be able to "dump" the lock { name, address } tuples
> > into the trace stream (we call this statedump events in lttng), and do the
> > translation from address to name at post-processing. This simply requires
> > that this information is available for iteration for both the core kernel
> > and module locks, so the tracer can dump this information on trace start
> > and module load.
> >
> > Use-case (b) is very similar to what is done for the kernel tracepoints. Based
> > on this, implementing the init code that iterates on those sections and populates
> > a hash table for use-case (a) should be easy enough.
>
> Yes, that are good use cases for this type of functionality. I do need
> to think about how to do it for statically initialized lock first.

Thank you all for the review and good suggestions.

I'm also concerned about dynamically allocated locks in a data structure.
If we keep the info in a hash table, we should delete it when the
lock is gone. I'm not sure we have a good place to hook that up everywhere.

Thanks,
Namhyung

2022-02-09 20:46:03

by Mathieu Desnoyers

[permalink] [raw]
Subject: Re: [RFC 00/12] locking: Separate lock tracepoints from lockdep/lock_stat (v1)

----- On Feb 9, 2022, at 2:02 PM, Waiman Long [email protected] wrote:

> On 2/9/22 13:29, Mathieu Desnoyers wrote:
>> ----- On Feb 9, 2022, at 1:19 PM, Waiman Long [email protected] wrote:
>>
>>> On 2/9/22 04:09, Peter Zijlstra wrote:
>>>> On Tue, Feb 08, 2022 at 10:41:56AM -0800, Namhyung Kim wrote:
>>>>
>>>>> Eventually I'm mostly interested in the contended locks only and I
>>>>> want to reduce the overhead in the fast path. By moving that, it'd be
>>>>> easy to track contended locks with timing by using two tracepoints.
>>>> So why not put in two new tracepoints and call it a day?
>>>>
>>>> Why muck about with all that lockdep stuff just to preserve the name
>>>> (and in the process continue to blow up data structures etc..). This
>>>> leaves distros in a bind, will they enable this config and provide
>>>> tracepoints while bloating the data structures and destroying things
>>>> like lockref (which relies on sizeof(spinlock_t)), or not provide this
>>>> at all.
>>>>
>>>> Yes, the name is convenient, but it's just not worth it IMO. It makes
>>>> the whole proposition too much of a trade-off.
>>>>
>>>> Would it not be possible to reconstruct enough useful information from
>>>> the lock callsite?
>>>>
>>> I second that as I don't want to see the size of a spinlock exceeds 4
>>> bytes in a production system.
>>>
>>> Instead of storing additional information (e.g. lock name) directly into
>>> the lock itself. Maybe we can store it elsewhere and use the lock
>>> address as the key to locate it in a hash table. We can certainly extend
>>> the various lock init functions to do that. It will be trickier for
>>> statically initialized locks, but we can probably find a way to do that too.
>> If we go down that route, it would be nice if we can support a few different
>> use-cases for various tracers out there.
>>
>> One use-case (a) requires the ability to query the lock name based on its
>> address as key.
>> For this a hash table is a good fit. This would allow tracers like ftrace to
>> output lock names in its human-readable output which is formatted within the
>> kernel.
>>
>> Another use-case (b) is to be able to "dump" the lock { name, address } tuples
>> into the trace stream (we call this statedump events in lttng), and do the
>> translation from address to name at post-processing. This simply requires
>> that this information is available for iteration for both the core kernel
>> and module locks, so the tracer can dump this information on trace start
>> and module load.
>>
>> Use-case (b) is very similar to what is done for the kernel tracepoints. Based
>> on this, implementing the init code that iterates on those sections and
>> populates
>> a hash table for use-case (a) should be easy enough.
>
> Yes, that are good use cases for this type of functionality. I do need
> to think about how to do it for statically initialized lock first.

Tracepoints already solved that problem.

Look at the macro DEFINE_TRACE_FN() in include/linux/tracepoint.h. You will notice that
it statically defines a struct tracepoint in a separate section and a tracepoint_ptr_t
in a __tracepoints_ptrs section.

Then the other parts of the picture are in kernel/tracepoint.c:

extern tracepoint_ptr_t __start___tracepoints_ptrs[];
extern tracepoint_ptr_t __stop___tracepoints_ptrs[];

and kernel/module.c:find_module_sections()

#ifdef CONFIG_TRACEPOINTS
mod->tracepoints_ptrs = section_objs(info, "__tracepoints_ptrs",
sizeof(*mod->tracepoints_ptrs),
&mod->num_tracepoints);
#endif

and the iteration code over kernel and modules in kernel/tracepoint.c.

All you need in addition is in include/asm-generic/vmlinux.lds.h, where we add
to the DATA_DATA define an entry such as:

STRUCT_ALIGN(); \
*(__tracepoints) \

and in RO_DATA:

. = ALIGN(8); \
__start___tracepoints_ptrs = .; \
KEEP(*(__tracepoints_ptrs)) /* Tracepoints: pointer array */ \
__stop___tracepoints_ptrs = .;

AFAIU, if you do something similar for a structure that contains your relevant
lock information, it should be straightforward to handle statically initialized
locks.
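
For instance, something like the following could emit one entry per
statically initialized lock into a dedicated section (the structure, macro
and section names are invented for this sketch), with a matching
__start/__stop symbol pair added to vmlinux.lds.h and the module loader
taught about the new section as described above:

struct lock_info_entry {
        void *lock;
        const char *name;
};

#define DEFINE_SPINLOCK_NAMED(x) \
        DEFINE_SPINLOCK(x); \
        static struct lock_info_entry __lock_info_##x \
        __used __section("__lock_info") = { \
                .lock = &x, \
                .name = #x, \
        }

extern struct lock_info_entry __start___lock_info[];
extern struct lock_info_entry __stop___lock_info[];

static int __init lock_info_init(void)
{
        struct lock_info_entry *e;

        /* populate the address -> name lookup table at boot */
        for (e = __start___lock_info; e < __stop___lock_info; e++)
                lock_info_register(e->lock, e->name);
        return 0;
}
early_initcall(lock_info_init);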

Thanks,

Mathieu


>
> Thanks,
> Longman

--
Mathieu Desnoyers
EfficiOS Inc.
http://www.efficios.com

2022-02-09 20:46:43

by Namhyung Kim

[permalink] [raw]
Subject: Re: [RFC 00/12] locking: Separate lock tracepoints from lockdep/lock_stat (v1)

On Wed, Feb 9, 2022 at 11:28 AM Mathieu Desnoyers
<[email protected]> wrote:
>
> ----- On Feb 9, 2022, at 2:22 PM, Namhyung Kim [email protected] wrote:
> > I'm also concerning dynamic allocated locks in a data structure.
> > If we keep the info in a hash table, we should delete it when the
> > lock is gone. I'm not sure we have a good place to hook it up all.
>
> I was wondering about this use case as well. Can we make it mandatory to
> declare the lock "class" (including the name) statically, even though the
> lock per-se is allocated dynamically ? Then the initialization of the lock
> embedded within the data structure would simply refer to the lock class
> definition.

Isn't it still the same with static lock classes, in that the entry needs
to be deleted from the hash table when the data structure is freed?
I'm more concerned about free than alloc, as there seems to be no
API to track that in one place.

Thanks,
Namhyung

2022-02-09 20:46:44

by Waiman Long

[permalink] [raw]
Subject: Re: [RFC 00/12] locking: Separate lock tracepoints from lockdep/lock_stat (v1)

On 2/9/22 14:17, Mathieu Desnoyers wrote:
> ----- On Feb 9, 2022, at 2:02 PM, Waiman Long [email protected] wrote:
>
>> On 2/9/22 13:29, Mathieu Desnoyers wrote:
>>> ----- On Feb 9, 2022, at 1:19 PM, Waiman Long [email protected] wrote:
>>>
>>>> On 2/9/22 04:09, Peter Zijlstra wrote:
>>>>> On Tue, Feb 08, 2022 at 10:41:56AM -0800, Namhyung Kim wrote:
>>>>>
>>>>>> Eventually I'm mostly interested in the contended locks only and I
>>>>>> want to reduce the overhead in the fast path. By moving that, it'd be
>>>>>> easy to track contended locks with timing by using two tracepoints.
>>>>> So why not put in two new tracepoints and call it a day?
>>>>>
>>>>> Why muck about with all that lockdep stuff just to preserve the name
>>>>> (and in the process continue to blow up data structures etc..). This
>>>>> leaves distros in a bind, will they enable this config and provide
>>>>> tracepoints while bloating the data structures and destroying things
>>>>> like lockref (which relies on sizeof(spinlock_t)), or not provide this
>>>>> at all.
>>>>>
>>>>> Yes, the name is convenient, but it's just not worth it IMO. It makes
>>>>> the whole proposition too much of a trade-off.
>>>>>
>>>>> Would it not be possible to reconstruct enough useful information from
>>>>> the lock callsite?
>>>>>
>>>> I second that as I don't want to see the size of a spinlock exceeds 4
>>>> bytes in a production system.
>>>>
>>>> Instead of storing additional information (e.g. lock name) directly into
>>>> the lock itself. Maybe we can store it elsewhere and use the lock
>>>> address as the key to locate it in a hash table. We can certainly extend
>>>> the various lock init functions to do that. It will be trickier for
>>>> statically initialized locks, but we can probably find a way to do that too.
>>> If we go down that route, it would be nice if we can support a few different
>>> use-cases for various tracers out there.
>>>
>>> One use-case (a) requires the ability to query the lock name based on its
>>> address as key.
>>> For this a hash table is a good fit. This would allow tracers like ftrace to
>>> output lock names in its human-readable output which is formatted within the
>>> kernel.
>>>
>>> Another use-case (b) is to be able to "dump" the lock { name, address } tuples
>>> into the trace stream (we call this statedump events in lttng), and do the
>>> translation from address to name at post-processing. This simply requires
>>> that this information is available for iteration for both the core kernel
>>> and module locks, so the tracer can dump this information on trace start
>>> and module load.
>>>
>>> Use-case (b) is very similar to what is done for the kernel tracepoints. Based
>>> on this, implementing the init code that iterates on those sections and
>>> populates
>>> a hash table for use-case (a) should be easy enough.
>> Yes, that are good use cases for this type of functionality. I do need
>> to think about how to do it for statically initialized lock first.
> Tracepoints already solved that problem.
>
> Look at the macro DEFINE_TRACE_FN() in include/linux/tracepoint.h. You will notice that
> it statically defines a struct tracepoint in a separate section and a tracepoint_ptr_t
> in a __tracepoints_ptrs section.
>
> Then the other parts of the picture are in kernel/tracepoint.c:
>
> extern tracepoint_ptr_t __start___tracepoints_ptrs[];
> extern tracepoint_ptr_t __stop___tracepoints_ptrs[];
>
> and kernel/module.c:find_module_sections()
>
> #ifdef CONFIG_TRACEPOINTS
> mod->tracepoints_ptrs = section_objs(info, "__tracepoints_ptrs",
> sizeof(*mod->tracepoints_ptrs),
> &mod->num_tracepoints);
> #endif
>
> and the iteration code over kernel and modules in kernel/tracepoint.c.
>
> All you need in addition is in include/asm-generic/vmlinux.lds.h, we add
> to the DATA_DATA define an entry such as:
>
> STRUCT_ALIGN(); \
> *(__tracepoints) \
>
> and in RO_DATA:
>
> . = ALIGN(8); \
> __start___tracepoints_ptrs = .; \
> KEEP(*(__tracepoints_ptrs)) /* Tracepoints: pointer array */ \
> __stop___tracepoints_ptrs = .;
>
> AFAIU, if you do something similar for a structure that contains your relevant
> lock information, it should be straightforward to handle statically initialized
> locks.
>
> Thanks,
>
> Mathieu

Thanks for the suggestion.

Cheers,
Longman


2022-02-09 20:48:06

by Mathieu Desnoyers

[permalink] [raw]
Subject: Re: [RFC 00/12] locking: Separate lock tracepoints from lockdep/lock_stat (v1)

----- On Feb 9, 2022, at 2:45 PM, Namhyung Kim [email protected] wrote:

> On Wed, Feb 9, 2022 at 11:28 AM Mathieu Desnoyers
> <[email protected]> wrote:
>>
>> ----- On Feb 9, 2022, at 2:22 PM, Namhyung Kim [email protected] wrote:
>> > I'm also concerning dynamic allocated locks in a data structure.
>> > If we keep the info in a hash table, we should delete it when the
>> > lock is gone. I'm not sure we have a good place to hook it up all.
>>
>> I was wondering about this use case as well. Can we make it mandatory to
>> declare the lock "class" (including the name) statically, even though the
>> lock per-se is allocated dynamically ? Then the initialization of the lock
>> embedded within the data structure would simply refer to the lock class
>> definition.
>
> Isn't it still the same if we have static lock classes that the entry needs
> to be deleted from the hash table when it frees the data structure?
> I'm more concerned about free than alloc as there seems to be no
> API to track that in a place.

If the lock class is defined statically, even for dynamically initialized
locks, then its associated object sits either in the core kernel or within
a module.

So if it's in the core kernel, it could be added to the hash table at kernel
init and stay there forever.

If it's in a module, it would be added to the hash table on module load and
removed on module unload. We would have to be careful about how the ftrace
printout to human-readable text deals with missing hash table data, because
that printout happens after buffering, so an untimely module unload could
make this address-to-name mapping unavailable for a few events. If we care
about getting this corner case right, there are a few things we could do as
well.
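
The module side could hang off a module notifier, roughly like this, where
the lock_info_* helpers are the hypothetical ones discussed above and the
notifier block would be registered with register_module_notifier() at init
time:

static int lock_info_module_notify(struct notifier_block *nb,
                                   unsigned long action, void *data)
{
        struct module *mod = data;

        switch (action) {
        case MODULE_STATE_COMING:
                /* add the module's statically defined lock entries */
                lock_info_add_module(mod);
                break;
        case MODULE_STATE_GOING:
                /* drop them before the module text/data goes away */
                lock_info_remove_module(mod);
                break;
        }
        return NOTIFY_OK;
}

static struct notifier_block lock_info_module_nb = {
        .notifier_call = lock_info_module_notify,
};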

Thanks,

Mathieu

--
Mathieu Desnoyers
EfficiOS Inc.
http://www.efficios.com

2022-02-09 20:51:08

by Waiman Long

[permalink] [raw]
Subject: Re: [RFC 00/12] locking: Separate lock tracepoints from lockdep/lock_stat (v1)


On 2/9/22 14:45, Namhyung Kim wrote:
> On Wed, Feb 9, 2022 at 11:28 AM Mathieu Desnoyers
> <[email protected]> wrote:
>> ----- On Feb 9, 2022, at 2:22 PM, Namhyung Kim [email protected] wrote:
>>> I'm also concerning dynamic allocated locks in a data structure.
>>> If we keep the info in a hash table, we should delete it when the
>>> lock is gone. I'm not sure we have a good place to hook it up all.
>> I was wondering about this use case as well. Can we make it mandatory to
>> declare the lock "class" (including the name) statically, even though the
>> lock per-se is allocated dynamically ? Then the initialization of the lock
>> embedded within the data structure would simply refer to the lock class
>> definition.
> Isn't it still the same if we have static lock classes that the entry needs
> to be deleted from the hash table when it frees the data structure?
> I'm more concerned about free than alloc as there seems to be no
> API to track that in a place.

We may have to invent some new APIs to do that. For example,
spin_lock_exit() can be the counterpart of spin_lock_init() and so on.
Of course, existing kernel code would have to be modified to designate the
point after which a lock is no longer used or is freed.
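
For example, a sketch of what that could look like; spin_lock_exit() and
lock_info_unregister() do not exist today, they are just illustrative names:

/* counterpart of spin_lock_init(), called right before the object
 * containing the lock is freed */
static inline void spin_lock_exit(spinlock_t *lock)
{
        lock_info_unregister(lock);     /* drop the address -> name entry */
}

static void foo_destroy(struct foo *f)
{
        spin_lock_exit(&f->lock);
        kfree(f);
}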

Cheers,
Longman


2022-02-09 23:18:38

by Waiman Long

[permalink] [raw]
Subject: Re: [RFC 00/12] locking: Separate lock tracepoints from lockdep/lock_stat (v1)

On 2/9/22 13:29, Mathieu Desnoyers wrote:
> ----- On Feb 9, 2022, at 1:19 PM, Waiman Long [email protected] wrote:
>
>> On 2/9/22 04:09, Peter Zijlstra wrote:
>>> On Tue, Feb 08, 2022 at 10:41:56AM -0800, Namhyung Kim wrote:
>>>
>>>> Eventually I'm mostly interested in the contended locks only and I
>>>> want to reduce the overhead in the fast path. By moving that, it'd be
>>>> easy to track contended locks with timing by using two tracepoints.
>>> So why not put in two new tracepoints and call it a day?
>>>
>>> Why muck about with all that lockdep stuff just to preserve the name
>>> (and in the process continue to blow up data structures etc..). This
>>> leaves distros in a bind, will they enable this config and provide
>>> tracepoints while bloating the data structures and destroying things
>>> like lockref (which relies on sizeof(spinlock_t)), or not provide this
>>> at all.
>>>
>>> Yes, the name is convenient, but it's just not worth it IMO. It makes
>>> the whole proposition too much of a trade-off.
>>>
>>> Would it not be possible to reconstruct enough useful information from
>>> the lock callsite?
>>>
>> I second that as I don't want to see the size of a spinlock exceeds 4
>> bytes in a production system.
>>
>> Instead of storing additional information (e.g. lock name) directly into
>> the lock itself. Maybe we can store it elsewhere and use the lock
>> address as the key to locate it in a hash table. We can certainly extend
>> the various lock init functions to do that. It will be trickier for
>> statically initialized locks, but we can probably find a way to do that too.
> If we go down that route, it would be nice if we can support a few different
> use-cases for various tracers out there.
>
> One use-case (a) requires the ability to query the lock name based on its address as key.
> For this a hash table is a good fit. This would allow tracers like ftrace to
> output lock names in its human-readable output which is formatted within the kernel.
>
> Another use-case (b) is to be able to "dump" the lock { name, address } tuples
> into the trace stream (we call this statedump events in lttng), and do the
> translation from address to name at post-processing. This simply requires
> that this information is available for iteration for both the core kernel
> and module locks, so the tracer can dump this information on trace start
> and module load.
>
> Use-case (b) is very similar to what is done for the kernel tracepoints. Based
> on this, implementing the init code that iterates on those sections and populates
> a hash table for use-case (a) should be easy enough.

Yes, those are good use cases for this type of functionality. I do need
to think about how to do it for statically initialized locks first.

Thanks,
Longman


2022-02-10 02:03:02

by Namhyung Kim

[permalink] [raw]
Subject: Re: [RFC 00/12] locking: Separate lock tracepoints from lockdep/lock_stat (v1)

On Wed, Feb 9, 2022 at 12:17 PM Waiman Long <[email protected]> wrote:
>
>
> On 2/9/22 14:45, Namhyung Kim wrote:
> > On Wed, Feb 9, 2022 at 11:28 AM Mathieu Desnoyers
> > <[email protected]> wrote:
> >> ----- On Feb 9, 2022, at 2:22 PM, Namhyung Kim [email protected] wrote:
> >>> I'm also concerning dynamic allocated locks in a data structure.
> >>> If we keep the info in a hash table, we should delete it when the
> >>> lock is gone. I'm not sure we have a good place to hook it up all.
> >> I was wondering about this use case as well. Can we make it mandatory to
> >> declare the lock "class" (including the name) statically, even though the
> >> lock per-se is allocated dynamically ? Then the initialization of the lock
> >> embedded within the data structure would simply refer to the lock class
> >> definition.
> > Isn't it still the same if we have static lock classes that the entry needs
> > to be deleted from the hash table when it frees the data structure?
> > I'm more concerned about free than alloc as there seems to be no
> > API to track that in a place.
>
> We may have to invent some new APIs to do that. For example,
> spin_lock_exit() can be the counterpart of spin_lock_init() and so on.
> Of course, existing kernel code have to be modified to designate the
> point after which a lock is no longer being used or is freed.

Yeah, but I'm afraid it would be easy to miss something.
Also, it would add some runtime overhead for maintaining
the hash table even when the tracepoints are not used.

Thanks,
Namhyung

2022-02-10 02:08:44

by Namhyung Kim

[permalink] [raw]
Subject: Re: [RFC 00/12] locking: Separate lock tracepoints from lockdep/lock_stat (v1)

On Wed, Feb 9, 2022 at 1:09 AM Peter Zijlstra <[email protected]> wrote:
>
> On Tue, Feb 08, 2022 at 10:41:56AM -0800, Namhyung Kim wrote:
>
> > Eventually I'm mostly interested in the contended locks only and I
> > want to reduce the overhead in the fast path. By moving that, it'd be
> > easy to track contended locks with timing by using two tracepoints.
>
> So why not put in two new tracepoints and call it a day?
>
> Why muck about with all that lockdep stuff just to preserve the name
> (and in the process continue to blow up data structures etc..). This
> leaves distros in a bind, will they enable this config and provide
> tracepoints while bloating the data structures and destroying things
> like lockref (which relies on sizeof(spinlock_t)), or not provide this
> at all.

If it's only lockref, is it possible to change it to use arch_spinlock_t
so that it can remain in 4 bytes? It'd be really nice if we could keep
the spinlock size, but it'd be easier to carry the name with the lock for
analysis IMHO.

Thanks,
Namhyung

2022-02-10 02:38:18

by Waiman Long

[permalink] [raw]
Subject: Re: [RFC 00/12] locking: Separate lock tracepoints from lockdep/lock_stat (v1)


On 2/9/22 19:27, Namhyung Kim wrote:
> On Wed, Feb 9, 2022 at 12:17 PM Waiman Long <[email protected]> wrote:
>>
>> On 2/9/22 14:45, Namhyung Kim wrote:
>>> On Wed, Feb 9, 2022 at 11:28 AM Mathieu Desnoyers
>>> <[email protected]> wrote:
>>>> ----- On Feb 9, 2022, at 2:22 PM, Namhyung Kim [email protected] wrote:
>>>>> I'm also concerning dynamic allocated locks in a data structure.
>>>>> If we keep the info in a hash table, we should delete it when the
>>>>> lock is gone. I'm not sure we have a good place to hook it up all.
>>>> I was wondering about this use case as well. Can we make it mandatory to
>>>> declare the lock "class" (including the name) statically, even though the
>>>> lock per-se is allocated dynamically ? Then the initialization of the lock
>>>> embedded within the data structure would simply refer to the lock class
>>>> definition.
>>> Isn't it still the same if we have static lock classes that the entry needs
>>> to be deleted from the hash table when it frees the data structure?
>>> I'm more concerned about free than alloc as there seems to be no
>>> API to track that in a place.
>> We may have to invent some new APIs to do that. For example,
>> spin_lock_exit() can be the counterpart of spin_lock_init() and so on.
>> Of course, existing kernel code have to be modified to designate the
>> point after which a lock is no longer being used or is freed.
> Yeah, but I'm afraid that it could be easy to miss something.
> Also it would add some runtime overhead due to maintaining
> the hash table even if the tracepoints are not used.

Yes, there is some overhead at lock init and exit time, but it is
generally negligible if the lock is used multiple times. The hash table
will consume some additional memory if configured, but it shouldn't be much.

Cheers,
Longman


2022-02-10 14:32:25

by Peter Zijlstra

[permalink] [raw]
Subject: Re: [RFC 00/12] locking: Separate lock tracepoints from lockdep/lock_stat (v1)

On Wed, Feb 09, 2022 at 04:32:58PM -0800, Namhyung Kim wrote:
> On Wed, Feb 9, 2022 at 1:09 AM Peter Zijlstra <[email protected]> wrote:
> >
> > On Tue, Feb 08, 2022 at 10:41:56AM -0800, Namhyung Kim wrote:
> >
> > > Eventually I'm mostly interested in the contended locks only and I
> > > want to reduce the overhead in the fast path. By moving that, it'd be
> > > easy to track contended locks with timing by using two tracepoints.
> >
> > So why not put in two new tracepoints and call it a day?
> >
> > Why muck about with all that lockdep stuff just to preserve the name
> > (and in the process continue to blow up data structures etc..). This
> > leaves distros in a bind, will they enable this config and provide
> > tracepoints while bloating the data structures and destroying things
> > like lockref (which relies on sizeof(spinlock_t)), or not provide this
> > at all.
>
> If it's only lockref, is it possible to change it to use arch_spinlock_t
> so that it can remain in 4 bytes? It'd be really nice if we can keep
> spin lock size, but it'd be easier to carry the name with it for
> analysis IMHO.

It's just vile and disgusting to blow up the lock size for convenience
like this.

And no, there's more of that around. A lot of effort has been spent to
make sure spinlocks are 32 bits, and we're not going to give that up for
something as daft as this.

Just think harder on the analysis side. Like I said, I'm thinking the
caller IP should be good enough most of the time.
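
As a sketch of what that could look like on the tracepoint side (the event
name and fields here are only illustrative, not a final ABI):

/* in a trace header, e.g. include/trace/events/lock.h */
TRACE_EVENT(contention_begin,

        TP_PROTO(void *lock, unsigned long ip),

        TP_ARGS(lock, ip),

        TP_STRUCT__entry(
                __field(void *, lock)
                __field(unsigned long, ip)
        ),

        TP_fast_assign(
                __entry->lock = lock;
                __entry->ip = ip;
        ),

        TP_printk("lock=%p ip=%pS", __entry->lock, (void *)__entry->ip)
);

The slow paths would then call trace_contention_begin(lock, _RET_IP_) on
entry and a matching contention_end event once the lock is finally
acquired, so post-processing can compute the wait time and resolve the
caller IP to a symbol.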

2022-02-10 20:43:16

by Paul E. McKenney

[permalink] [raw]
Subject: Re: [RFC 00/12] locking: Separate lock tracepoints from lockdep/lock_stat (v1)

On Thu, Feb 10, 2022 at 10:13:53AM +0100, Peter Zijlstra wrote:
> On Wed, Feb 09, 2022 at 04:32:58PM -0800, Namhyung Kim wrote:
> > On Wed, Feb 9, 2022 at 1:09 AM Peter Zijlstra <[email protected]> wrote:
> > >
> > > On Tue, Feb 08, 2022 at 10:41:56AM -0800, Namhyung Kim wrote:
> > >
> > > > Eventually I'm mostly interested in the contended locks only and I
> > > > want to reduce the overhead in the fast path. By moving that, it'd be
> > > > easy to track contended locks with timing by using two tracepoints.
> > >
> > > So why not put in two new tracepoints and call it a day?
> > >
> > > Why muck about with all that lockdep stuff just to preserve the name
> > > (and in the process continue to blow up data structures etc..). This
> > > leaves distros in a bind, will they enable this config and provide
> > > tracepoints while bloating the data structures and destroying things
> > > like lockref (which relies on sizeof(spinlock_t)), or not provide this
> > > at all.
> >
> > If it's only lockref, is it possible to change it to use arch_spinlock_t
> > so that it can remain in 4 bytes? It'd be really nice if we can keep
> > spin lock size, but it'd be easier to carry the name with it for
> > analysis IMHO.
>
> It's just vile and disgusting to blow up the lock size for convenience
> like this.
>
> And no, there's more of that around. A lot of effort has been spend to
> make sure spinlocks are 32bit and we're not going to give that up for
> something as daft as this.
>
> Just think harder on the analysis side. Like said; I'm thinking the
> caller IP should be good enough most of the time.

Another option is to keep any additional storage in a separate data
structure keyed off of lock address, lockdep class, or whatever.

Whether or not this is a -good- option, well, who knows? ;-)

Thanx, Paul

2022-02-10 20:43:24

by Paul E. McKenney

[permalink] [raw]
Subject: Re: [RFC 00/12] locking: Separate lock tracepoints from lockdep/lock_stat (v1)

On Thu, Feb 10, 2022 at 02:27:11PM -0500, Waiman Long wrote:
> On 2/10/22 14:14, Paul E. McKenney wrote:
> > On Thu, Feb 10, 2022 at 10:13:53AM +0100, Peter Zijlstra wrote:
> > > On Wed, Feb 09, 2022 at 04:32:58PM -0800, Namhyung Kim wrote:
> > > > On Wed, Feb 9, 2022 at 1:09 AM Peter Zijlstra <[email protected]> wrote:
> > > > > On Tue, Feb 08, 2022 at 10:41:56AM -0800, Namhyung Kim wrote:
> > > > >
> > > > > > Eventually I'm mostly interested in the contended locks only and I
> > > > > > want to reduce the overhead in the fast path. By moving that, it'd be
> > > > > > easy to track contended locks with timing by using two tracepoints.
> > > > > So why not put in two new tracepoints and call it a day?
> > > > >
> > > > > Why muck about with all that lockdep stuff just to preserve the name
> > > > > (and in the process continue to blow up data structures etc..). This
> > > > > leaves distros in a bind, will they enable this config and provide
> > > > > tracepoints while bloating the data structures and destroying things
> > > > > like lockref (which relies on sizeof(spinlock_t)), or not provide this
> > > > > at all.
> > > > If it's only lockref, is it possible to change it to use arch_spinlock_t
> > > > so that it can remain in 4 bytes? It'd be really nice if we can keep
> > > > spin lock size, but it'd be easier to carry the name with it for
> > > > analysis IMHO.
> > > It's just vile and disgusting to blow up the lock size for convenience
> > > like this.
> > >
> > > And no, there's more of that around. A lot of effort has been spend to
> > > make sure spinlocks are 32bit and we're not going to give that up for
> > > something as daft as this.
> > >
> > > Just think harder on the analysis side. Like said; I'm thinking the
> > > caller IP should be good enough most of the time.
> >
> > Another option is to keep any additional storage in a separate data
> > structure keyed off of lock address, lockdep class, or whatever.
> >
> > Whether or not this is a -good- option, well, who knows? ;-)
>
> I have suggested that too. Unfortunately, I was replying to an email with
> your wrong email address. So you might not have received it.

Plus I was too lazy to go look at lore. ;-)

For whatever it is worth, we did something similar in DYNIX/ptx, whose
spinlocks were limited to a single byte. But it does have its drawbacks.

Thanx, Paul

2022-02-10 22:44:30

by Peter Zijlstra

[permalink] [raw]
Subject: Re: [RFC 00/12] locking: Separate lock tracepoints from lockdep/lock_stat (v1)

On Wed, Feb 09, 2022 at 03:17:38PM -0500, Waiman Long wrote:
>
> On 2/9/22 14:45, Namhyung Kim wrote:
> > On Wed, Feb 9, 2022 at 11:28 AM Mathieu Desnoyers
> > <[email protected]> wrote:
> > > ----- On Feb 9, 2022, at 2:22 PM, Namhyung Kim [email protected] wrote:
> > > > I'm also concerning dynamic allocated locks in a data structure.
> > > > If we keep the info in a hash table, we should delete it when the
> > > > lock is gone. I'm not sure we have a good place to hook it up all.
> > > I was wondering about this use case as well. Can we make it mandatory to
> > > declare the lock "class" (including the name) statically, even though the
> > > lock per-se is allocated dynamically ? Then the initialization of the lock
> > > embedded within the data structure would simply refer to the lock class
> > > definition.
> > Isn't it still the same if we have static lock classes that the entry needs
> > to be deleted from the hash table when it frees the data structure?
> > I'm more concerned about free than alloc as there seems to be no
> > API to track that in a place.
>
> We may have to invent some new APIs to do that. For example,
> spin_lock_exit() can be the counterpart of spin_lock_init() and so on. Of
> course, existing kernel code have to be modified to designate the point
> after which a lock is no longer being used or is freed.

The canonical name is _destroy(). We even have mutex_destroy(), except
its usage isn't mandatory.

The easy way out is doing what lockdep does and hooking into the memory
allocators to check every freed hunk of memory for a lock. It does
however mean your data structure of choice needs to be able to answer: do
I have an entry in @range? Which mostly disqualifies a hash table.
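
(For what it's worth, the kind of structure that can answer that range
question might look like the sketch below; the names are invented and
locking is omitted. An rbtree ordered by lock address lets a free-path
hook ask whether any registered lock falls inside the freed region.)

struct lock_info_node {
        struct rb_node node;
        unsigned long addr;     /* lock address, also the sort key */
        const char *name;
};

static struct rb_root lock_info_tree = RB_ROOT;

/* first registered lock with addr in [start, end), or NULL */
static struct lock_info_node *lock_info_first_in(unsigned long start,
                                                 unsigned long end)
{
        struct rb_node *n = lock_info_tree.rb_node;
        struct lock_info_node *best = NULL;

        while (n) {
                struct lock_info_node *li =
                        rb_entry(n, struct lock_info_node, node);

                if (li->addr >= start) {
                        best = li;
                        n = n->rb_left;
                } else {
                        n = n->rb_right;
                }
        }
        return (best && best->addr < end) ? best : NULL;
}

/* a hook in the allocator's free path could then do */
static void lock_info_check_freed(const void *mem, unsigned long size)
{
        struct lock_info_node *li;

        li = lock_info_first_in((unsigned long)mem, (unsigned long)mem + size);
        if (li)
                pr_warn("freeing memory that still contains lock '%s'\n",
                        li->name);
}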

Still, I really don't think you need any of this, it's just bloat. A
very limited stack unwind for one of the two tracepoints should allow
you to find the offending lock just fine.

2022-02-11 00:10:01

by Waiman Long

[permalink] [raw]
Subject: Re: [RFC 00/12] locking: Separate lock tracepoints from lockdep/lock_stat (v1)

On 2/10/22 14:14, Paul E. McKenney wrote:
> On Thu, Feb 10, 2022 at 10:13:53AM +0100, Peter Zijlstra wrote:
>> On Wed, Feb 09, 2022 at 04:32:58PM -0800, Namhyung Kim wrote:
>>> On Wed, Feb 9, 2022 at 1:09 AM Peter Zijlstra <[email protected]> wrote:
>>>> On Tue, Feb 08, 2022 at 10:41:56AM -0800, Namhyung Kim wrote:
>>>>
>>>>> Eventually I'm mostly interested in the contended locks only and I
>>>>> want to reduce the overhead in the fast path. By moving that, it'd be
>>>>> easy to track contended locks with timing by using two tracepoints.
>>>> So why not put in two new tracepoints and call it a day?
>>>>
>>>> Why muck about with all that lockdep stuff just to preserve the name
>>>> (and in the process continue to blow up data structures etc..). This
>>>> leaves distros in a bind, will they enable this config and provide
>>>> tracepoints while bloating the data structures and destroying things
>>>> like lockref (which relies on sizeof(spinlock_t)), or not provide this
>>>> at all.
>>> If it's only lockref, is it possible to change it to use arch_spinlock_t
>>> so that it can remain in 4 bytes? It'd be really nice if we can keep
>>> spin lock size, but it'd be easier to carry the name with it for
>>> analysis IMHO.
>> It's just vile and disgusting to blow up the lock size for convenience
>> like this.
>>
>> And no, there's more of that around. A lot of effort has been spend to
>> make sure spinlocks are 32bit and we're not going to give that up for
>> something as daft as this.
>>
>> Just think harder on the analysis side. Like said; I'm thinking the
>> caller IP should be good enough most of the time.
> Another option is to keep any additional storage in a separate data
> structure keyed off of lock address, lockdep class, or whatever.
>
> Whether or not this is a -good- option, well, who knows? ;-)

I have suggested that too. Unfortunately, I was replying to an email
using a wrong email address for you. So you might not have received it.

Cheers,
Longman


2022-02-11 09:20:05

by Namhyung Kim

[permalink] [raw]
Subject: Re: [RFC 00/12] locking: Separate lock tracepoints from lockdep/lock_stat (v1)

On Thu, Feb 10, 2022 at 1:14 AM Peter Zijlstra <[email protected]> wrote:
>
> On Wed, Feb 09, 2022 at 04:32:58PM -0800, Namhyung Kim wrote:
> > On Wed, Feb 9, 2022 at 1:09 AM Peter Zijlstra <[email protected]> wrote:
> > >
> > > On Tue, Feb 08, 2022 at 10:41:56AM -0800, Namhyung Kim wrote:
> > >
> > > > Eventually I'm mostly interested in the contended locks only and I
> > > > want to reduce the overhead in the fast path. By moving that, it'd be
> > > > easy to track contended locks with timing by using two tracepoints.
> > >
> > > So why not put in two new tracepoints and call it a day?
> > >
> > > Why muck about with all that lockdep stuff just to preserve the name
> > > (and in the process continue to blow up data structures etc..). This
> > > leaves distros in a bind, will they enable this config and provide
> > > tracepoints while bloating the data structures and destroying things
> > > like lockref (which relies on sizeof(spinlock_t)), or not provide this
> > > at all.
> >
> > If it's only lockref, is it possible to change it to use arch_spinlock_t
> > so that it can remain in 4 bytes? It'd be really nice if we can keep
> > spin lock size, but it'd be easier to carry the name with it for
> > analysis IMHO.
>
> It's just vile and disgusting to blow up the lock size for convenience
> like this.
>
> And no, there's more of that around. A lot of effort has been spend to
> make sure spinlocks are 32bit and we're not going to give that up for
> something as daft as this.
>
> Just think harder on the analysis side. Like said; I'm thinking the
> caller IP should be good enough most of the time.

Ok, I'll go in this direction then.

So you are ok with adding two new tracepoints, even if they are
similar to what we already have in lockdep/lock_stat, right?

Thanks,
Namhyung

2022-02-11 11:37:16

by Peter Zijlstra

[permalink] [raw]
Subject: Re: [RFC 00/12] locking: Separate lock tracepoints from lockdep/lock_stat (v1)

On Thu, Feb 10, 2022 at 09:55:27PM -0800, Namhyung Kim wrote:

> So you are ok with adding two new tracepoints, even if they are
> similar to what we already have in lockdep/lock_stat, right?

Yeah, I don't think adding tracepoints to the slowpaths of the various
locks should be a problem.

2022-02-11 18:16:27

by Namhyung Kim

[permalink] [raw]
Subject: Re: [RFC 00/12] locking: Separate lock tracepoints from lockdep/lock_stat (v1)

Hi Paul,

On Thu, Feb 10, 2022 at 12:10 PM Paul E. McKenney <[email protected]> wrote:
>
> On Thu, Feb 10, 2022 at 02:27:11PM -0500, Waiman Long wrote:
> > On 2/10/22 14:14, Paul E. McKenney wrote:
> > > On Thu, Feb 10, 2022 at 10:13:53AM +0100, Peter Zijlstra wrote:
> > > > On Wed, Feb 09, 2022 at 04:32:58PM -0800, Namhyung Kim wrote:
> > > > > On Wed, Feb 9, 2022 at 1:09 AM Peter Zijlstra <[email protected]> wrote:
> > > > > > On Tue, Feb 08, 2022 at 10:41:56AM -0800, Namhyung Kim wrote:
> > > > > >
> > > > > > > Eventually I'm mostly interested in the contended locks only and I
> > > > > > > want to reduce the overhead in the fast path. By moving that, it'd be
> > > > > > > easy to track contended locks with timing by using two tracepoints.
> > > > > > So why not put in two new tracepoints and call it a day?
> > > > > >
> > > > > > Why muck about with all that lockdep stuff just to preserve the name
> > > > > > (and in the process continue to blow up data structures etc..). This
> > > > > > leaves distros in a bind, will they enable this config and provide
> > > > > > tracepoints while bloating the data structures and destroying things
> > > > > > like lockref (which relies on sizeof(spinlock_t)), or not provide this
> > > > > > at all.
> > > > > If it's only lockref, is it possible to change it to use arch_spinlock_t
> > > > > so that it can remain in 4 bytes? It'd be really nice if we can keep
> > > > > spin lock size, but it'd be easier to carry the name with it for
> > > > > analysis IMHO.
> > > > It's just vile and disgusting to blow up the lock size for convenience
> > > > like this.
> > > >
> > > > And no, there's more of that around. A lot of effort has been spend to
> > > > make sure spinlocks are 32bit and we're not going to give that up for
> > > > something as daft as this.
> > > >
> > > > Just think harder on the analysis side. Like said; I'm thinking the
> > > > caller IP should be good enough most of the time.
> > >
> > > Another option is to keep any additional storage in a separate data
> > > structure keyed off of lock address, lockdep class, or whatever.
> > >
> > > Whether or not this is a -good- option, well, who knows? ;-)
> >
> > I have suggested that too. Unfortunately, I was replying to an email with
> > your wrong email address. So you might not have received it.
>
> Plus I was too lazy to go look at lore. ;-)

Sorry for the noise about the email address in the first place.
It has been so long since the last time I sent you a patch.

Thanks,
Namhyung