When examining a contended spinlock in a crash dump, we can trace out the
list of CPUs waiting for the lock by following the linked list of
mcs_spinlock structures. However, the actual owner of the lock is not
recorded there, making it hard to figure out which CPU currently holds the
lock.

Make this information easier to find by saving the lock owner CPU into the
mcs_spinlock structure of the new MCS lock owner, if available, when
acquiring the lock in the qspinlock slowpath. We can then follow the linked
list of mcs_spinlock structures to the end to get an encoded CPU number of
the lock owner, if set.
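
For illustration only (not part of this patch), a hypothetical helper that
decodes the stored value back into a CPU number could look like this,
assuming the "owner CPU + 1" encoding introduced below:

	/* Sketch: decode the owner CPU recorded in a waiter's node. */
	static inline int mcs_encoded_owner_cpu(const struct mcs_spinlock *node)
	{
		return node->locked ? node->locked - 1 : -1;	/* -1: not recorded */
	}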

This owner information is still not available when the lock is acquired
directly in the fast path or in the pending code path. There is no easy
way around that.

The additional cost of getting the current CPU number in the slowpath
should be minimal, as the value should already be in a hot cacheline.

Signed-off-by: Waiman Long <[email protected]>
---
kernel/locking/mcs_spinlock.h | 10 ++++++++--
1 file changed, 8 insertions(+), 2 deletions(-)
diff --git a/kernel/locking/mcs_spinlock.h b/kernel/locking/mcs_spinlock.h
index 85251d8771d9..ac0ed0a8f028 100644
--- a/kernel/locking/mcs_spinlock.h
+++ b/kernel/locking/mcs_spinlock.h
@@ -13,11 +13,17 @@
 #ifndef __LINUX_MCS_SPINLOCK_H
 #define __LINUX_MCS_SPINLOCK_H
 
+/*
+ * Save an encoded version of the current MCS lock owner CPU to the
+ * mcs_spinlock structure of the next lock owner.
+ */
+#define MCS_LOCKED (smp_processor_id() + 1)
+
 #include <asm/mcs_spinlock.h>
 
 struct mcs_spinlock {
 	struct mcs_spinlock *next;
-	int locked; /* 1 if lock acquired */
+	int locked; /* non-zero if lock acquired */
 	int count; /* nesting count, see qspinlock.c */
 };
 
@@ -42,7 +48,7 @@ do { \
  * unlocking.
  */
 #define arch_mcs_spin_unlock_contended(l) \
-	smp_store_release((l), 1)
+	smp_store_release((l), MCS_LOCKED)
 #endif
 
 /*
--
2.39.3
On 5/3/24 17:59, Waiman Long wrote:
> When examining a contended spinlock in a crash dump, we can trace out the
> list of lock waiter CPUs waiting for the lock by following the linked
> list of mcs_spinlock structures. However, the actual owner of the lock
> is not there making it hard to figure out who the current lock owner is.
>
> Make it easier to figure out this information by saving the lock owner
> CPU into the mcs_spinlock structure of new MCS lock owner, if available,
> when acquiring the lock in the qspinlock slowpath. We can then follow
> the linked list of mcs_spinlock structures to the end to get an encoded
> CPU number of the lock owner, if set.
>
> This owner information is still not available when the lock is acquired
> directly in the fast path or in the pending code path. There is no easy
> way around that.
>
> The additional cost to get the current CPU number in the slowpath should
> be minimal as it should be in a hot cacheline.
>
> Signed-off-by: Waiman Long <[email protected]>
Oh, I forgot that the mcs_spinlock structure carries no backward
information. Please ignore this patch; I will send an updated one later.
Cheers,
Longman