2019-07-15 19:27:54

by Alex Kogan

Subject: [PATCH v3 2/5] locking/qspinlock: Refactor the qspinlock slow path

Move some of the code that manipulates the spinlock into separate functions.
This will make it easier to integrate alternative ways of manipulating
the lock.

Signed-off-by: Alex Kogan <[email protected]>
Reviewed-by: Steve Sistare <[email protected]>
---
kernel/locking/qspinlock.c | 40 ++++++++++++++++++++++++++++++++++++++--
1 file changed, 38 insertions(+), 2 deletions(-)

diff --git a/kernel/locking/qspinlock.c b/kernel/locking/qspinlock.c
index 961781624638..5668466b3006 100644
--- a/kernel/locking/qspinlock.c
+++ b/kernel/locking/qspinlock.c
@@ -297,6 +297,36 @@ static __always_inline u32 __pv_wait_head_or_lock(struct qspinlock *lock,
#define queued_spin_lock_slowpath native_queued_spin_lock_slowpath
#endif

+/*
+ * set_locked_empty_mcs - Try to set the spinlock value to _Q_LOCKED_VAL,
+ * and by doing that unlock the MCS lock when its waiting queue is empty
+ * @lock: Pointer to queued spinlock structure
+ * @val: Current value of the lock
+ * @node: Pointer to the MCS node of the lock holder
+ *
+ * *,*,* -> 0,0,1
+ */
+static __always_inline bool __set_locked_empty_mcs(struct qspinlock *lock,
+ u32 val,
+ struct mcs_spinlock *node)
+{
+ return atomic_try_cmpxchg_relaxed(&lock->val, &val, _Q_LOCKED_VAL);
+}
+
+/*
+ * pass_mcs_lock - pass the MCS lock to the next waiter
+ * @node: Pointer to the MCS node of the lock holder
+ * @next: Pointer to the MCS node of the first waiter in the MCS queue
+ */
+static __always_inline void __pass_mcs_lock(struct mcs_spinlock *node,
+ struct mcs_spinlock *next)
+{
+ arch_mcs_spin_unlock_contended(&next->locked, 1);
+}
+
+#define set_locked_empty_mcs __set_locked_empty_mcs
+#define pass_mcs_lock __pass_mcs_lock
+
#endif /* _GEN_PV_LOCK_SLOWPATH */

/**
@@ -541,7 +571,7 @@ void queued_spin_lock_slowpath(struct qspinlock *lock, u32 val)
* PENDING will make the uncontended transition fail.
*/
if ((val & _Q_TAIL_MASK) == tail) {
- if (atomic_try_cmpxchg_relaxed(&lock->val, &val, _Q_LOCKED_VAL))
+ if (set_locked_empty_mcs(lock, val, node))
goto release; /* No contention */
}

@@ -558,7 +588,7 @@ void queued_spin_lock_slowpath(struct qspinlock *lock, u32 val)
if (!next)
next = smp_cond_load_relaxed(&node->next, (VAL));

- arch_mcs_spin_unlock_contended(&next->locked, 1);
+ pass_mcs_lock(node, next);
pv_kick_node(lock, next);

release:
@@ -583,6 +613,12 @@ EXPORT_SYMBOL(queued_spin_lock_slowpath);
#undef pv_kick_node
#undef pv_wait_head_or_lock

+#undef set_locked_empty_mcs
+#define set_locked_empty_mcs __set_locked_empty_mcs
+
+#undef pass_mcs_lock
+#define pass_mcs_lock __pass_mcs_lock
+
#undef queued_spin_lock_slowpath
#define queued_spin_lock_slowpath __pv_queued_spin_lock_slowpath

--
2.11.0 (Apple Git-81)
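
The "alternative ways to manipulate that lock" in the commit message rely on
the trick qspinlock.c already uses for the paravirt variant (visible in the
last hunk above): the slow path only ever calls the hooks through #define'd
names, and the file re-#includes itself with the macros redefined to get a
second compilation pass. A minimal stand-alone sketch of that pattern — all
names here are illustrative, none are from the series; the kernel gets its
second pass via #include "qspinlock.c" under _GEN_PV_LOCK_SLOWPATH, while
this sketch simply writes the slow-path body out twice:

#include <stdbool.h>
#include <stdio.h>

static bool __try_clear_tail_native(unsigned int val)
{
	printf("native tail-clearing policy, val=0x%x\n", val);
	return true;
}

static bool __try_clear_tail_alt(unsigned int val)
{
	printf("alternative tail-clearing policy, val=0x%x\n", val);
	return true;
}

/* First pass: the slow path binds to the native hook. */
#define try_clear_tail	__try_clear_tail_native

static void slowpath_native(unsigned int val)
{
	try_clear_tail(val);	/* expands to the native variant */
}

/* Second pass: redefine the hook and compile the same body again. */
#undef try_clear_tail
#define try_clear_tail	__try_clear_tail_alt

static void slowpath_alt(unsigned int val)
{
	try_clear_tail(val);	/* now expands to the alternative */
}

int main(void)
{
	slowpath_native(1);
	slowpath_alt(1);
	return 0;
}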


2019-07-16 10:23:44

by Peter Zijlstra

Subject: Re: [PATCH v3 2/5] locking/qspinlock: Refactor the qspinlock slow path

On Mon, Jul 15, 2019 at 03:25:33PM -0400, Alex Kogan wrote:

> +/*
> + * set_locked_empty_mcs - Try to set the spinlock value to _Q_LOCKED_VAL,
> + * and by doing that unlock the MCS lock when its waiting queue is empty
> + * @lock: Pointer to queued spinlock structure
> + * @val: Current value of the lock
> + * @node: Pointer to the MCS node of the lock holder
> + *
> + * *,*,* -> 0,0,1
> + */
> +static __always_inline bool __set_locked_empty_mcs(struct qspinlock *lock,
> + u32 val,
> + struct mcs_spinlock *node)
> +{
> + return atomic_try_cmpxchg_relaxed(&lock->val, &val, _Q_LOCKED_VAL);
> +}

That name is nonsense. It should be something like:

static __always_inline bool __try_clear_tail(...)


> +/*
> + * pass_mcs_lock - pass the MCS lock to the next waiter
> + * @node: Pointer to the MCS node of the lock holder
> + * @next: Pointer to the MCS node of the first waiter in the MCS queue
> + */
> +static __always_inline void __pass_mcs_lock(struct mcs_spinlock *node,
> + struct mcs_spinlock *next)
> +{
> + arch_mcs_spin_unlock_contended(&next->locked, 1);
> +}

I'm not entirely happy with that name either; but it's not horrible like
the other one. Why not mcs_spin_unlock_contended()?

2019-07-16 14:55:55

by Alex Kogan

Subject: Re: [PATCH v3 2/5] locking/qspinlock: Refactor the qspinlock slow path

On Jul 16, 2019, at 6:20 AM, Peter Zijlstra <[email protected]> wrote:
>
> On Mon, Jul 15, 2019 at 03:25:33PM -0400, Alex Kogan wrote:
>
>> +/*
>> + * set_locked_empty_mcs - Try to set the spinlock value to _Q_LOCKED_VAL,
>> + * and by doing that unlock the MCS lock when its waiting queue is empty
>> + * @lock: Pointer to queued spinlock structure
>> + * @val: Current value of the lock
>> + * @node: Pointer to the MCS node of the lock holder
>> + *
>> + * *,*,* -> 0,0,1
>> + */
>> +static __always_inline bool __set_locked_empty_mcs(struct qspinlock *lock,
>> + u32 val,
>> + struct mcs_spinlock *node)
>> +{
>> + return atomic_try_cmpxchg_relaxed(&lock->val, &val, _Q_LOCKED_VAL);
>> +}
>
> That name is nonsense. It should be something like:
>
> static __always_inline bool __try_clear_tail(...)

We already have set_locked(), so I was trying to convey the fact that we are
doing the same here, but only when the MCS chain is empty.

I can use __try_clear_tail() instead.
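
(For reference, set_locked() in qspinlock.c is just an unconditional byte
store, roughly:

static __always_inline void set_locked(struct qspinlock *lock)
{
	WRITE_ONCE(lock->locked, _Q_LOCKED_VAL);
}

so the parallel only goes so far — set_locked() can blindly write the locked
byte, while the helper here has to cmpxchg the whole word to be sure the
queue is still empty.)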

>
>
>> +/*
>> + * pass_mcs_lock - pass the MCS lock to the next waiter
>> + * @node: Pointer to the MCS node of the lock holder
>> + * @next: Pointer to the MCS node of the first waiter in the MCS queue
>> + */
>> +static __always_inline void __pass_mcs_lock(struct mcs_spinlock *node,
>> + struct mcs_spinlock *next)
>> +{
>> + arch_mcs_spin_unlock_contended(&next->locked, 1);
>> +}
>
> I'm not entirely happy with that name either; but it's not horrible like
> the other one. Why not mcs_spin_unlock_contended() ?

Sure, I can use mcs_spin_unlock_contended() instead.

Thanks,
-- Alex

2019-07-16 15:59:54

by Peter Zijlstra

Subject: Re: [PATCH v3 2/5] locking/qspinlock: Refactor the qspinlock slow path

On Tue, Jul 16, 2019 at 10:53:02AM -0400, Alex Kogan wrote:
> On Jul 16, 2019, at 6:20 AM, Peter Zijlstra <[email protected]> wrote:
> >
> > On Mon, Jul 15, 2019 at 03:25:33PM -0400, Alex Kogan wrote:
> >
> >> +/*
> >> + * set_locked_empty_mcs - Try to set the spinlock value to _Q_LOCKED_VAL,
> >> + * and by doing that unlock the MCS lock when its waiting queue is empty
> >> + * @lock: Pointer to queued spinlock structure
> >> + * @val: Current value of the lock
> >> + * @node: Pointer to the MCS node of the lock holder
> >> + *
> >> + * *,*,* -> 0,0,1
> >> + */
> >> +static __always_inline bool __set_locked_empty_mcs(struct qspinlock *lock,
> >> + u32 val,
> >> + struct mcs_spinlock *node)
> >> +{
> >> + return atomic_try_cmpxchg_relaxed(&lock->val, &val, _Q_LOCKED_VAL);
> >> +}
> >
> > That name is nonsense. It should be something like:
> >
> > static __always_inline bool __try_clear_tail(...)
>
> We already have set_locked(), so I was trying to convey the fact that we are
> doing the same here, but only when the MCS chain is empty.
>
> I can use __try_clear_tail() instead.

Thing is, we go into this function with: *,0,1 and are trying to obtain
0,0,1. IOW, we're trying to clear the tail, while preserving pending and
locked.
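
Concretely, the transition Peter describes can be sketched in stand-alone
C11. Field offsets below follow qspinlock_types.h for NR_CPUS < 16K (locked
is byte 0, pending byte 1, tail the upper 16 bits); the tail encoding is
made up for the example, and the failure cases in the comment echo the
"PENDING will make the uncontended transition fail" note in the patch
context:

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

#define _Q_LOCKED_VAL	(1U << 0)	/* locked byte, bit 0 */
#define _Q_PENDING_VAL	(1U << 8)	/* pending byte */
#define _Q_TAIL_OFFSET	16
#define _Q_TAIL_MASK	(0xffffU << _Q_TAIL_OFFSET)

int main(void)
{
	/* We are the MCS head, we hold the lock, and the tail still
	 * points at us: (tail, pending, locked) = (*, 0, 1). */
	unsigned int tail = 5U << _Q_TAIL_OFFSET;	/* made-up tail */
	_Atomic unsigned int lockval = tail | _Q_LOCKED_VAL;

	/* Try *,0,1 -> 0,0,1: clear the tail iff nothing has changed.
	 * A newly queued waiter (tail changed) or a freshly set pending
	 * bit makes the cmpxchg fail, and the lock must be handed on to
	 * the next waiter instead. */
	unsigned int expected = tail | _Q_LOCKED_VAL;
	bool uncontended = atomic_compare_exchange_strong(&lockval,
							  &expected,
							  _Q_LOCKED_VAL);

	printf("uncontended exit: %s (lock word now 0x%08x)\n",
	       uncontended ? "yes" : "no", atomic_load(&lockval));
	return 0;
}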