2014-02-10 20:43:34

by Peter Zijlstra

Subject: [PATCH 3/8] mutex: Modify the way optimistic spinners are queued

The mutex->spin_mlock was introduced in order to ensure that only 1 thread
spins for lock acquisition at a time to reduce cache line contention. When
lock->owner is NULL and the lock->count is still not 1, the spinner(s) will
continually release and obtain the lock->spin_mlock. This can generate
quite a bit of overhead/contention, and also might just delay the spinner
from getting the lock.

This patch modifies the way optimistic spinners are queued by queuing once
before entering the optimistic spinning loop, as opposed to acquiring the
spin_mlock before every call to mutex_spin_on_owner(). So in situations where
the spinner requires a few extra spins before obtaining the lock, there will
only be 1 spinner trying to get the lock, and it avoids the overhead of
unnecessarily unlocking and locking the spin_mlock.

Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Signed-off-by: Jason Low <[email protected]>
Signed-off-by: Peter Zijlstra <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
---
kernel/locking/mutex.c | 17 +++++++----------
1 file changed, 7 insertions(+), 10 deletions(-)

--- a/kernel/locking/mutex.c
+++ b/kernel/locking/mutex.c
@@ -403,9 +403,9 @@ __mutex_lock_common(struct mutex *lock,
 	if (!mutex_can_spin_on_owner(lock))
 		goto slowpath;

+	mcs_spin_lock(&lock->mcs_lock);
 	for (;;) {
 		struct task_struct *owner;
-		struct mcs_spinlock node;

 		if (use_ww_ctx && ww_ctx->acquired > 0) {
 			struct ww_mutex *ww;
@@ -420,19 +420,16 @@ __mutex_lock_common(struct mutex *lock,
 			 * performed the optimistic spinning cannot be done.
 			 */
 			if (ACCESS_ONCE(ww->ctx))
-				goto slowpath;
+				break;
 		}

 		/*
 		 * If there's an owner, wait for it to either
 		 * release the lock or go to sleep.
 		 */
-		mcs_spin_lock(&lock->mcs_lock, &node);
 		owner = ACCESS_ONCE(lock->owner);
-		if (owner && !mutex_spin_on_owner(lock, owner)) {
-			mcs_spin_unlock(&lock->mcs_lock, &node);
-			goto slowpath;
-		}
+		if (owner && !mutex_spin_on_owner(lock, owner))
+			break;

 		if ((atomic_read(&lock->count) == 1) &&
 		    (atomic_cmpxchg(&lock->count, 1, 0) == 1)) {
@@ -445,11 +442,10 @@ __mutex_lock_common(struct mutex *lock,
 			}

 			mutex_set_owner(lock);
-			mcs_spin_unlock(&lock->mcs_lock, &node);
+			mcs_spin_unlock(&lock->mcs_lock);
 			preempt_enable();
 			return 0;
 		}
-		mcs_spin_unlock(&lock->mcs_lock, &node);

 		/*
 		 * When there's no owner, we might have preempted between the
@@ -458,7 +454,7 @@ __mutex_lock_common(struct mutex *lock,
 		 * the owner complete.
 		 */
 		if (!owner && (need_resched() || rt_task(task)))
-			goto slowpath;
+			break;

 		/*
 		 * The cpu_relax() call is a compiler barrier which forces
@@ -468,6 +464,7 @@ __mutex_lock_common(struct mutex *lock,
 		 */
 		arch_mutex_cpu_relax();
 	}
+	mcs_spin_unlock(&lock->mcs_lock);
 slowpath:
 #endif
 	spin_lock_mutex(&lock->wait_lock, flags);


2014-02-11 01:33:28

by Jason Low

Subject: Re: [PATCH 3/8] mutex: Modify the way optimistic spinners are queued

On Mon, 2014-02-10 at 20:58 +0100, Peter Zijlstra wrote:
> Cc: [email protected]
> Cc: [email protected]
> Cc: [email protected]
> Cc: [email protected]
> Cc: [email protected]
> Cc: [email protected]
> Cc: [email protected]
> Cc: [email protected]
> Cc: [email protected]
> Cc: [email protected]
> Cc: [email protected]
> Cc: [email protected]
> Cc: [email protected]
> Signed-off-by: Jason Low <[email protected]>
> Signed-off-by: Peter Zijlstra <[email protected]>
> Link: http://lkml.kernel.org/r/[email protected]
> ---
> kernel/locking/mutex.c | 17 +++++++----------
> 1 file changed, 7 insertions(+), 10 deletions(-)
>
> --- a/kernel/locking/mutex.c
> +++ b/kernel/locking/mutex.c
> @@ -403,9 +403,9 @@ __mutex_lock_common(struct mutex *lock,
> 	if (!mutex_can_spin_on_owner(lock))
> 		goto slowpath;
>
> +	mcs_spin_lock(&lock->mcs_lock);

Where did the mcs node go? :)

2014-02-11 07:21:12

by Peter Zijlstra

Subject: Re: [PATCH 3/8] mutex: Modify the way optimistic spinners are queued

On Mon, Feb 10, 2014 at 05:33:18PM -0800, Jason Low wrote:

> > + mcs_spin_lock(&lock->mcs_lock);
>
> Where did the mcs node go? :)

Bugger, that's what I get for not compiling each patch in the series..
:-/
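
The fix itself is not shown in this thread; below is a minimal sketch of the
presumably intended shape, keeping the MCS node but declaring it once before
the loop and passing it to the two-argument mcs_spin_lock()/mcs_spin_unlock()
calls that the removed lines used (ww_mutex handling and lockdep annotations
are elided):

	struct mcs_spinlock node;

	if (!mutex_can_spin_on_owner(lock))
		goto slowpath;

	mcs_spin_lock(&lock->mcs_lock, &node);
	for (;;) {
		struct task_struct *owner;

		/*
		 * If there's an owner, wait for it to either
		 * release the lock or go to sleep.
		 */
		owner = ACCESS_ONCE(lock->owner);
		if (owner && !mutex_spin_on_owner(lock, owner))
			break;

		/* try to acquire the mutex if it is unlocked */
		if ((atomic_read(&lock->count) == 1) &&
		    (atomic_cmpxchg(&lock->count, 1, 0) == 1)) {
			mutex_set_owner(lock);
			mcs_spin_unlock(&lock->mcs_lock, &node);
			preempt_enable();
			return 0;
		}

		if (!owner && (need_resched() || rt_task(task)))
			break;

		arch_mutex_cpu_relax();
	}
	mcs_spin_unlock(&lock->mcs_lock, &node);
slowpath: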