Date: Thu, 13 Oct 2016 16:28:01 +0100
From: Will Deacon
To: Peter Zijlstra
Cc: Linus Torvalds, Waiman Long, Jason Low, Ding Tianhong,
	Thomas Gleixner, Ingo Molnar, Imre Deak,
	Linux Kernel Mailing List, Davidlohr Bueso, Tim Chen,
	Terry Rudd, "Paul E. McKenney", Jason Low, Chris Wilson,
	Daniel Vetter
Subject: Re: [PATCH -v4 8/8] locking/mutex: Enable optimistic spinning of woken waiter
Message-ID: <20161013152801.GD13138@arm.com>
References: <20161007145243.361481786@infradead.org>
	<20161007150211.416377482@infradead.org>
In-Reply-To: <20161007150211.416377482@infradead.org>

On Fri, Oct 07, 2016 at 04:52:51PM +0200, Peter Zijlstra wrote:
> From: Waiman Long
>
> This patch makes the waiter that sets the HANDOFF flag start spinning
> instead of sleeping until the handoff is complete or the owner
> sleeps. Otherwise, the handoff will cause the optimistic spinners to
> abort spinning as the handed-off owner may not be running.
>
> Cc: Ingo Molnar
> Cc: Linus Torvalds
> Cc: Tim Chen
> Cc: Thomas Gleixner
> Cc: Imre Deak
> Cc: Jason Low
> Cc: "Paul E. McKenney"
> Cc: Ding Tianhong
> Cc: Davidlohr Bueso
> Cc: Will Deacon
> Signed-off-by: Waiman Long
> Signed-off-by: Peter Zijlstra (Intel)
> ---
>  kernel/locking/mutex.c | 77 ++++++++++++++++++++++++++++++++++---------------
>  1 file changed, 54 insertions(+), 23 deletions(-)
>
> --- a/kernel/locking/mutex.c
> +++ b/kernel/locking/mutex.c
> @@ -416,24 +416,39 @@ static inline int mutex_can_spin_on_owne
>   *
>   * Returns true when the lock was taken, otherwise false, indicating
>   * that we need to jump to the slowpath and sleep.
> + *
> + * The waiter flag is set to true if the spinner is a waiter in the wait
> + * queue. The waiter-spinner will spin on the lock directly and concurrently
> + * with the spinner at the head of the OSQ, if present, until the owner is
> + * changed to itself.
>   */
>  static bool mutex_optimistic_spin(struct mutex *lock,
> -				  struct ww_acquire_ctx *ww_ctx, const bool use_ww_ctx)
> +				  struct ww_acquire_ctx *ww_ctx,
> +				  const bool use_ww_ctx, const bool waiter)
>  {
>  	struct task_struct *task = current;
>
> -	if (!mutex_can_spin_on_owner(lock))
> -		goto done;
> +	if (!waiter) {
> +		/*
> +		 * The purpose of the mutex_can_spin_on_owner() function is
> +		 * to eliminate the overhead of osq_lock() and osq_unlock()
> +		 * in case spinning isn't possible. As a waiter-spinner
> +		 * is not going to take OSQ lock anyway, there is no need
> +		 * to call mutex_can_spin_on_owner().
> +		 */
> +		if (!mutex_can_spin_on_owner(lock))
> +			goto fail;
>
> -	/*
> -	 * In order to avoid a stampede of mutex spinners trying to
> -	 * acquire the mutex all at once, the spinners need to take a
> -	 * MCS (queued) lock first before spinning on the owner field.
> -	 */
> -	if (!osq_lock(&lock->osq))
> -		goto done;
> +		/*
> +		 * In order to avoid a stampede of mutex spinners trying to
> +		 * acquire the mutex all at once, the spinners need to take a
> +		 * MCS (queued) lock first before spinning on the owner field.
> +		 */
> +		if (!osq_lock(&lock->osq))
> +			goto fail;
> +	}
>
> -	while (true) {
> +	for (;;) {
>  		struct task_struct *owner;
>
>  		if (use_ww_ctx && ww_ctx->acquired > 0) {
> @@ -449,7 +464,7 @@ static bool mutex_optimistic_spin(struct
>  		 * performed the optimistic spinning cannot be done.
>  		 */
>  		if (READ_ONCE(ww->ctx))
> -			break;
> +			goto fail_unlock;
>  		}
>
>  		/*
> @@ -457,15 +472,20 @@ static bool mutex_optimistic_spin(struct
>  		 * release the lock or go to sleep.
>  		 */
>  		owner = __mutex_owner(lock);
> -		if (owner && !mutex_spin_on_owner(lock, owner))
> -			break;
> +		if (owner) {
> +			if (waiter && owner == task) {
> +				smp_mb(); /* ACQUIRE */

Hmm, is this barrier actually needed? This only happens on the handoff
path, and we take the wait_lock immediately after this succeeds anyway.
That control dependency, coupled with the acquire semantics of the
spin_lock, should be sufficient, no?
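To spell that out, the path I have in mind is roughly the following
(paraphrasing the patched mutex_optimistic_spin() and its caller in
__mutex_lock_common(); the break after the smp_mb() is my reading of
the rest of the patch, not quoted from it):

	/* mutex_optimistic_spin(), called with waiter == true */
	owner = __mutex_owner(lock);		/* load of owner */
	if (owner) {
		if (waiter && owner == task) {
			smp_mb();		/* ACQUIRE -- the barrier in question */
			break;			/* lock was handed off to us */
		}
		...
	}

	/* back in the caller, straight after the spin succeeds */
	spin_lock_mutex(&lock->wait_lock, flags);	/* ACQUIRE */
	mutex_remove_waiter(lock, &waiter, task);

The branch on the owner load should stop the spin_lock()'s store from
being speculated ahead of that load, and the lock acquisition orders
everything that comes after it, so the full barrier looks redundant
to me.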
Will