From: Nicolai Hähnle
To: linux-kernel@vger.kernel.org
Cc: Nicolai Hähnle, Peter Zijlstra, Ingo Molnar, Maarten Lankhorst,
	Daniel Vetter, Chris Wilson, dri-devel@lists.freedesktop.org
Subject: [PATCH v2 08/11] locking/ww_mutex: Yield to other waiters from optimistic spin
Date: Thu, 1 Dec 2016 15:06:51 +0100
Message-Id: <1480601214-26583-9-git-send-email-nhaehnle@gmail.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1480601214-26583-1-git-send-email-nhaehnle@gmail.com>
References: <1480601214-26583-1-git-send-email-nhaehnle@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Sender: linux-kernel-owner@vger.kernel.org
X-Mailing-List: linux-kernel@vger.kernel.org

From: Nicolai Hähnle

Lock stealing is less beneficial for w/w mutexes, since we may just end up
backing off if we stole the lock from a thread with an earlier acquire
stamp that already holds another w/w mutex that we also need. So don't
spin optimistically unless we are sure that there is no other waiter that
might cause us to back off.

Median timings taken of a contention-heavy GPU workload:

Before:
real    0m52.946s
user    0m7.272s
sys     1m55.964s

After:
real    0m53.086s
user    0m7.360s
sys     1m46.204s

This particular workload still spends 20%-25% of its CPU time in
mutex_spin_on_owner according to perf, but my attempts to further reduce
this spinning based on various heuristics all led to an increase in
measured wall time despite the decrease in sys time.

Cc: Peter Zijlstra
Cc: Ingo Molnar
Cc: Maarten Lankhorst
Cc: Daniel Vetter
Cc: Chris Wilson
Cc: dri-devel@lists.freedesktop.org
Signed-off-by: Nicolai Hähnle
---
 kernel/locking/mutex.c | 48 ++++++++++++++++++++++++++++++++++++++----------
 1 file changed, 38 insertions(+), 10 deletions(-)
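
For background, the back-off cost mentioned above comes from the wound/wait
acquire pattern that ww_mutex users follow. Below is a minimal sketch using
the in-tree ww_mutex API (ww_acquire_init/ww_mutex_lock/ww_mutex_lock_slow);
demo_ww_class, demo_lock_pair and the two-lock shape are illustrative names
made up for this note, not code from this series:

#include <linux/kernel.h>
#include <linux/ww_mutex.h>

static DEFINE_WW_CLASS(demo_ww_class);

/*
 * Illustration only: take two ww_mutexes in the wound/wait style.  When a
 * context with an earlier stamp owns one of them, ww_mutex_lock() returns
 * -EDEADLK and we must drop everything we already hold before sleeping on
 * the contended lock -- that is the "backing off" referred to above.
 */
static void demo_lock_pair(struct ww_mutex *m1, struct ww_mutex *m2,
			   struct ww_acquire_ctx *ctx)
{
	ww_acquire_init(ctx, &demo_ww_class);

	/* The first lock of a fresh context cannot return -EDEADLK. */
	ww_mutex_lock(m1, ctx);

	while (ww_mutex_lock(m2, ctx) == -EDEADLK) {
		/*
		 * An earlier-stamp context owns m2: back off by releasing
		 * m1, sleep until m2 becomes free, then retry with the
		 * formerly contended lock taken first.
		 */
		ww_mutex_unlock(m1);
		ww_mutex_lock_slow(m2, ctx);
		swap(m1, m2);
	}

	ww_acquire_done(ctx);
	/* ... use the objects, then ww_mutex_unlock() both and ww_acquire_fini(ctx) ... */
}

Stealing a ww_mutex from an earlier-stamp waiter that is already inside this
pattern just forces that pattern's -EDEADLK path, hence the patch below only
spins when no such waiter can be hurt.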
diff --git a/kernel/locking/mutex.c b/kernel/locking/mutex.c
index d2ca447..296605c 100644
--- a/kernel/locking/mutex.c
+++ b/kernel/locking/mutex.c
@@ -374,7 +374,8 @@ ww_mutex_set_context_slowpath(struct ww_mutex *lock,
  */
 static noinline
 bool mutex_spin_on_owner(struct mutex *lock, struct task_struct *owner,
-			 bool use_ww_ctx, struct ww_acquire_ctx *ww_ctx)
+			 bool use_ww_ctx, struct ww_acquire_ctx *ww_ctx,
+			 struct mutex_waiter *waiter)
 {
 	bool ret = true;
 
@@ -397,7 +398,7 @@ bool mutex_spin_on_owner(struct mutex *lock, struct task_struct *owner,
 			break;
 		}
 
-		if (use_ww_ctx && ww_ctx && ww_ctx->acquired > 0) {
+		if (use_ww_ctx && ww_ctx) {
 			struct ww_mutex *ww;
 
 			ww = container_of(lock, struct ww_mutex, base);
@@ -413,7 +414,30 @@ bool mutex_spin_on_owner(struct mutex *lock, struct task_struct *owner,
 			 * Check this in every inner iteration because we may
 			 * be racing against another thread's ww_mutex_lock.
 			 */
-			if (READ_ONCE(ww->ctx)) {
+			if (ww_ctx->acquired > 0 && READ_ONCE(ww->ctx)) {
+				ret = false;
+				break;
+			}
+
+			/*
+			 * If we aren't on the wait list yet, cancel the spin
+			 * if there are waiters. We want to avoid stealing the
+			 * lock from a waiter with an earlier stamp, since the
+			 * other thread may already own a lock that we also
+			 * need.
+			 */
+			if (!waiter &&
+			    (atomic_long_read(&lock->owner) &
+			     MUTEX_FLAG_WAITERS)) {
+				ret = false;
+				break;
+			}
+
+			/*
+			 * Similarly, stop spinning if we are no longer the
+			 * first waiter.
+			 */
+			if (waiter && !__mutex_waiter_is_first(lock, waiter)) {
 				ret = false;
 				break;
 			}
@@ -479,7 +503,8 @@ static inline int mutex_can_spin_on_owner(struct mutex *lock)
  */
 static bool mutex_optimistic_spin(struct mutex *lock,
 				  struct ww_acquire_ctx *ww_ctx,
-				  const bool use_ww_ctx, const bool waiter)
+				  const bool use_ww_ctx,
+				  struct mutex_waiter *waiter)
 {
 	struct task_struct *task = current;
 
@@ -518,12 +543,12 @@ static bool mutex_optimistic_spin(struct mutex *lock,
 			}
 
 			if (!mutex_spin_on_owner(lock, owner, use_ww_ctx,
-						 ww_ctx))
+						 ww_ctx, waiter))
 				goto fail_unlock;
 		}
 
 		/* Try to acquire the mutex if it is unlocked. */
-		if (__mutex_trylock(lock, waiter))
+		if (__mutex_trylock(lock, waiter != NULL))
 			break;
 
 		/*
@@ -565,7 +590,8 @@ static bool mutex_optimistic_spin(struct mutex *lock,
 #else
 static bool mutex_optimistic_spin(struct mutex *lock,
 				  struct ww_acquire_ctx *ww_ctx,
-				  const bool use_ww_ctx, const bool waiter)
+				  const bool use_ww_ctx,
+				  struct mutex_waiter *waiter)
 {
 	return false;
 }
@@ -736,7 +762,7 @@ __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass,
 	mutex_acquire_nest(&lock->dep_map, subclass, 0, nest_lock, ip);
 
 	if (__mutex_trylock(lock, false) ||
-	    mutex_optimistic_spin(lock, ww_ctx, use_ww_ctx, false)) {
+	    mutex_optimistic_spin(lock, ww_ctx, use_ww_ctx, NULL)) {
 		/* got the lock, yay! */
 		lock_acquired(&lock->dep_map, ip);
 		if (use_ww_ctx && ww_ctx) {
@@ -841,8 +867,10 @@ __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass,
 		 * state back to RUNNING and fall through the next schedule(),
 		 * or we must see its unlock and acquire.
 		 */
-		if ((first && mutex_optimistic_spin(lock, ww_ctx, use_ww_ctx, true)) ||
-		    __mutex_trylock(lock, use_ww_ctx || first))
+		if ((first &&
+		     mutex_optimistic_spin(lock, ww_ctx, use_ww_ctx,
+					   &waiter)) ||
+		    __mutex_trylock(lock, use_ww_ctx || first))
 			break;
 
 		spin_lock_mutex(&lock->wait_lock, flags);
-- 
2.7.4
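
For readers skimming the diff, the spin-cancel policy it adds to
mutex_spin_on_owner() can be summarized by the standalone sketch below.
All type, struct and function names here (spin_state, keep_spinning, the
boolean fields) are stand-ins for illustration, not kernel code, and the
sketch omits the owner-running and need_resched() checks that the real
loop also performs; only the decision logic mirrors the hunks above.

#include <stdbool.h>

struct spin_state {
	bool has_ww_ctx;	/* acquiring under a ww_acquire_ctx */
	unsigned int acquired;	/* ww mutexes already held by that ctx */
	bool lock_has_ww_ctx;	/* ww->ctx already set by another thread */
	bool lock_has_waiters;	/* MUTEX_FLAG_WAITERS set in the owner word */
	bool on_wait_list;	/* we already queued a mutex_waiter */
	bool first_waiter;	/* ... and it is at the head of the list */
};

/* Return true if optimistic spinning should keep going. */
static bool keep_spinning(const struct spin_state *s)
{
	if (!s->has_ww_ctx)
		return true;

	/* Another ww ctx owns the lock and we could be made to back off. */
	if (s->acquired > 0 && s->lock_has_ww_ctx)
		return false;

	/* Not on the wait list: don't steal the lock from queued waiters. */
	if (!s->on_wait_list && s->lock_has_waiters)
		return false;

	/* On the wait list but no longer first: yield to the first waiter. */
	if (s->on_wait_list && !s->first_waiter)
		return false;

	return true;
}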