From: Nicolai Hähnle
To: linux-kernel@vger.kernel.org
Cc: Nicolai Hähnle, Peter Zijlstra, Ingo Molnar, Chris Wilson,
    Maarten Lankhorst, dri-devel@lists.freedesktop.org,
    Nicolai Hähnle
Subject: [PATCH 2/4] locking/ww_mutex: Remove redundant wakeups in ww_mutex_set_context_slowpath
Date: Wed, 23 Nov 2016 12:25:23 +0100
Message-Id: <1479900325-28358-2-git-send-email-nhaehnle@gmail.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1479900325-28358-1-git-send-email-nhaehnle@gmail.com>
References: <1479900325-28358-1-git-send-email-nhaehnle@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: Nicolai Hähnle

When ww_mutex_set_context_slowpath runs, we are in one of two situations:

1. The current task was woken up by ww_mutex_unlock.

2. The current task is racing with ww_mutex_unlock: we entered the slow
   path while lock->base.count <= 0, but skipped the wait in
   __mutex_lock_common.

In both cases, every task currently contending for the lock is either
racing as in point 2 and blocked on lock->wait_lock, or it has been (or
will be) woken up by ww_mutex_unlock. Either way, the contenders will
check their stamp against ours without having to be woken up again, so
the wakeup loop in ww_mutex_set_context_slowpath is redundant.

This optimization is only possible because of the changed behavior of
ww_mutex_unlock introduced by the previous patch ("locking/ww_mutex: Fix
a deadlock affecting ww_mutexes").

Cc: Peter Zijlstra
Cc: Ingo Molnar
Cc: Chris Wilson
Cc: Maarten Lankhorst
Cc: dri-devel@lists.freedesktop.org
Signed-off-by: Nicolai Hähnle
---
 kernel/locking/mutex.c | 17 ++++-------------
 1 file changed, 4 insertions(+), 13 deletions(-)

diff --git a/kernel/locking/mutex.c b/kernel/locking/mutex.c
index 7fbf9b4..7c09376 100644
--- a/kernel/locking/mutex.c
+++ b/kernel/locking/mutex.c
@@ -192,8 +192,10 @@ ww_mutex_set_context_fastpath(struct ww_mutex *lock,
 }
 
 /*
- * After acquiring lock in the slowpath set ctx and wake up any
- * waiters so they can recheck.
+ * After acquiring lock in the slowpath set ctx.
+ *
+ * Unlike the fast path, other waiters are already woken up by ww_mutex_unlock,
+ * so we don't have to do it again here.
  *
  * Callers must hold the mutex wait_lock.
  */
@@ -201,19 +203,8 @@ static __always_inline void
 ww_mutex_set_context_slowpath(struct ww_mutex *lock,
 			      struct ww_acquire_ctx *ctx)
 {
-	struct mutex_waiter *cur;
-
 	ww_mutex_lock_acquired(lock, ctx);
 	lock->ctx = ctx;
-
-	/*
-	 * Give any possible sleeping processes the chance to wake up,
-	 * so they can recheck if they have to back off.
-	 */
-	list_for_each_entry(cur, &lock->base.wait_list, list) {
-		debug_mutex_wake_waiter(&lock->base, cur);
-		wake_up_process(cur->task);
-	}
 }
 
 #ifdef CONFIG_MUTEX_SPIN_ON_OWNER
-- 
2.7.4
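
For context, the re-check referred to in the commit message is the stamp
comparison a contending task performs in the mutex slow path once it holds
lock->wait_lock again. Below is a minimal sketch of that comparison, modeled
on the __ww_mutex_lock_check_stamp helper in this era's kernel/locking/mutex.c;
the function name ww_mutex_check_stamp_sketch and the exact details are
illustrative assumptions, not the code of the tree this patch applies to.

/*
 * Sketch of the back-off check a waiter runs after being woken by
 * ww_mutex_unlock (or after racing into the slow path). Illustrative only;
 * modeled on __ww_mutex_lock_check_stamp().
 */
static inline int
ww_mutex_check_stamp_sketch(struct mutex *lock, struct ww_acquire_ctx *ctx)
{
	struct ww_mutex *ww = container_of(lock, struct ww_mutex, base);
	struct ww_acquire_ctx *hold_ctx = READ_ONCE(ww->ctx);

	/* No holder with an acquire context: nothing to back off from. */
	if (!hold_ctx)
		return 0;

	/*
	 * Our stamp is younger (larger) than the holder's, so the holder wins
	 * the wait/wound ordering and we must back off with -EDEADLK.
	 * Ties are broken by context pointer.
	 */
	if (ctx->stamp - hold_ctx->stamp <= LONG_MAX &&
	    (ctx->stamp != hold_ctx->stamp || ctx > hold_ctx))
		return -EDEADLK;

	return 0;
}

A waiter that sees -EDEADLK releases the other ww_mutexes it holds and
retries with the same context, which is why a single round of wakeups from
ww_mutex_unlock is enough and the extra loop removed here is not needed.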