Return-Path: 
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1758053AbcLAOHH (ORCPT );
	Thu, 1 Dec 2016 09:07:07 -0500
Received: from mail-wm0-f68.google.com ([74.125.82.68]:33843 "EHLO
	mail-wm0-f68.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org
	with ESMTP id S1757687AbcLAOHE (ORCPT );
	Thu, 1 Dec 2016 09:07:04 -0500
From: Nicolai Hähnle 
To: linux-kernel@vger.kernel.org
Cc: Nicolai Hähnle ,
	Peter Zijlstra ,
	Ingo Molnar ,
	Maarten Lankhorst ,
	Daniel Vetter ,
	Chris Wilson ,
	dri-devel@lists.freedesktop.org
Subject: [PATCH v2 02/11] locking/ww_mutex: Re-check ww->ctx in the inner optimistic spin loop
Date: Thu, 1 Dec 2016 15:06:45 +0100
Message-Id: <1480601214-26583-3-git-send-email-nhaehnle@gmail.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1480601214-26583-1-git-send-email-nhaehnle@gmail.com>
References: <1480601214-26583-1-git-send-email-nhaehnle@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Sender: linux-kernel-owner@vger.kernel.org
List-ID: 
X-Mailing-List: linux-kernel@vger.kernel.org
Content-Length: 4063
Lines: 133

From: Nicolai Hähnle 

In the following scenario, thread #1 should back off its attempt to
lock ww1 and unlock ww2 (assuming the acquire context stamps are
ordered accordingly).

Thread #0			Thread #1
---------			---------
				successfully lock ww2
set ww1->base.owner
				attempt to lock ww1
				confirm ww1->ctx == NULL
				enter mutex_spin_on_owner
set ww1->ctx

What was likely to happen previously is:

attempt to lock ww2
refuse to spin because
  ww2->ctx != NULL
schedule()
				detect thread #0 is off CPU
				stop optimistic spin
				return -EDEADLK
				unlock ww2
				wakeup thread #0
lock ww2

Now, we are more likely to see:

				detect ww1->ctx != NULL
				stop optimistic spin
				return -EDEADLK
				unlock ww2
successfully lock ww2

... because thread #1 will stop its optimistic spin as soon as possible.

The whole scenario is quite unlikely, since it requires thread #1 to get
between thread #0 setting the owner and setting the ctx. But since we're
idling here anyway, the additional check is basically free.

Found by inspection.

Cc: Peter Zijlstra 
Cc: Ingo Molnar 
Cc: Maarten Lankhorst 
Cc: Daniel Vetter 
Cc: Chris Wilson 
Cc: dri-devel@lists.freedesktop.org
Signed-off-by: Nicolai Hähnle 
---
 kernel/locking/mutex.c | 44 ++++++++++++++++++++++++++------------------
 1 file changed, 26 insertions(+), 18 deletions(-)

diff --git a/kernel/locking/mutex.c b/kernel/locking/mutex.c
index 9b34961..0afa998 100644
--- a/kernel/locking/mutex.c
+++ b/kernel/locking/mutex.c
@@ -350,7 +350,8 @@ ww_mutex_set_context_slowpath(struct ww_mutex *lock,
  * access and not reliable.
  */
 static noinline
-bool mutex_spin_on_owner(struct mutex *lock, struct task_struct *owner)
+bool mutex_spin_on_owner(struct mutex *lock, struct task_struct *owner,
+			 bool use_ww_ctx, struct ww_acquire_ctx *ww_ctx)
 {
 	bool ret = true;
 
@@ -373,6 +374,28 @@ bool mutex_spin_on_owner(struct mutex *lock, struct task_struct *owner)
 			break;
 		}
 
+		if (use_ww_ctx && ww_ctx->acquired > 0) {
+			struct ww_mutex *ww;
+
+			ww = container_of(lock, struct ww_mutex, base);
+
+			/*
+			 * If ww->ctx is set the contents are undefined, only
+			 * by acquiring wait_lock there is a guarantee that
+			 * they are not invalid when reading.
+			 *
+			 * As such, when deadlock detection needs to be
+			 * performed the optimistic spinning cannot be done.
+			 *
+			 * Check this in every inner iteration because we may
+			 * be racing against another thread's ww_mutex_lock.
+			 */
+			if (READ_ONCE(ww->ctx)) {
+				ret = false;
+				break;
+			}
+		}
+
 		cpu_relax();
 	}
 	rcu_read_unlock();
@@ -460,22 +483,6 @@ static bool mutex_optimistic_spin(struct mutex *lock,
 	for (;;) {
 		struct task_struct *owner;
 
-		if (use_ww_ctx && ww_ctx->acquired > 0) {
-			struct ww_mutex *ww;
-
-			ww = container_of(lock, struct ww_mutex, base);
-			/*
-			 * If ww->ctx is set the contents are undefined, only
-			 * by acquiring wait_lock there is a guarantee that
-			 * they are not invalid when reading.
-			 *
-			 * As such, when deadlock detection needs to be
-			 * performed the optimistic spinning cannot be done.
-			 */
-			if (READ_ONCE(ww->ctx))
-				goto fail_unlock;
-		}
-
 		/*
 		 * If there's an owner, wait for it to either
 		 * release the lock or go to sleep.
@@ -487,7 +494,8 @@ static bool mutex_optimistic_spin(struct mutex *lock,
 			break;
 		}
 
-		if (!mutex_spin_on_owner(lock, owner))
+		if (!mutex_spin_on_owner(lock, owner, use_ww_ctx,
+					 ww_ctx))
 			goto fail_unlock;
 	}
-- 
2.7.4
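
For readers who don't have the wound/wait protocol paged in: the backoff
steps in the tables above follow the standard ww_mutex usage pattern from
Documentation/locking/ww-mutex-design.txt. Below is a minimal sketch of
that pattern; the helper lock_two() and the class name example_ww_class
are made up for illustration and are not part of this patch.

#include <linux/kernel.h>	/* for swap() */
#include <linux/ww_mutex.h>

static DEFINE_WW_CLASS(example_ww_class);

/* Acquire two ww_mutexes of the same class, in either order. */
static void lock_two(struct ww_mutex *first, struct ww_mutex *second)
{
	struct ww_acquire_ctx ctx;

	ww_acquire_init(&ctx, &example_ww_class);

	/* The first lock of a context never returns -EDEADLK. */
	ww_mutex_lock(first, &ctx);

	while (ww_mutex_lock(second, &ctx) == -EDEADLK) {
		/*
		 * An older context owns "second": back off by dropping
		 * everything we hold. This is thread #1's
		 * "return -EDEADLK / unlock ww2" step above.
		 */
		ww_mutex_unlock(first);
		/* Sleep until the older context is done with the lock. */
		ww_mutex_lock_slow(second, &ctx);
		/* Retry, now holding the formerly contended lock first. */
		swap(first, second);
	}

	ww_acquire_done(&ctx);

	/* ... both locks held: operate on the protected data ... */

	ww_mutex_unlock(first);
	ww_mutex_unlock(second);
	ww_acquire_fini(&ctx);
}

The patch only changes where thread #1 notices that it has lost such a
race: with the check inside the spin loop, it reaches the "unlock ww2"
step while thread #0 is still spinning, so thread #0 can take ww2
without a schedule()/wakeup round trip.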