From: Nicolai Hähnle
To: linux-kernel@vger.kernel.org
Cc: Nicolai Hähnle, Peter Zijlstra, Ingo Molnar, dri-devel@lists.freedesktop.org
Subject: [PATCH v3 02/12] locking/mutex: Fix a race with handoffs and interruptible waits
Date: Wed, 21 Dec 2016 19:46:30 +0100
Message-Id: <1482346000-9927-3-git-send-email-nhaehnle@gmail.com>
In-Reply-To: <1482346000-9927-1-git-send-email-nhaehnle@gmail.com>
References: <1482346000-9927-1-git-send-email-nhaehnle@gmail.com>
X-Mailing-List: linux-kernel@vger.kernel.org

From: Nicolai Hähnle

There's a possible race where the waiter in front of us leaves the wait
list due to a signal, and the current owner subsequently hands the lock
off to us even though we never observed ourselves at the front of the
list.

Set the task state before checking our position in the list, so that
the race is handled by falling through the next schedule().

Found by inspection.
Cc: Peter Zijlstra
Cc: Ingo Molnar
Cc: dri-devel@lists.freedesktop.org
Signed-off-by: Nicolai Hähnle
---
 kernel/locking/mutex.c | 11 ++++++-----
 1 file changed, 6 insertions(+), 5 deletions(-)

diff --git a/kernel/locking/mutex.c b/kernel/locking/mutex.c
index 9b34961..c02c566 100644
--- a/kernel/locking/mutex.c
+++ b/kernel/locking/mutex.c
@@ -697,17 +697,18 @@ __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass,
 		spin_unlock_mutex(&lock->wait_lock, flags);
 		schedule_preempt_disabled();
 
-		if (!first && __mutex_waiter_is_first(lock, &waiter)) {
-			first = true;
-			__mutex_set_flag(lock, MUTEX_FLAG_HANDOFF);
-		}
-
 		set_task_state(task, state);
 		/*
 		 * Here we order against unlock; we must either see it change
 		 * state back to RUNNING and fall through the next schedule(),
 		 * or we must see its unlock and acquire.
 		 */
+
+		if (!first && __mutex_waiter_is_first(lock, &waiter)) {
+			first = true;
+			__mutex_set_flag(lock, MUTEX_FLAG_HANDOFF);
+		}
+
 		if ((first && mutex_optimistic_spin(lock, ww_ctx, use_ww_ctx, true)) ||
 		    __mutex_trylock(lock, first))
 			break;
-- 
2.7.4