Date:	Tue, 6 Dec 2016 17:55:44 +0100
From:	Peter Zijlstra
To:	Nicolai Hähnle
Cc:	linux-kernel@vger.kernel.org, Nicolai Hähnle, Ingo Molnar,
	Maarten Lankhorst, Daniel Vetter, Chris Wilson,
	dri-devel@lists.freedesktop.org
Subject: Re: [PATCH v2 05/11] locking/ww_mutex: Add waiters in stamp order
Message-ID: <20161206165544.GX3045@worktop.programming.kicks-ass.net>
References: <1480601214-26583-1-git-send-email-nhaehnle@gmail.com>
 <1480601214-26583-6-git-send-email-nhaehnle@gmail.com>
In-Reply-To: <1480601214-26583-6-git-send-email-nhaehnle@gmail.com>

On Thu, Dec 01, 2016 at 03:06:48PM +0100, Nicolai Hähnle wrote:
> @@ -693,8 +748,12 @@ __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass,
>  		 * mutex_unlock() handing the lock off to us, do a trylock
>  		 * before testing the error conditions to make sure we pick up
>  		 * the handoff.
> +		 *
> +		 * For w/w locks, we always need to do this even if we're not
> +		 * currently the first waiter, because we may have been the
> +		 * first waiter during the unlock.
>  		 */
> -		if (__mutex_trylock(lock, first))
> +		if (__mutex_trylock(lock, use_ww_ctx || first))
>  			goto acquired;

So I'm somewhat uncomfortable with this. The point is that with the
.handoff logic it is very easy to accidentally allow:

	mutex_lock(&a);
	mutex_lock(&a);

And I'm not sure this doesn't make that happen for ww_mutexes.

We get to this __mutex_trylock() without first having blocked. (There
is a toy model of this worry at the end of this mail.)

>  		/*
> @@ -716,7 +775,20 @@ __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass,
>  		spin_unlock_mutex(&lock->wait_lock, flags);
>  		schedule_preempt_disabled();
>
> -		if (!first && __mutex_waiter_is_first(lock, &waiter)) {
> +		if (use_ww_ctx && ww_ctx) {
> +			/*
> +			 * Always re-check whether we're in first position. We
> +			 * don't want to spin if another task with a lower
> +			 * stamp has taken our position.
> +			 *
> +			 * We also may have to set the handoff flag again, if
> +			 * our position at the head was temporarily taken away.
> +			 */
> +			first = __mutex_waiter_is_first(lock, &waiter);
> +
> +			if (first)
> +				__mutex_set_flag(lock, MUTEX_FLAG_HANDOFF);
> +		} else if (!first && __mutex_waiter_is_first(lock, &waiter)) {
>  			first = true;
>  			__mutex_set_flag(lock, MUTEX_FLAG_HANDOFF);
>  		}

So the point is that !ww_ctx entries are 'skipped' during the insertion
and therefore, if one becomes first, it must stay first? (A second
sketch of my reading is at the end of this mail.)

> @@ -728,7 +800,7 @@ __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass,
>  	 * or we must see its unlock and acquire.
>  	 */
>  	if ((first && mutex_optimistic_spin(lock, ww_ctx, use_ww_ctx, true)) ||
> -	    __mutex_trylock(lock, first))
> +	    __mutex_trylock(lock, use_ww_ctx || first))
>  		break;
>
>  	spin_lock_mutex(&lock->wait_lock, flags);
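
To make the recursion worry above concrete, here is a toy userspace
model. This is emphatically not the kernel's __mutex_trylock(); the
task/mutex types and the trylock body are simplified stand-ins. It only
shows the shape of the hazard: a trylock that reads "owner == me" as a
completed handoff will happily grant a second, recursive acquisition
when handoff=true is passed on a path that never blocked.

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

struct task { int id; };
struct mutex { _Atomic(struct task *) owner; };

static bool trylock(struct mutex *lock, struct task *me, bool handoff)
{
	struct task *owner = atomic_load(&lock->owner);

	if (owner)	/* taken: only a handoff to us lets us in */
		return handoff && owner == me;

	struct task *expected = NULL;
	return atomic_compare_exchange_strong(&lock->owner, &expected, me);
}

int main(void)
{
	struct task t = { 1 };
	struct mutex m = { NULL };

	printf("first  trylock: %d\n", trylock(&m, &t, true)); /* 1: normal acquire */
	printf("second trylock: %d\n", trylock(&m, &t, true)); /* 1: recursion slips through */
	return 0;
}

Both calls succeed here, where the second one should have blocked
forever (and lockdep should have screamed).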
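
And for the insertion question, a toy model of how I read the
stamp-ordered list in this patch; the names (waiter, add_waiter) are
mine, not the patch's. Waiters with a ww context keep stamp order among
themselves, while entries without a context are skipped in the
comparison and therefore never jumped over once at the head.

#include <stdio.h>

struct waiter {
	unsigned long stamp;	/* 0 means "no ww context" */
	struct waiter *next;
};

/* Insert before the first ww waiter with a younger (higher) stamp;
 * entries without a context are skipped in the comparison. */
static void add_waiter(struct waiter **head, struct waiter *w)
{
	struct waiter **pos = head;

	if (w->stamp) {
		for (; *pos; pos = &(*pos)->next)
			if ((*pos)->stamp && (*pos)->stamp > w->stamp)
				break;	/* cut in before the younger ww waiter */
	} else {
		while (*pos)		/* no context: plain FIFO append */
			pos = &(*pos)->next;
	}
	w->next = *pos;
	*pos = w;
}

int main(void)
{
	struct waiter *head = NULL;
	struct waiter a = { 0 }, b = { 30 }, c = { 10 };

	add_waiter(&head, &a);	/* !ww_ctx waiter, becomes head */
	add_waiter(&head, &b);	/* ww waiter, stamp 30 */
	add_waiter(&head, &c);	/* stamp 10 cuts in before 30 */

	for (struct waiter *w = head; w; w = w->next)
		printf("stamp %lu\n", w->stamp);
	return 0;
}

This prints 0, 10, 30: the older stamp cuts in front of the younger
one, but the !ww_ctx waiter at the head is never displaced, which is
what would make "first stays first" hold for those entries.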