Subject: Re: [PATCH v2 05/11] locking/ww_mutex: Add waiters in stamp order
To: Peter Zijlstra
Cc: linux-kernel@vger.kernel.org, Nicolai Hähnle, Ingo Molnar, Maarten Lankhorst, Daniel Vetter, Chris Wilson, dri-devel@lists.freedesktop.org
From: Nicolai Hähnle
Message-ID: <98cfeb6e-f312-ba13-00b4-f5b125b24f8d@gmail.com>
Date: Fri, 16 Dec 2016 19:11:41 +0100
In-Reply-To: <20161216171524.GU3107@twins.programming.kicks-ass.net>

On 16.12.2016 18:15, Peter Zijlstra wrote:
> On Fri, Dec 16, 2016 at 03:19:43PM +0100, Nicolai Hähnle wrote:
>> The concern about picking up a handoff that we didn't request is real,
>> though it cannot happen in the first iteration. Perhaps this
>> __mutex_trylock can be moved to the end of the loop? See below...
>
>
>>>> @@ -728,7 +800,7 @@ __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass,
>>>>  		 * or we must see its unlock and acquire.
>>>>  		 */
>>>>  		if ((first && mutex_optimistic_spin(lock, ww_ctx, use_ww_ctx, true)) ||
>>>> -		    __mutex_trylock(lock, first))
>>>> +		    __mutex_trylock(lock, use_ww_ctx || first))
>>>>  			break;
>>>>
>>>>  		spin_lock_mutex(&lock->wait_lock, flags);
>>
>> Change this code to:
>>
>>     acquired = first &&
>>         mutex_optimistic_spin(lock, ww_ctx, use_ww_ctx, &waiter);
>>     spin_lock_mutex(&lock->wait_lock, flags);
>>
>>     if (acquired ||
>>         __mutex_trylock(lock, use_ww_ctx || first))
>>         break;
>
> goto acquired;
>
> will work lots better.

I wasn't explicit enough, sorry. The idea was to get rid of the acquired
label entirely and to restructure things so that all paths exit the loop
with the wait_lock held. That seems cleaner to me.

>> }
>>
>> This changes the trylock to always be under the wait_lock, but we
>> previously had that at the beginning of the loop anyway.
>
>> It also removes back-to-back calls to __mutex_trylock when going
>> through the loop;
>
> Yeah, I had that explicitly. It allows taking the mutex when
> mutex_unlock() is still holding the wait_lock.

mutex_optimistic_spin() already calls __mutex_trylock, and for the
no-spin case, __mutex_unlock_slowpath() only calls wake_up_q() after
releasing the wait_lock. So I don't see the purpose of the back-to-back
__mutex_trylocks, especially considering that if the first one succeeds,
we immediately take the wait_lock anyway.

Nicolai

>> and for the first iteration, there is a __mutex_trylock under the
>> wait_lock already, before adding ourselves to the wait list.
>
> Correct.
>
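P.S. To make the shape concrete, here is roughly the loop structure I
have in mind. This is a not-compile-tested sketch against the patch; the
signal and ww_ctx stamp checks are elided, and the exact
mutex_optimistic_spin() signature is the &waiter variant from earlier in
this series:

```c
	set_current_state(state);
	for (;;) {
		bool acquired;

		/* Spin only once we are first in line. */
		acquired = first &&
			   mutex_optimistic_spin(lock, ww_ctx, use_ww_ctx,
						 &waiter);

		spin_lock_mutex(&lock->wait_lock, flags);

		/* The only trylock in the loop, always under wait_lock. */
		if (acquired || __mutex_trylock(lock, use_ww_ctx || first))
			break;

		/* signal / ww_ctx stamp checks would go here */

		spin_unlock_mutex(&lock->wait_lock, flags);
		schedule_preempt_disabled();

		if (!first && __mutex_waiter_is_first(lock, &waiter)) {
			first = true;
			__mutex_set_flag(lock, MUTEX_FLAG_HANDOFF);
		}
		set_current_state(state);
	}
	/* every exit from the loop reaches here with wait_lock held */
```

On the first iteration, first is false, so we go straight to the trylock
under the wait_lock, which matches the existing behaviour before adding
ourselves to the wait list.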