From: Peter Zijlstra
To: Waiman Long
Cc: Ingo Molnar, linux-kernel@vger.kernel.org, Linus Torvalds, Ding Tianhong, Jason Low, Davidlohr Bueso, "Paul E. McKenney", Thomas Gleixner, Will Deacon, Tim Chen, Waiman Long
Subject: Re: [PATCH v3 3/3] locking/mutex: Avoid missed wakeup of mutex waiter
Date: Tue, 29 Mar 2016 18:36:54 +0200
Message-ID: <20160329163654.GM3408@twins.programming.kicks-ass.net>
In-Reply-To: <1458668804-10138-4-git-send-email-Waiman.Long@hpe.com>

On Tue, Mar 22, 2016 at 01:46:44PM -0400, Waiman Long wrote:
> The current mutex code sets count to -1 and then sets the task
> state. This is the same sequence in which the mutex unlock path
> checks the count and the task state, and that could lead to a
> missed wakeup, even though the problem will be cleared when a new
> waiter enters the waiting queue.
>
> This patch reverses the order in the locking slowpath so that the
> task state is set first, before setting the count. This should
> eliminate the potential missed wakeup and improve latency.

Is it really a problem though?

So the 'race' is __mutex_lock_common() against __mutex_fastpath_unlock(), and that is fully serialized as per the atomic instructions.
Either the fast unlock path does 1->0 and the fastpath lock acquire succeeds, or the lock path sets -1, at which point the unlock fastpath fails and enters __mutex_unlock_common_slowpath(), which is fully serialised against __mutex_lock_common() by lock->wait_lock.

I agree that the code is nicer after your patch, but I don't actually see a problem.