Date: Wed, 19 Feb 2014 09:55:12 +0100
From: Peter Zijlstra
To: Waiman Long
Cc: Thomas Gleixner, Ingo Molnar, "H. Peter Anvin", Arnd Bergmann,
	linux-arch@vger.kernel.org, x86@kernel.org, linux-kernel@vger.kernel.org,
	Steven Rostedt, Andrew Morton, Michel Lespinasse, Andi Kleen,
	Rik van Riel, "Paul E. McKenney", Linus Torvalds, Raghavendra K T,
	George Spelvin, Tim Chen, Daniel J Blueman, Alexander Fyodorov,
	Aswin Chandramouleeswaran, Scott J Norton, Thavatchai Makphaibulchoke
Subject: Re: [PATCH v4 1/3] qspinlock: Introducing a 4-byte queue spinlock implementation
Message-ID: <20140219085512.GI27965@twins.programming.kicks-ass.net>
In-Reply-To: <530401C9.4090100@hp.com>

On Tue, Feb 18, 2014 at 07:58:49PM -0500, Waiman Long wrote:
> On 02/18/2014 04:37 PM, Peter Zijlstra wrote:
> > On Tue, Feb 18, 2014 at 02:39:31PM -0500, Waiman Long wrote:
> > > > > +	/*
> > > > > +	 * At the head of the wait queue now
> > > > > +	 */
> > > > > +	while (true) {
> > > > > +		u32 qcode;
> > > > > +		int retval;
> > > > > +
> > > > > +		retval = queue_get_lock_qcode(lock, &qcode, my_qcode);
> > > > > +		if (retval > 0)
> > > > > +			;	/* Lock not available yet */
> > > > > +		else if (retval < 0)
> > > > > +			/* Lock taken, can release the node & return */
> > > > > +			goto release_node;
> > > > > +		else if (qcode != my_qcode) {
> > > > > +			/*
> > > > > +			 * Just get the lock with other spinners waiting
> > > > > +			 * in the queue.
> > > > > +			 */
> > > > > +			if (queue_spin_trylock_unfair(lock))
> > > > > +				goto notify_next;
> > > >
> > > > Why is this an option at all?
> > >
> > > Are you referring to the case (qcode != my_qcode)? This condition will
> > > be true if more than one task has queued up.
> >
> > But in no case should we revert to unfair spinning or stealing. We
> > should always respect the queueing order.
> >
> > If the lock tail no longer points to us, then there's further waiters
> > and we should wait for ->next and unlock it -- after we've taken the
> > lock.
>
> A task will be in this loop when it is already the head of the queue and
> is entitled to take the lock. The condition (qcode != my_qcode) decides
> whether it should just take the lock, or take the lock and clear the
> qcode simultaneously. I am a bit cautious about using
> queue_spin_trylock_unfair() because there is a possibility that a CPU
> may run out of queue nodes and need to fall back to unfair busy
> spinning.

No; there is no such possibility. Add BUG_ON(idx >= 4) and make sure of
it.

There are simply no more than 4 contexts that can nest at any one time:

	task context
	softirq context
	hardirq context
	nmi context

And someone contending a spinlock from NMI context should be shot
anyway.

Getting more nested spinlocks is an absolute hard fail.
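
To make the point concrete, here is a minimal sketch of per-CPU node
allocation with that BUG_ON (illustrative only -- the structure and
names below are assumptions for this example, not the actual patch
code):

#include <linux/percpu.h>
#include <linux/bug.h>

#define MAX_QNODES	4	/* task, softirq, hardirq, nmi */

struct qnode {
	struct qnode	*next;
	int		locked;
	int		count;	/* nesting depth; tracked on qnodes[0] */
};

static DEFINE_PER_CPU(struct qnode, qnodes[MAX_QNODES]);

/*
 * Called from the lock slowpath with preemption disabled.  At most
 * task, softirq, hardirq and nmi context can stack on one CPU, so
 * idx can never reach MAX_QNODES; assert that instead of falling
 * back to unfair spinning.
 */
static struct qnode *get_qnode(void)
{
	struct qnode *base = this_cpu_ptr(&qnodes[0]);
	int idx = base->count++;

	BUG_ON(idx >= MAX_QNODES);
	return base + idx;
}

static void put_qnode(void)
{
	__this_cpu_dec(qnodes[0].count);
}

With the nesting bounded like this there is never a case where the
queue head runs out of nodes, so the trylock_unfair() path can go away
entirely.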