Message-ID: <524C75F6.1080701@hp.com>
Date: Wed, 02 Oct 2013 15:37:26 -0400
From: Waiman Long
To: Jason Low
CC: Tim Chen, Davidlohr Bueso, Ingo Molnar, Andrew Morton, Andrea Arcangeli, Alex Shi, Andi Kleen, Michel Lespinasse, Matthew R Wilcox, Dave Hansen, Peter Zijlstra, Rik van Riel, Peter Hurley, linux-kernel@vger.kernel.org, linux-mm
Subject: Re: [PATCH v6 5/6] MCS Lock: Restructure the MCS lock defines and locking code into its own file
References: <1380147049.3467.67.camel@schen9-DESK> <1380226007.2170.2.camel@buesod1.americas.hpqcorp.net> <1380226997.2602.11.camel@j-VirtualBox> <1380228059.2170.10.camel@buesod1.americas.hpqcorp.net> <1380229794.2602.36.camel@j-VirtualBox> <1380231702.3467.85.camel@schen9-DESK> <1380235333.3229.39.camel@j-VirtualBox> <524C71C1.9060408@hp.com>

On 10/02/2013 03:30 PM, Jason Low wrote:
> On Wed, Oct 2, 2013 at 12:19 PM, Waiman Long wrote:
>> On 09/26/2013 06:42 PM, Jason Low wrote:
>>> On Thu, 2013-09-26 at 14:41 -0700, Tim Chen wrote:
>>>> Okay, that would make sense for consistency because we always
>>>> first set node->locked = 0 at the top of the function.
>>>>
>>>> If we prefer to optimize this a bit though, perhaps we can
>>>> move the node->locked = 0 so that it gets executed after the
>>>> "if (likely(prev == NULL)) {}" code block and then delete the
>>>> "node->locked = 1" inside that code block.
>>>>
>>>> static noinline
>>>> void mcs_spin_lock(struct mcs_spin_node **lock, struct mcs_spin_node *node)
>>>> {
>>>>         struct mcs_spin_node *prev;
>>>>
>>>>         /* Init node */
>>>>         node->next = NULL;
>>>>
>>>>         prev = xchg(lock, node);
>>>>         if (likely(prev == NULL)) {
>>>>                 /* Lock acquired */
>>>>                 return;
>>>>         }
>>>>         node->locked = 0;
>>
>> You can remove the locked flag setting statement inside if (prev == NULL),
>> but you cannot clear the locked flag after the xchg(). In the interval
>> between the xchg() and locked = 0, the previous lock owner may come in and
>> set the flag. If you then clear it, the thread will loop forever. You have
>> to clear it before the xchg().
>
> Yes, in my most recent version, I left locked = 0 in its original
> place so that the xchg() can act as a barrier for it.
>
> The other option would have been to put another barrier after
> locked = 0. I went with leaving locked = 0 in its original place so
> that we don't need that extra barrier.

I don't think putting another barrier after locked = 0 will work.
Chronologically, the flag must be cleared before the node address is
saved in the lock field. There is no way to guarantee that ordering
except by putting locked = 0 before the xchg().

-Longman