Subject: Re: [PATCH v6 5/6] MCS Lock: Restructure the MCS lock defines and locking code into its own file
From: Jason Low
To: Tim Chen
Cc: Davidlohr Bueso, Ingo Molnar, Andrew Morton, Andrea Arcangeli, Alex Shi,
 Andi Kleen, Michel Lespinasse, Davidlohr Bueso, Matthew R Wilcox,
 Dave Hansen, Peter Zijlstra, Rik van Riel, Peter Hurley,
 linux-kernel@vger.kernel.org, linux-mm
Date: Thu, 26 Sep 2013 15:42:13 -0700
Message-ID: <1380235333.3229.39.camel@j-VirtualBox>
In-Reply-To: <1380231702.3467.85.camel@schen9-DESK>

On Thu, 2013-09-26 at 14:41 -0700, Tim Chen wrote:
> On Thu, 2013-09-26 at 14:09 -0700, Jason Low wrote:
> > On Thu, 2013-09-26 at 13:40 -0700, Davidlohr Bueso wrote:
> > > On Thu, 2013-09-26 at 13:23 -0700, Jason Low wrote:
> > > > On Thu, 2013-09-26 at 13:06 -0700, Davidlohr Bueso wrote:
> > > > > On Thu, 2013-09-26 at 12:27 -0700, Jason Low wrote:
> > > > > > On Wed, Sep 25, 2013 at 3:10 PM, Tim Chen wrote:
> > > > > > > We will need the MCS lock code for doing optimistic spinning for
> > > > > > > rwsem. Extracting the MCS code from mutex.c and putting it into
> > > > > > > its own file allows us to reuse this code easily for rwsem.
> > > > > > >
> > > > > > > Signed-off-by: Tim Chen
> > > > > > > Signed-off-by: Davidlohr Bueso
> > > > > > > ---
> > > > > > >  include/linux/mcslock.h | 58 +++++++++++++++++++++++++++++++++++++++++++++++
> > > > > > >  kernel/mutex.c          | 58 +++++-----------------------------------------
> > > > > > >  2 files changed, 65 insertions(+), 51 deletions(-)
> > > > > > >  create mode 100644 include/linux/mcslock.h
> > > > > > >
> > > > > > > diff --git a/include/linux/mcslock.h b/include/linux/mcslock.h
> > > > > > > new file mode 100644
> > > > > > > index 0000000..20fd3f0
> > > > > > > --- /dev/null
> > > > > > > +++ b/include/linux/mcslock.h
> > > > > > > @@ -0,0 +1,58 @@
> > > > > > > +/*
> > > > > > > + * MCS lock defines
> > > > > > > + *
> > > > > > > + * This file contains the main data structure and API definitions of the MCS lock.
> > > > > > > + */
> > > > > > > +#ifndef __LINUX_MCSLOCK_H
> > > > > > > +#define __LINUX_MCSLOCK_H
> > > > > > > +
> > > > > > > +struct mcs_spin_node {
> > > > > > > +	struct mcs_spin_node *next;
> > > > > > > +	int locked;		/* 1 if lock acquired */
> > > > > > > +};
> > > > > > > +
> > > > > > > +/*
> > > > > > > + * We don't inline mcs_spin_lock() so that perf can correctly account for the
> > > > > > > + * time spent in this lock function.
> > > > > > > + */
> > > > > > > +static noinline
> > > > > > > +void mcs_spin_lock(struct mcs_spin_node **lock, struct mcs_spin_node *node)
> > > > > > > +{
> > > > > > > +	struct mcs_spin_node *prev;
> > > > > > > +
> > > > > > > +	/* Init node */
> > > > > > > +	node->locked = 0;
> > > > > > > +	node->next   = NULL;
> > > > > > > +
> > > > > > > +	prev = xchg(lock, node);
> > > > > > > +	if (likely(prev == NULL)) {
> > > > > > > +		/* Lock acquired */
> > > > > > > +		node->locked = 1;
> > > > > >
> > > > > > If we don't spin on the local node, is it necessary to set this
> > > > > > variable?
> > > > >
> > > > > I don't follow; the whole idea is to spin on the local variable.
> > > >
> > > > If prev == NULL, doesn't that mean it won't proceed to spin on the
> > > > variable, because the lock is already free and we return? In that
> > > > case, where we directly acquire the lock, I was wondering if it is
> > > > necessary to set node->locked = 1.
> > >
> > > Yes, that's true, but we need to flag the lock as acquired (the node's
> > > lock is initially set to unlocked), otherwise others trying to acquire
> > > the lock can spin forever:
> > >
> > >	/* Wait until the lock holder passes the lock down */
> > >	while (!ACCESS_ONCE(node->locked))
> > >		arch_mutex_cpu_relax();
> > >
> > > The ->locked variable in this implementation indicates whether the
> > > lock is acquired, and *not* whether busy-waiting is necessary.
> >
> > Hmm, other threads acquiring the lock will be spinning on their own
> > local nodes, not this node's node->locked. And if prev == NULL, the
> > current thread won't be reading its node->locked either, since we
> > return. So what other thread is going to be reading this node's
> > node->locked?
> >
> > Thanks,
> > Jason
>
> I think setting node->locked = 1 for the prev == NULL case is not
> necessary functionally, but was done for semantic consistency.

Okay, that makes sense for consistency, because we always first set
node->locked = 0 at the top of the function.

If we prefer to optimize this a bit, though, perhaps we can move the
node->locked = 0 so that it gets executed after the
"if (likely(prev == NULL)) {}" code block, and then delete the
"node->locked = 1" inside that block:

static noinline
void mcs_spin_lock(struct mcs_spin_node **lock, struct mcs_spin_node *node)
{
	struct mcs_spin_node *prev;

	/* Init node */
	node->next = NULL;

	prev = xchg(lock, node);
	if (likely(prev == NULL)) {
		/* Lock acquired */
		return;
	}
	node->locked = 0;
	ACCESS_ONCE(prev->next) = node;
	smp_wmb();
	/* Wait until the lock holder passes the lock down */
	while (!ACCESS_ONCE(node->locked))
		arch_mutex_cpu_relax();
}
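
The question of who reads node->locked is answered by the unlock path,
which hands the lock to the *next* queued node. The patch's
mcs_spin_unlock() isn't quoted in this excerpt, so the following is only
a sketch of what the unlock side of a standard MCS lock looks like, not
necessarily the exact code in the patch:

static void mcs_spin_unlock(struct mcs_spin_node **lock,
			    struct mcs_spin_node *node)
{
	struct mcs_spin_node *next = ACCESS_ONCE(node->next);

	if (likely(!next)) {
		/*
		 * No known successor: try to free the lock by swinging
		 * *lock back to NULL.
		 */
		if (cmpxchg(lock, node, NULL) == node)
			return;
		/* A new waiter raced in; wait for it to link itself in. */
		while (!(next = ACCESS_ONCE(node->next)))
			arch_mutex_cpu_relax();
	}
	/*
	 * Pass the lock to the next waiter. This store to next->locked
	 * is the read/write pair the thread above is about.
	 */
	ACCESS_ONCE(next->locked) = 1;
	smp_wmb();
}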
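
For completeness, a hypothetical caller (example_lock and
example_critical_section are made-up names, purely for illustration)
showing the intended usage: each CPU spins on its own stack-allocated
node, so releasing the lock only ever touches one other CPU's node:

static struct mcs_spin_node *example_lock;	/* NULL == unlocked */

static void example_critical_section(void)
{
	struct mcs_spin_node node;	/* this CPU's queue node */

	mcs_spin_lock(&example_lock, &node);
	/* ... critical section ... */
	mcs_spin_unlock(&example_lock, &node);
}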