Date: Thu, 26 Sep 2013 08:46:29 +0200
From: Ingo Molnar
To: Tim Chen
Cc: Andrew Morton, Andrea Arcangeli, Alex Shi, Andi Kleen,
    Michel Lespinasse, Davidlohr Bueso, Matthew R Wilcox, Dave Hansen,
    Peter Zijlstra, Rik van Riel, Peter Hurley,
    linux-kernel@vger.kernel.org, linux-mm
Subject: Re: [PATCH v6 5/6] MCS Lock: Restructure the MCS lock defines and locking code into its own file
Message-ID: <20130926064629.GB19090@gmail.com>
In-Reply-To: <1380147049.3467.67.camel@schen9-DESK>


* Tim Chen wrote:

> We will need the MCS lock code for doing optimistic spinning for rwsem.
> Extracting the MCS code from mutex.c and putting it into its own file
> allows us to reuse this code easily for rwsem.
>
> Signed-off-by: Tim Chen
> Signed-off-by: Davidlohr Bueso
> ---
>  include/linux/mcslock.h |   58 +++++++++++++++++++++++++++++++++++++++++++++++
>  kernel/mutex.c          |   58 +++++-----------------------------------------
>  2 files changed, 65 insertions(+), 51 deletions(-)
>  create mode 100644 include/linux/mcslock.h
>
> diff --git a/include/linux/mcslock.h b/include/linux/mcslock.h
> new file mode 100644
> index 0000000..20fd3f0
> --- /dev/null
> +++ b/include/linux/mcslock.h
> @@ -0,0 +1,58 @@
> +/*
> + * MCS lock defines
> + *
> + * This file contains the main data structure and API definitions of MCS lock.
A (very) short blurb about what an MCS lock is would be nice here.

> + */
> +#ifndef __LINUX_MCSLOCK_H
> +#define __LINUX_MCSLOCK_H
> +
> +struct mcs_spin_node {
> +	struct mcs_spin_node *next;
> +	int		  locked;	/* 1 if lock acquired */
> +};

The vertical alignment looks broken here.

> +
> +/*
> + * We don't inline mcs_spin_lock() so that perf can correctly account for the
> + * time spent in this lock function.
> + */
> +static noinline
> +void mcs_spin_lock(struct mcs_spin_node **lock, struct mcs_spin_node *node)
> +{
> +	struct mcs_spin_node *prev;
> +
> +	/* Init node */
> +	node->locked = 0;
> +	node->next   = NULL;
> +
> +	prev = xchg(lock, node);
> +	if (likely(prev == NULL)) {
> +		/* Lock acquired */
> +		node->locked = 1;
> +		return;
> +	}
> +	ACCESS_ONCE(prev->next) = node;
> +	smp_wmb();
> +	/* Wait until the lock holder passes the lock down */
> +	while (!ACCESS_ONCE(node->locked))
> +		arch_mutex_cpu_relax();
> +}
> +
> +static void mcs_spin_unlock(struct mcs_spin_node **lock, struct mcs_spin_node *node)
> +{
> +	struct mcs_spin_node *next = ACCESS_ONCE(node->next);
> +
> +	if (likely(!next)) {
> +		/*
> +		 * Release the lock by setting it to NULL
> +		 */
> +		if (cmpxchg(lock, node, NULL) == node)
> +			return;
> +		/* Wait until the next pointer is set */
> +		while (!(next = ACCESS_ONCE(node->next)))
> +			arch_mutex_cpu_relax();
> +	}
> +	ACCESS_ONCE(next->locked) = 1;
> +	smp_wmb();
> +}
> +
> +#endif

We typically close header guards not via a plain #endif but like this:

 #endif /* __LINUX_SPINLOCK_H */
 #endif /* __LINUX_SPINLOCK_TYPES_H */

etc.

Thanks,

	Ingo