Date: Mon, 20 Jan 2014 16:21:29 +0100
From: Peter Zijlstra
To: Waiman Long
Cc: Thomas Gleixner, Ingo Molnar, "H. Peter Anvin", Arnd Bergmann,
	linux-arch@vger.kernel.org, x86@kernel.org, linux-kernel@vger.kernel.org,
	Steven Rostedt, Andrew Morton, Michel Lespinasse, Andi Kleen,
	Rik van Riel, "Paul E. McKenney", Linus Torvalds, Raghavendra K T,
	George Spelvin, Tim Chen, Aswin Chandramouleeswaran, Scott J Norton
Subject: Re: [PATCH v9 1/5] qrwlock: A queue read/write lock implementation
Message-ID: <20140120152129.GH31570@twins.programming.kicks-ass.net>

On Tue, Jan 14, 2014 at 11:44:03PM -0500, Waiman Long wrote:
> +#ifndef arch_mutex_cpu_relax
> +# define arch_mutex_cpu_relax()	cpu_relax()
> +#endif

Include <linux/mutex.h>

> +#ifndef smp_load_acquire
> +# ifdef CONFIG_X86
> +# define smp_load_acquire(p)				\
> +	({						\
> +		typeof(*p) ___p1 = ACCESS_ONCE(*p);	\
> +		barrier();				\
> +		___p1;					\
> +	})
> +# else
> +# define smp_load_acquire(p)				\
> +	({						\
> +		typeof(*p) ___p1 = ACCESS_ONCE(*p);	\
> +		smp_mb();				\
> +		___p1;					\
> +	})
> +# endif
> +#endif
> +
> +#ifndef smp_store_release
> +# ifdef CONFIG_X86
> +# define smp_store_release(p, v)		\
> +	do {					\
> +		barrier();			\
> +		ACCESS_ONCE(*p) = v;		\
> +	} while (0)
> +# else
> +# define smp_store_release(p, v)		\
> +	do {					\
> +		smp_mb();			\
> +		ACCESS_ONCE(*p) = v;		\
> +	} while (0)
> +# endif
> +#endif

Remove these.

> +/*
> + * If an xadd (exchange-add) macro isn't available, simulate one with
> + * the atomic_add_return() function.
> + */
> +#ifdef xadd
> +# define qrw_xadd(rw, inc)	xadd(&(rw).rwc, inc)
> +#else
> +# define qrw_xadd(rw, inc)	(u32)(atomic_add_return(inc, &(rw).rwa) - inc)
> +#endif

Is GCC really so stupid that you cannot always use the
atomic_add_return()? The x86 atomic_add_return is i + xadd(), so you'll
end up with:

	i + xadd() - i

Surely it can just remove the two i terms? (See the sketch at the end
of this mail.)

> +/**
> + * wait_in_queue - Add to queue and wait until it is at the head
> + * @lock: Pointer to queue rwlock structure
> + * @node: Node pointer to be added to the queue
> + */
> +static inline void wait_in_queue(struct qrwlock *lock, struct qrwnode *node)
> +{
> +	struct qrwnode *prev;
> +
> +	node->next = NULL;
> +	node->wait = true;
> +	prev = xchg(&lock->waitq, node);
> +	if (prev) {
> +		prev->next = node;
> +		/*
> +		 * Wait until the waiting flag is off
> +		 */
> +		while (smp_load_acquire(&node->wait))
> +			arch_mutex_cpu_relax();
> +	}
> +}

Please rebase on top of the MCS lock patches such that this is gone.
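
For reference while rebasing, the hand-off side that pairs with the
quoted wait_in_queue() looks roughly like the sketch below. This is
illustrative only: the name signal_next and the body are assumptions
following the usual MCS pattern, not a quote of the patch. The
smp_store_release() on ->wait pairs with the waiter's
smp_load_acquire() spin:

static inline void signal_next(struct qrwlock *lock, struct qrwnode *node)
{
	struct qrwnode *next;

	/* Fast path: the successor has already linked itself in. */
	next = ACCESS_ONCE(node->next);
	if (!next) {
		/* No successor visible; if we are the tail, empty the queue. */
		if (cmpxchg(&lock->waitq, node, NULL) == node)
			return;
		/* A new waiter won the xchg race; wait for it to set ->next. */
		while (!(next = ACCESS_ONCE(node->next)))
			arch_mutex_cpu_relax();
	}
	/* Pairs with smp_load_acquire(&node->wait) in wait_in_queue(). */
	smp_store_release(&next->wait, false);
}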
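
And coming back to the qrw_xadd() fallback: atomic_add_return() gives
the new value, so subtracting inc again yields exactly the old value
that a fetch-and-add (xadd) returns, and the compiler can cancel the
+inc/-inc pair. A minimal userspace sketch of the equivalence, using
GCC's __atomic builtins as stand-ins for the kernel atomics (the names
here are illustrative, not from the patch):

#include <stdio.h>

static unsigned int counter;

/* Kernel-style atomic_add_return(): add inc, return the NEW value. */
static inline unsigned int add_return(unsigned int inc, unsigned int *v)
{
	return __atomic_add_fetch(v, inc, __ATOMIC_SEQ_CST);
}

/*
 * The qrw_xadd() fallback: new value minus inc == the OLD value,
 * i.e. exactly what an xadd instruction returns.
 */
static inline unsigned int xadd_sim(unsigned int *v, unsigned int inc)
{
	return add_return(inc, v) - inc;
}

int main(void)
{
	unsigned int old;

	counter = 40;
	old = xadd_sim(&counter, 4);
	printf("old = %u, now = %u\n", old, counter);	/* old = 40, now = 44 */
	return 0;
}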