Date: Thu, 30 Jul 2015 16:44:55 +0200
From: Peter Zijlstra
To: "Paul E. McKenney"
Cc: linux-kernel@vger.kernel.org, mingo@kernel.org, jiangshanlai@gmail.com,
	dipankar@in.ibm.com, akpm@linux-foundation.org,
	mathieu.desnoyers@efficios.com, josh@joshtriplett.org,
	tglx@linutronix.de, rostedt@goodmis.org, dhowells@redhat.com,
	edumazet@google.com, dvhart@linux.intel.com, fweisbec@gmail.com,
	oleg@redhat.com, bobby.prani@gmail.com, dave@stgolabs.net,
	waiman.long@hp.com
Subject: Re: [PATCH tip/core/rcu 19/19] rcu: Add fastpath bypassing funnel locking
Message-ID: <20150730144455.GZ19282@twins.programming.kicks-ass.net>
In-Reply-To: <1437175764-24096-19-git-send-email-paulmck@linux.vnet.ibm.com>

On Fri, Jul 17, 2015 at 04:29:24PM -0700, Paul E. McKenney wrote:
> +	/*
> +	 * First try directly acquiring the root lock in order to reduce
> +	 * latency in the common case where expedited grace periods are
> +	 * rare.  We check mutex_is_locked() to avoid pathological levels of
> +	 * memory contention on ->exp_funnel_mutex in the heavy-load case.
> +	 */
> +	rnp0 = rcu_get_root(rsp);
> +	if (!mutex_is_locked(&rnp0->exp_funnel_mutex)) {
> +		if (mutex_trylock(&rnp0->exp_funnel_mutex)) {
> +			if (sync_exp_work_done(rsp, rnp0, NULL,
> +					       &rsp->expedited_workdone0, s))
> +				return NULL;
> +			return rnp0;
> +		}
> +	}

So our 'new' locking primitives do things like:

static __always_inline int queued_spin_trylock(struct qspinlock *lock)
{
	if (!atomic_read(&lock->val) &&
	    (atomic_cmpxchg(&lock->val, 0, _Q_LOCKED_VAL) == 0))
		return 1;
	return 0;
}

mutexes do not do this.

Now I suppose the question is: does that extra read slow down the
(common) uncontended case?  (Remember, we should optimize locks for the
uncontended case; heavy lock contention should be fixed with better
locking schemes, not with the lock implementation itself.)

Davidlohr, Waiman, do we have data on this?

If the extra read before the cmpxchg() does not hurt, we should do the
same for mutex and make the above mutex_is_locked() check redundant.
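
For concreteness, here is a minimal sketch of what such a
read-before-cmpxchg() trylock fastpath could look like for a mutex.
The function name is made up for the example and it assumes the classic
mutex counting convention (1 = unlocked, 0 = locked, <0 = locked with
possible waiters); it is not the actual in-tree mutex code, which also
has arch-specific and debug variants.

/*
 * Illustrative sketch only, not the in-tree mutex implementation.
 * Uses the atomic_t helpers from <linux/atomic.h>.
 */
static inline int mutex_trylock_fastpath_sketch(atomic_t *count)
{
	/*
	 * Plain load first: if the mutex is visibly held, fail without
	 * issuing a cmpxchg() that would bounce the lock cacheline.
	 */
	if (atomic_read(count) != 1)
		return 0;

	/* A 1 -> 0 transition means we now own the mutex. */
	return atomic_cmpxchg(count, 1, 0) == 1;
}

With a fastpath along those lines, the explicit mutex_is_locked() check
in the quoted hunk above becomes unnecessary, which is exactly the
simplification being suggested.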