Date: Thu, 9 Apr 2015 07:37:25 +0200
From: Ingo Molnar
To: Jason Low
Cc: Peter Zijlstra, Linus Torvalds, Davidlohr Bueso, Tim Chen,
	Aswin Chandramouleeswaran, LKML
Subject: Re: [PATCH 2/2] locking/rwsem: Use a return variable in rwsem_spin_on_owner()
Message-ID: <20150409053725.GB13871@gmail.com>
References: <1428521960-5268-1-git-send-email-jason.low2@hp.com>
	<1428521960-5268-3-git-send-email-jason.low2@hp.com>
In-Reply-To: <1428521960-5268-3-git-send-email-jason.low2@hp.com>
User-Agent: Mutt/1.5.23 (2014-03-12)

* Jason Low wrote:

> Ingo suggested for mutex_spin_on_owner() that having multiple return
> statements is not the cleanest approach, especially when holding locks.
>
> The same thing applies to the rwsem variant. This patch rewrites
> much of this function to use a "ret" return value.
>
> Signed-off-by: Jason Low
> ---
>  kernel/locking/rwsem-xadd.c | 25 ++++++++++++-------------
>  1 files changed, 12 insertions(+), 13 deletions(-)
>
> diff --git a/kernel/locking/rwsem-xadd.c b/kernel/locking/rwsem-xadd.c
> index 3417d01..b1c9156 100644
> --- a/kernel/locking/rwsem-xadd.c
> +++ b/kernel/locking/rwsem-xadd.c
> @@ -327,38 +327,37 @@ done:
>  static noinline
>  bool rwsem_spin_on_owner(struct rw_semaphore *sem, struct task_struct *owner)
>  {
> -	long count;
> +	bool ret = true;
>
>  	rcu_read_lock();
>  	while (sem->owner == owner) {
>  		/*
>  		 * Ensure we emit the owner->on_cpu, dereference _after_
> -		 * checking sem->owner still matches owner, if that fails,
> -		 * owner might point to free()d memory, if it still matches,
> +		 * checking sem->owner still matches owner. If that fails,
> +		 * owner might point to freed memory. If it still matches,
>  		 * the rcu_read_lock() ensures the memory stays valid.
>  		 */
>  		barrier();
>
> -		/* abort spinning when need_resched or owner is not running */
> +		/* Abort spinning when need_resched or owner is not running. */
>  		if (!owner->on_cpu || need_resched()) {
> -			rcu_read_unlock();
> -			return false;
> +			ret = false;
> +			break;
>  		}
>
>  		cpu_relax_lowlatency();
>  	}
>  	rcu_read_unlock();
>
> -	if (READ_ONCE(sem->owner))
> -		return true; /* new owner, continue spinning */
> -
>  	/*
>  	 * When the owner is not set, the lock could be free or
> -	 * held by readers. Check the counter to verify the
> -	 * state.
> +	 * held by readers. Check the counter to verify the state.
>  	 */
> -	count = READ_ONCE(sem->count);
> -	return (count == 0 || count == RWSEM_WAITING_BIAS);
> +	if (!READ_ONCE(sem->owner)) {
> +		long count = READ_ONCE(sem->count);
> +		ret = (count == 0 || count == RWSEM_WAITING_BIAS);
> +	}
> +	return ret;
>  }
>
>  static bool rwsem_optimistic_spin(struct rw_semaphore *sem)

The 'break' path does not seem to be equivalent, we used to do:

> -			rcu_read_unlock();
> -			return false;

and now we'll do:

> +			ret = false;
...
> +	if (!READ_ONCE(sem->owner)) {
> +		long count = READ_ONCE(sem->count);

It's harmless (we do one more round of checking), but that's not an
equivalent transformation and slows down the preemption trigger a
(tiny) bit, because the chance that we actually catch the lock when
breaking out early is vanishingly small.

(It might in fact do the wrong thing in returning true if need_resched()
is set and we've switched owners in that small window.)

Given how dissimilar the return path is in this case, I'm not sure it's
worth sharing it. This might be one of the few cases where separate
return statements are the better solution.

Thanks,

	Ingo
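PS: for reference, keeping the separate return statements while still
picking up the comment cleanups would look roughly like the below - an
untested sketch reconstructed from the quoted diff, not a real patch:

static noinline
bool rwsem_spin_on_owner(struct rw_semaphore *sem, struct task_struct *owner)
{
	long count;

	rcu_read_lock();
	while (sem->owner == owner) {
		/*
		 * Ensure we emit the owner->on_cpu dereference _after_
		 * checking sem->owner still matches owner. If that fails,
		 * owner might point to freed memory. If it still matches,
		 * the rcu_read_lock() ensures the memory stays valid.
		 */
		barrier();

		/*
		 * Abort spinning when need_resched or owner is not
		 * running: drop the RCU read lock and return right here,
		 * so the preemption trigger is not delayed by the
		 * owner/count checks below.
		 */
		if (!owner->on_cpu || need_resched()) {
			rcu_read_unlock();
			return false;
		}

		cpu_relax_lowlatency();
	}
	rcu_read_unlock();

	if (READ_ONCE(sem->owner))
		return true; /* new owner, continue spinning */

	/*
	 * When the owner is not set, the lock could be free or
	 * held by readers. Check the counter to verify the state.
	 */
	count = READ_ONCE(sem->count);
	return (count == 0 || count == RWSEM_WAITING_BIAS);
}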