Date: Thu, 20 Apr 2017 08:03:21 -0700
From: "Paul E. McKenney"
To: Peter Zijlstra
Cc: linux-kernel@vger.kernel.org, mingo@kernel.org, jiangshanlai@gmail.com,
	dipankar@in.ibm.com, akpm@linux-foundation.org,
	mathieu.desnoyers@efficios.com, josh@joshtriplett.org,
	tglx@linutronix.de, rostedt@goodmis.org, dhowells@redhat.com,
	edumazet@google.com, fweisbec@gmail.com, oleg@redhat.com,
	bobby.prani@gmail.com, dvyukov@google.com, will.deacon@arm.com
Subject: Re: [PATCH tip/core/rcu 07/13] rcu: Add smp_mb__after_atomic() to sync_exp_work_done()
Reply-To: paulmck@linux.vnet.ibm.com
In-Reply-To: <20170420111743.qyn3zwcmwbx4kngu@hirez.programming.kicks-ass.net>
Message-Id: <20170420150321.GM3956@linux.vnet.ibm.com>

On Thu, Apr 20, 2017 at 01:17:43PM +0200, Peter Zijlstra wrote:
> On Wed, Apr 19, 2017 at 04:23:52PM -0700, Paul E. McKenney wrote:
> > On Thu, Apr 13, 2017 at 07:51:36PM +0200, Peter Zijlstra wrote:
> > > On Thu, Apr 13, 2017 at 10:39:51AM -0700, Paul E. McKenney wrote:
> > > >
> > > > Well, if there are no objections, I will fix up the smp_mb__before_atomic()
> > > > and smp_mb__after_atomic() pieces.
> > >
> > > Feel free.
> >
> > How about if I add this in the atomic_ops.txt description of these
> > two primitives?
> >
> > 	Preceding a non-value-returning read-modify-write atomic
> > 	operation with smp_mb__before_atomic() and following it with
> > 	smp_mb__after_atomic() provides the same full ordering that is
> > 	provided by value-returning read-modify-write atomic operations.
>
> That seems correct. It also already seems a direct implication of the
> extant text though. But as you're wont to say, people need repetition
> and pointing out the obvious etc..

Especially given that it never is obvious until you understand it.
At which point you don't need the documentation.  Therefore,
documentation is mostly useful to people who are missing a few pieces
of the overall puzzle.  Which we all were at some time in the past.  ;-)

> The way I read that document, specifically this:
>
>   "For example, smp_mb__before_atomic() can be used like so:
>
>         obj->dead = 1;
>         smp_mb__before_atomic();
>         atomic_dec(&obj->ref_count);
>
>   It makes sure that all memory operations preceding the atomic_dec()
>   call are strongly ordered with respect to the atomic counter
>   operation."
>
> Leaves no question that these operations must be full barriers.
>
> And therefore, your paragraph that basically states that:
>
>   smp_mb__before_atomic();
>   atomic_inc_return_relaxed();
>   smp_mb__after_atomic();
>
> equals:
>
>   atomic_inc_return();
>
> is implied, no?

That is a reasonable argument, but some very intelligent people
didn't make that leap when reading it, so more redundancy appears
to be needed.
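In case a concrete example helps drive that redundancy home, here is the
sort of thing I have in mind (an illustrative sketch only, with made-up
variable names, not proposed atomic_ops.txt text).  The intent is that
the two functions below provide the same ordering:

        static atomic_t ctr = ATOMIC_INIT(0);
        static int x, y;

        int with_barrier_pair(void)
        {
                WRITE_ONCE(x, 1);
                smp_mb__before_atomic();  /* Order the store to x before the inc. */
                atomic_inc(&ctr);         /* Non-value-returning RMW atomic. */
                smp_mb__after_atomic();   /* Order the load from y after the inc. */
                return READ_ONCE(y);
        }

        /* Ordering-wise, the above is intended to be equivalent to: */

        int with_value_returning(void)
        {
                WRITE_ONCE(x, 1);
                (void)atomic_inc_return(&ctr);  /* Value-returning RMW: fully ordered. */
                return READ_ONCE(y);
        }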
> > commit 5789953adc360b4d3685dc89513655e6bfb83980
> > Author: Paul E. McKenney
> > Date:   Wed Apr 19 16:20:07 2017 -0700
> >
> >     atomics: Add header comment so spin_unlock_wait() and spin_is_locked()
> >
> >     There is material describing the ordering guarantees provided by
> >     spin_unlock_wait() and spin_is_locked(), but it is not necessarily
> >     easy to find.  This commit therefore adds a docbook header comment
> >     to both functions informally describing their semantics.
> >
> >     Signed-off-by: Paul E. McKenney
> >
> > diff --git a/include/linux/spinlock.h b/include/linux/spinlock.h
> > index 59248dcc6ef3..2647dc7f3ea9 100644
> > --- a/include/linux/spinlock.h
> > +++ b/include/linux/spinlock.h
> > @@ -369,11 +369,49 @@ static __always_inline int spin_trylock_irq(spinlock_t *lock)
> >  	raw_spin_trylock_irqsave(spinlock_check(lock), flags); \
> >  })
> >
> > +/**
> > + * spin_unlock_wait - Interpose between successive critical sections
> > + * @lock: the spinlock whose critical sections are to be interposed.
> > + *
> > + * Semantically this is equivalent to a spin_lock() immediately
> > + * followed by a spin_unlock().  However, most architectures have
> > + * more efficient implementations in which the spin_unlock_wait()
> > + * cannot block concurrent lock acquisition, and in some cases
> > + * where spin_unlock_wait() does not write to the lock variable.
> > + * Nevertheless, spin_unlock_wait() can have high overhead, so if
> > + * you feel the need to use it, please check to see if there is
> > + * a better way to get your job done.
> > + *
> > + * The ordering guarantees provided by spin_unlock_wait() are:
> > + *
> > + * 1.  All accesses preceding the spin_unlock_wait() happen before
> > + *     any accesses in later critical sections for this same lock.
> > + * 2.  All accesses following the spin_unlock_wait() happen after
> > + *     any accesses in earlier critical sections for this same lock.
> > + */
> >  static __always_inline void spin_unlock_wait(spinlock_t *lock)
> >  {
> >  	raw_spin_unlock_wait(&lock->rlock);
> >  }
>
> ACK

Very good, adding your Acked-by.
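For whatever it is worth, the kind of use case those two guarantees are
intended to support looks roughly like this (a sketch with made-up names,
not lifted from any current caller):

        static DEFINE_SPINLOCK(the_lock);
        static int flag, the_count;

        /* Updaters increment the_count under the_lock, but only as long
         * as flag remains clear. */
        void updater(void)
        {
                spin_lock(&the_lock);
                if (!READ_ONCE(flag))
                        the_count++;
                spin_unlock(&the_lock);
        }

        /* The reader sets flag, interposes after any current critical
         * section, then samples the_count.  The intent is that #1 above
         * makes later critical sections see flag set, while #2 makes this
         * read see the increments from earlier critical sections. */
        int reader(void)
        {
                WRITE_ONCE(flag, 1);
                spin_unlock_wait(&the_lock);
                return READ_ONCE(the_count);
        }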
> > +/**
> > + * spin_is_locked - Conditionally interpose after prior critical sections
> > + * @lock: the spinlock whose critical sections are to be interposed.
> > + *
> > + * Semantically this is equivalent to a spin_trylock(), and, if
> > + * the spin_trylock() succeeds, immediately followed by a (mythical)
> > + * spin_unlock_relaxed().  The return value from spin_trylock() is returned
> > + * by spin_is_locked().  Note that all current architectures have extremely
> > + * efficient implementations in which the spin_is_locked() does not even
> > + * write to the lock variable.
> > + *
> > + * A successful spin_is_locked() primitive in some sense "takes its place"
> > + * after some critical section for the lock in question.  Any accesses
> > + * following a successful spin_is_locked() call will therefore happen
> > + * after any accesses by any of the preceding critical sections for that
> > + * same lock.  Note however, that spin_is_locked() provides absolutely no
> > + * ordering guarantees for code preceding the call to that spin_is_locked().
> > + */
> >  static __always_inline int spin_is_locked(spinlock_t *lock)
> >  {
> >  	return raw_spin_is_locked(&lock->rlock);
>
> I'm currently confused on this one. The case listed in the qspinlock code
> doesn't appear to exist in the kernel anymore (or at least, I'm having
> trouble finding it).
>
> That said, I'm also not sure spin_is_locked() provides an acquire, as
> that comment has an explicit smp_acquire__after_ctrl_dep();

OK, I have dropped this portion of the patch for the moment.

Going forward, exactly what semantics do you believe spin_is_locked()
provides?  Do any of the current implementations need to change to
provide the semantics expected by the various use cases?

							Thanx, Paul
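PS: To make that question concrete, the pattern whose semantics I am
trying to nail down looks roughly like the following (made-up names, and
I am not claiming that current implementations provide the ordering the
observer is hoping for -- that is exactly the question):

        static DEFINE_SPINLOCK(obj_lock);
        static int obj_state;

        /* Holder side: updates obj_state inside the critical section. */
        void holder(void)
        {
                spin_lock(&obj_lock);
                obj_state = 1;
                spin_unlock(&obj_lock);
        }

        /* Observer side: if the lock is observed to be free, are the
         * observer's subsequent reads guaranteed to see the stores from
         * critical sections that completed beforehand? */
        int observer(void)
        {
                if (!spin_is_locked(&obj_lock))
                        return READ_ONCE(obj_state);
                return -1;
        }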