Date: Wed, 28 Jun 2017 10:03:21 -0700
From: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
To: Alan Stern
Cc: Linus Torvalds, Andrea Parri, Linux Kernel Mailing List,
    priyalee.kushwaha@intel.com, drozdziak1@gmail.com, Arnd Bergmann,
    ldr709@gmail.com, Thomas Gleixner, Peter Zijlstra, Josh Triplett,
    Nicolas Pitre, Krister Johansen, Vegard Nossum, dcb314@hotmail.com,
    Wu Fengguang, Frederic Weisbecker, Rik van Riel, Steven Rostedt,
    Ingo Molnar, Luc Maranget, Jade Alglave
Subject: Re: [GIT PULL rcu/next] RCU commits for 4.13
Message-Id: <20170628170321.GQ3721@linux.vnet.ibm.com>
References: <20170627233748.GZ3721@linux.vnet.ibm.com>

On Wed, Jun 28, 2017 at 11:31:55AM -0400, Alan Stern wrote:
> On Tue, 27 Jun 2017, Paul E. McKenney wrote:
> 
> > On Tue, Jun 27, 2017 at 02:48:18PM -0700, Linus Torvalds wrote:
> > > On Tue, Jun 27, 2017 at 1:58 PM, Paul E. McKenney
> > > <paulmck@linux.vnet.ibm.com> wrote:
> > > >
> > > > So what next?
> > > >
> > > > One option would be to weaken the definition of spin_unlock_wait() so
> > > > that it had acquire semantics but not release semantics.  Alternatively,
> > > > we could keep the full empty-critical-section semantics and add memory
> > > > barriers to spin_unlock_wait() implementations, perhaps as shown in the
> > > > patch below.  I could go either way, though I do have some preference
> > > > for the stronger semantics.
> > > >
> > > > Thoughts?
> > > 
> > > I would prefer to just say
> > > 
> > >  - document that spin_unlock_wait() has acquire semantics
> > > 
> > >  - mindlessly add the smp_mb() to all users
> > > 
> > >  - let users then decide if they are ok with just acquire
> > > 
> > > That's partly because I think we actually have *fewer* users than we
> > > have implementations of spin_unlock_wait().  So adding a few smp_mb()'s
> > > in the users is actually likely the smaller change.
> > 
> > You are right about that!  There are only five invocations of
> > spin_unlock_wait() in the kernel, with a sixth that has since been
> > converted to spin_lock() immediately followed by spin_unlock().
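For concreteness, the pattern Linus is proposing would look something
like the sketch below, with "flag", "my_lock", and the enclosing
function all being made-up names standing in for whatever the real
callers use:

static DEFINE_SPINLOCK(my_lock);	/* illustrative names only */
static int flag;

void signal_and_wait(void)		/* hypothetical caller */
{
	/*
	 * The store to "flag" must be visible to any CPU that acquires
	 * "my_lock" afterwards.  With spin_unlock_wait() documented as
	 * acquire-only, the caller supplies the store->load ordering:
	 */
	WRITE_ONCE(flag, 1);
	smp_mb();	/* Order the store before the lock-word loads. */
	spin_unlock_wait(&my_lock);
	/* All prior my_lock critical sections are now visible here. */
}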
> > > But it's also because then that allows people who *can* say that
> > > acquire is sufficient to just use it.  People who use
> > > spin_unlock_wait() tend to have some odd performance reason to do so,
> > > so I think allowing them to use the more light-weight memory ordering
> > > if it works for them is a good idea.
> > > 
> > > But finally, it's partly because I think "acquire" semantics are
> > > actually the saner ones that we can explain the logic for much more
> > > clearly.
> > > 
> > > Basically, acquire semantics means that you are guaranteed to see any
> > > changes that were done inside a previously locked region.
> > > 
> > > Doesn't that sound like sensible semantics?
> > 
> > It is the semantics that most implementations of spin_unlock_wait()
> > provide.  Of the six invocations, two of them very clearly rely
> > only on the acquire semantics and two others already have the needed
> > memory barriers in place.  I have queued one patch to add smp_mb()
> > to the remaining spin_unlock_wait() of the surviving five instances,
> > and another patch to convert the spin_lock/unlock pair to smp_mb()
> > followed by spin_unlock_wait().
> > 
> > So, yes, it is a sensible set of semantics.  At this point, agreeing
> > on -any- reasonable semantics would be good, as it would allow us to
> > get locking added to the prototype Linux-kernel memory model.  ;-)
> > 
> > > Then, the argument for "smp_mb()" (before the spin_unlock_wait()) becomes:
> > > 
> > >  - I did a write that will affect any future lock takes
> > > 
> > >  - the smp_mb() now means that that write will be ordered wrt the
> > >    acquire that guarantees we've seen all old actions taken by a lock.
> > > 
> > > Do those kinds of semantics make sense to people?
> 
> The problem is that adding smp_mb() before spin_unlock_wait() does not
> provide release semantics, as Andrea has pointed out.  ISTM that when
> people want full release & acquire semantics, they should just use
> "spin_lock(); spin_unlock();".

Well, from what I can see, this thread is certainly demonstrating to my
satisfaction that we really do need a memory model.  ;-)

I agree that calling a loop of loads a "release" is at best confusing,
and it certainly conflicts with all known nomenclature.  So let's set
aside whether or not to call it a "release" and try to agree on litmus
tests.  Of course, as we both know and as noted long ago on LKML, we
cannot -define- semantics via litmus tests, but we can use litmus tests
to -check- semantics.

First, modeling lock acquisition with xchg(), which is fully ordered
(given that locking is still very much under development in our model):

C SUW+or-ow+l-ow-or

{}

P0(int *x, int *y, int *my_lock)
{
	r0 = READ_ONCE(*x);
	WRITE_ONCE(*y, 1);
	smp_mb();
	r1 = READ_ONCE(*my_lock);
}

P1(int *x, int *y, int *my_lock)
{
	r1 = xchg(my_lock, 1);
	WRITE_ONCE(*x, 1);
	r0 = READ_ONCE(*y);
}

exists (0:r0=1 /\ 0:r1=0 /\ 1:r0=0 /\ 1:r1=0)

This gives "Positive: 0 Negative: 5".  But to your point, replacing
"xchg()" with "xchg_acquire()" gets us "Positive: 1 Negative: 7".

But xchg_acquire() would in fact work on x86 because the xchg
instruction does full ordering in any case, right?  (I believe that this
also applies to the other TSO architectures, but have not checked.)
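(For anyone wanting to reproduce these counts: assuming a herd7
installation and a configuration file, here called linux-kernel.cfg,
that points at the prototype model's .bell and .cat files, the
invocation would be something like this:

	herd7 -conf linux-kernel.cfg SUW+or-ow+l-ow-or.litmus

"Positive" counts the executions satisfying the "exists" clause and
"Negative" the executions that do not.)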
For PowerPC (and I believe ARM), the leading smp_mb() suffices, as
illustrated by this litmus test:

PPC spin_unlock_wait_st.litmus
""
{
l=0;
0:r1=1; 0:r3=42; 0:r4=x; 0:r10=0;          0:r12=l;
1:r1=1; 1:r3=42; 1:r4=x; 1:r10=0; 1:r11=0; 1:r12=l;
}
 P0                 | P1                 ;
 stw r1,0(r4)       | lwarx r11,r10,r12  ;
 sync               | cmpwi r11,0        ;
 lwarx r11,r10,r12  | bne Fail1          ;
 stwcx. r11,r10,r12 | stwcx. r1,r10,r12  ;
 bne Fail0          | bne Fail1          ;
 lwz r3,0(r12)      | lwsync             ;
 Fail0:             | lwz r3,0(r4)       ;
                    | Fail1:             ;

exists (0:r3=0 /\ 1:r3=0)

So what am I missing here?

> If there are any places where this would add unacceptable overhead,
> maybe those places need some rethinking.  For instance, perhaps we
> could add a separate primitive that provides only release semantics.
> (But would using the new primitive together with spin_unlock_wait
> really be significantly better than lock-unlock?)

At the moment, I have no idea on the relative overheads.  ;-)

							Thanx, Paul