Date: Wed, 28 Feb 2018 10:16:34 -0500 (EST)
From: Alan Stern
To: Will Deacon
cc: Andrea Parri, linux-kernel@vger.kernel.org, Peter Zijlstra,
    Boqun Feng, Nicholas Piggin, David Howells, Jade Alglave,
    Luc Maranget, "Paul E. McKenney", Akira Yokosawa
Subject: Re: [PATCH] Documentation/locking: Document the semantics of spin_is_locked()
In-Reply-To: <20180228105631.GA7681@arm.com>

On Wed, 28 Feb 2018, Will Deacon wrote:

> On Wed, Feb 28, 2018 at 11:39:32AM +0100, Andrea Parri wrote:
> > There appeared to be a certain, recurrent uncertainty concerning the
> > semantics of spin_is_locked(), likely a consequence of the fact that
> > this semantics remains undocumented or that it has been historically
> > linked to the (likewise unclear) semantics of spin_unlock_wait().
> >
> > Document this semantics.
> >
> > Signed-off-by: Andrea Parri
> > Cc: Alan Stern
> > Cc: Will Deacon
> > Cc: Peter Zijlstra
> > Cc: Boqun Feng
> > Cc: Nicholas Piggin
> > Cc: David Howells
> > Cc: Jade Alglave
> > Cc: Luc Maranget
> > Cc: "Paul E. McKenney"
> > Cc: Akira Yokosawa
> > ---
> >  include/linux/spinlock.h | 11 +++++++++++
> >  1 file changed, 11 insertions(+)
> >
> > diff --git a/include/linux/spinlock.h b/include/linux/spinlock.h
> > index 4894d322d2584..2639fdc9a916c 100644
> > --- a/include/linux/spinlock.h
> > +++ b/include/linux/spinlock.h
> > @@ -380,6 +380,17 @@ static __always_inline int spin_trylock_irq(spinlock_t *lock)
> >  	raw_spin_trylock_irqsave(spinlock_check(lock), flags); \
> >  })
> >
> > +/**
> > + * spin_is_locked() - Check whether a spinlock is locked.
> > + * @lock: Pointer to the spinlock.
> > + *
> > + * This function is NOT required to provide any memory ordering
> > + * guarantees; it could be used for debugging purposes or, when
> > + * additional synchronization is needed, accompanied with other
> > + * constructs (memory barriers) enforcing the synchronization.
> > + *
> > + * Return: 1, if @lock is (found to be) locked; 0, otherwise.
> > + */
>
> I also don't think this is quite right, since the spin_is_locked check
> must be ordered after all prior lock acquisitions (to any lock) on the same
> CPU. That's why we have an smp_mb() in there on arm64 (see 38b850a73034f).

Pardon me for being a little slow ... I often have trouble
understanding what people mean when they talk about ordering.  In this
case it sounds like you're referring to situations where we want to
avoid ABBA deadlocks, and we want to avoid the overhead of actually
taking a lock when all that's really needed is to check that nobody
else owns the lock.

	spinlock_t a, b;

	P0()
	{
		spin_lock(&a);
		while (spin_is_locked(&b))
			cpu_relax();
		/* Assert P1 isn't in its critical section and won't enter it */
		...
		spin_unlock(&a);
	}

	P1()
	{
		spin_lock(&b);
		if (spin_is_locked(&a)) {
			spin_unlock(&b);
			/* P0 wants us not to do anything */
			return;
		}
		...
		spin_unlock(&b);
	}

Is this what you meant?  If so, the basic pattern is essentially SB.
Taking a lock involves doing a store, and testing a lock involves
doing a read.  So ignoring all the extraneous stuff and any memory
barriers, this looks like:

	P0()
	{
		WRITE_ONCE(a, 1);
		r0 = READ_ONCE(b);
	}

	P1()
	{
		WRITE_ONCE(b, 1);
		r1 = READ_ONCE(a);
	}

	exists (0:r0=0 /\ 1:r1=0)

And of course it is well known that the SB pattern requires full
memory barriers on both sides.
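For concreteness, the fenced variant of this pattern can be written as
a litmus test in herd7's kernel-memory-model syntax (an illustrative
sketch; the test name here is made up).  With the two smp_mb() calls
in place, the "exists" state is forbidden:

	C SB+mbs

	{}

	P0(int *x, int *y)
	{
		int r0;

		WRITE_ONCE(*x, 1);
		smp_mb();
		r0 = READ_ONCE(*y);
	}

	P1(int *x, int *y)
	{
		int r1;

		WRITE_ONCE(*y, 1);
		smp_mb();
		r1 = READ_ONCE(*x);
	}

	exists (0:r0=0 /\ 1:r1=0)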
Hence the original code should have been written more like this:

	P0()
	{
		spin_lock(&a);
		smp_mb__after_spinlock();
		while (spin_is_locked(&b))
			cpu_relax();
		/* Assert P1 won't enter its critical section */
		...
		spin_unlock(&a);
	}

	P1()
	{
		spin_lock(&b);
		smp_mb__after_spinlock();
		if (spin_is_locked(&a)) {
			spin_unlock(&b);
			/* P0 wants us not to do anything */
			return;
		}
		...
		spin_unlock(&b);
	}

with no need for any particular ordering of the spin_is_locked calls.

Or have I completely misunderstood your point?

Alan
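For completeness, the corrected pattern can likewise be expressed as a
litmus test over the locks themselves (again an illustrative sketch,
assuming a herd7 lock model with spin_is_locked() support; a single
spin_is_locked() read stands in for P0's polling loop).  The "exists"
state, in which each CPU sees the other's lock as free, is expected to
be forbidden:

	C SB+locks+mbs-after-spinlock

	{}

	P0(spinlock_t *a, spinlock_t *b)
	{
		int r0;

		spin_lock(a);
		smp_mb__after_spinlock();
		r0 = spin_is_locked(b);
		spin_unlock(a);
	}

	P1(spinlock_t *a, spinlock_t *b)
	{
		int r1;

		spin_lock(b);
		smp_mb__after_spinlock();
		r1 = spin_is_locked(a);
		spin_unlock(b);
	}

	exists (0:r0=0 /\ 1:r1=0)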