From: Andrea Parri
To: linux-kernel@vger.kernel.org, linux-arch@vger.kernel.org
Cc: Andrea Parri, Alan Stern, Will Deacon, Peter Zijlstra, Boqun Feng,
    Nicholas Piggin, David Howells, Jade Alglave, Luc Maranget,
    "Paul E. McKenney", Akira Yokosawa, Daniel Lustig
Subject: [PATCH] tools/memory-model: Model smp_mb__after_unlock_lock()
Date: Mon, 24 Sep 2018 12:44:49 +0200
Message-Id: <20180924104449.8211-1-andrea.parri@amarulasolutions.com>

From the header comment for smp_mb__after_unlock_lock():

  "Place this after a lock-acquisition primitive to guarantee that
   an UNLOCK+LOCK pair acts as a full barrier.  This guarantee applies
   if the UNLOCK and LOCK are executed by the same CPU or if the
   UNLOCK and LOCK operate on the same lock variable."

This formalizes the above guarantee by defining (new) mb-links according
to the law:

  ([M] ; po ; [UL] ; (co | po) ; [LKW] ;
	fencerel(After-unlock-lock) ; [M])

where the component ([UL] ; co ; [LKW]) identifies "UNLOCK+LOCK pairs on
the same lock variable" and the component ([UL] ; po ; [LKW]) identifies
"UNLOCK+LOCK pairs executed by the same CPU".

In particular, the LKMM forbids the following two behaviors (the second
litmus test below is based on
Documentation/RCU/Design/Memory-Ordering/Tree-RCU-Memory-Ordering.html,
cf. Section "Tree RCU Grace Period Memory Ordering Building Blocks"):

C after-unlock-lock-same-cpu

(*
 * Result: Never
 *)

{}

P0(spinlock_t *s, spinlock_t *t, int *x, int *y)
{
	int r0;

	spin_lock(s);
	WRITE_ONCE(*x, 1);
	spin_unlock(s);
	spin_lock(t);
	smp_mb__after_unlock_lock();
	r0 = READ_ONCE(*y);
	spin_unlock(t);
}

P1(int *x, int *y)
{
	int r0;

	WRITE_ONCE(*y, 1);
	smp_mb();
	r0 = READ_ONCE(*x);
}

exists (0:r0=0 /\ 1:r0=0)

C after-unlock-lock-same-lock-variable

(*
 * Result: Never
 *)

{}

P0(spinlock_t *s, int *x, int *y)
{
	int r0;

	spin_lock(s);
	WRITE_ONCE(*x, 1);
	r0 = READ_ONCE(*y);
	spin_unlock(s);
}

P1(spinlock_t *s, int *y, int *z)
{
	int r0;

	spin_lock(s);
	smp_mb__after_unlock_lock();
	WRITE_ONCE(*y, 1);
	r0 = READ_ONCE(*z);
	spin_unlock(s);
}

P2(int *z, int *x)
{
	int r0;

	WRITE_ONCE(*z, 1);
	smp_mb();
	r0 = READ_ONCE(*x);
}

exists (0:r0=0 /\ 1:r0=0 /\ 2:r0=0)

Signed-off-by: Andrea Parri
Cc: Alan Stern
Cc: Will Deacon
Cc: Peter Zijlstra
Cc: Boqun Feng
Cc: Nicholas Piggin
Cc: David Howells
Cc: Jade Alglave
Cc: Luc Maranget
Cc: "Paul E. McKenney"
Cc: Akira Yokosawa
Cc: Daniel Lustig
---
NOTES.

- A number of equivalent formalizations seem available; for example,
  we could replace "co" in the law above with "coe" (by "coherence")
  and we could limit "coe" to "singlestep(coe)" (by the "prop" term).
  These changes did not show significant performance improvement and
  they looked slightly less readable to me, hence...
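
- A sketch of how the law above instantiates on the two litmus tests
  (the mapping is only meant as a reading aid): in
  after-unlock-lock-same-cpu, P0's WRITE_ONCE(*x, 1) provides the
  left-hand [M], spin_unlock(s) the [UL], spin_lock(t) the [LKW]
  (related to the [UL] by po, i.e., the "same CPU" case), and
  READ_ONCE(*y) the right-hand [M] po-after the fence; the resulting
  mb-link, combined with P1's smp_mb(), forbids the cycle.  In
  after-unlock-lock-same-lock-variable, P0's spin_unlock(s) ([UL]) and
  P1's spin_lock(s) ([LKW]) sit on different CPUs but operate on the
  same lock variable and are related by co in the executions targeted
  by the "exists" clause (the "same lock variable" case), so P0's
  accesses po-before the unlock are mb-linked to P1's accesses
  po-after the fence.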
- The mb-links order memory accesses po-_before_ the lock-release to
  memory accesses po-_after_ the lock-acquire; in particular, this
  formalization does _not_ forbid the following behavior (after
  A. Stern):

C po-in-after-unlock-lock

{}

P0(spinlock_t *s, spinlock_t *t, int *y)
{
	int r0;

	spin_lock(s);
	spin_unlock(s);
	spin_lock(t);
	smp_mb__after_unlock_lock();
	r0 = READ_ONCE(*y);
	spin_unlock(t);
}

P1(spinlock_t *s, int *y)
{
	int r0;

	WRITE_ONCE(*y, 1);
	smp_mb();
	r0 = spin_is_locked(s);
}

exists (0:r0=0 /\ 1:r0=0)

- I'm not aware of currently supported architectures (implementations)
  of smp_mb__after_unlock_lock() and spin_{lock,unlock}() which would
  violate the guarantee formalized in this patch.  It is worth noting
  that the same conclusion does _not_ extend to other locking
  primitives (e.g., write_{lock,unlock}()), AFAICT: cf., e.g., riscv.
  This "works" given the current callsites for the barrier, but yeah...

---
 tools/memory-model/linux-kernel.bell | 3 ++-
 tools/memory-model/linux-kernel.cat  | 4 +++-
 tools/memory-model/linux-kernel.def  | 1 +
 3 files changed, 6 insertions(+), 2 deletions(-)

diff --git a/tools/memory-model/linux-kernel.bell b/tools/memory-model/linux-kernel.bell
index b84fb2f67109e..796513362c052 100644
--- a/tools/memory-model/linux-kernel.bell
+++ b/tools/memory-model/linux-kernel.bell
@@ -29,7 +29,8 @@ enum Barriers = 'wmb (*smp_wmb*) ||
 		'sync-rcu (*synchronize_rcu*) ||
 		'before-atomic (*smp_mb__before_atomic*) ||
 		'after-atomic (*smp_mb__after_atomic*) ||
-		'after-spinlock (*smp_mb__after_spinlock*)
+		'after-spinlock (*smp_mb__after_spinlock*) ||
+		'after-unlock-lock (*smp_mb__after_unlock_lock*)
 instructions F[Barriers]
 
 (* Compute matching pairs of nested Rcu-lock and Rcu-unlock *)
diff --git a/tools/memory-model/linux-kernel.cat b/tools/memory-model/linux-kernel.cat
index 882fc33274ac3..8f23c74a96fdc 100644
--- a/tools/memory-model/linux-kernel.cat
+++ b/tools/memory-model/linux-kernel.cat
@@ -30,7 +30,9 @@ let wmb = [W] ; fencerel(Wmb) ; [W]
 let mb = ([M] ; fencerel(Mb) ; [M]) |
 	([M] ; fencerel(Before-atomic) ; [RMW] ; po? ; [M]) |
 	([M] ; po? ; [RMW] ; fencerel(After-atomic) ; [M]) |
-	([M] ; po? ; [LKW] ; fencerel(After-spinlock) ; [M])
+	([M] ; po? ; [LKW] ; fencerel(After-spinlock) ; [M]) |
+	([M] ; po ; [UL] ; (co | po) ; [LKW] ;
+		fencerel(After-unlock-lock) ; [M])
 let gp = po ; [Sync-rcu] ; po?
 
 let strong-fence = mb | gp
diff --git a/tools/memory-model/linux-kernel.def b/tools/memory-model/linux-kernel.def
index 6fa3eb28d40b5..b27911cc087d4 100644
--- a/tools/memory-model/linux-kernel.def
+++ b/tools/memory-model/linux-kernel.def
@@ -23,6 +23,7 @@ smp_wmb() { __fence{wmb}; }
 smp_mb__before_atomic() { __fence{before-atomic}; }
 smp_mb__after_atomic() { __fence{after-atomic}; }
 smp_mb__after_spinlock() { __fence{after-spinlock}; }
+smp_mb__after_unlock_lock() { __fence{after-unlock-lock}; }
 
 // Exchange
 xchg(X,V)  __xchg{mb}(X,V)

-- 
2.17.1
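
P.S.  For reference, a sketch of how the litmus tests above can be
checked against the updated model; the file names below are only
placeholders, and the invocation assumes herd7 is set up as described
in tools/memory-model/README:

  $ cd tools/memory-model
  $ herd7 -conf linux-kernel.cfg after-unlock-lock-same-cpu.litmus
  $ herd7 -conf linux-kernel.cfg after-unlock-lock-same-lock-variable.litmus

With this patch applied, both runs should report the outcome in the
"exists" clause as never occurring, matching the "Result: Never"
annotations in the tests.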