Date: Tue, 4 Jun 2019 10:44:18 -0400 (EDT)
From: Alan Stern
To: "Paul E. McKenney"
cc: Boqun Feng, Herbert Xu, Linus Torvalds, Frederic Weisbecker,
    Fengguang Wu, LKP, LKML, Netdev, "David S. Miller", Andrea Parri,
    Luc Maranget, Jade Alglave
Subject: Re: rcu_read_lock lost its compiler barrier
In-Reply-To: <20190603200301.GM28207@linux.ibm.com>
X-Mailing-List: linux-kernel@vger.kernel.org

On Mon, 3 Jun 2019, Paul E. McKenney wrote:

> On Mon, Jun 03, 2019 at 02:42:00PM +0800, Boqun Feng wrote:
> > On Mon, Jun 03, 2019 at 01:26:26PM +0800, Herbert Xu wrote:
> > > On Sun, Jun 02, 2019 at 08:47:07PM -0700, Paul E. McKenney wrote:
> > > >
> > > > 1.	These guarantees are of full memory barriers, -not- compiler
> > > >	barriers.
> > >
> > > What I'm saying is that wherever they are, they must come with
> > > compiler barriers.  I'm not aware of any synchronisation mechanism
> > > in the kernel that gives a memory barrier without a compiler barrier.
> > >
> > > > 2.	These rules don't say exactly where these full memory barriers
> > > >	go.  SRCU is at one extreme, placing those full barriers in
> > > >	srcu_read_lock() and srcu_read_unlock(), and !PREEMPT Tree RCU
> > > >	at the other, placing these barriers entirely within the callback
> > > >	queueing/invocation, grace-period computation, and the scheduler.
> > > >	Preemptible Tree RCU is in the middle, with rcu_read_unlock()
> > > >	sometimes including a full memory barrier, but other times with
> > > >	the full memory barrier being confined as it is with !PREEMPT
> > > >	Tree RCU.
> > >
> > > The rules do say that the (full) memory barrier must precede any
> > > RCU read-side that occurs after the synchronize_rcu and must come
> > > after the end of any RCU read-side that occurs before the
> > > synchronize_rcu.
> > >
> > > All I'm arguing is that wherever that full mb is, as long as it
> > > also carries with it a barrier() (which it must do if it's done
> > > using an existing kernel mb/locking primitive), then we're fine.
> > >
> > > > Interleaving and inserting full memory barriers as per the rules
> > > > above:
> > > >
> > > > 	CPU1: WRITE_ONCE(a, 1)
> > > > 	CPU1: synchronize_rcu
> > > > 	/* Could put a full memory barrier here, but it wouldn't help. */
> > >
> > > 	CPU1: smp_mb();
> > > 	CPU2: smp_mb();
> > >
> > > Let's put them in because I think they are critical.  smp_mb() also
> > > carries with it a barrier().
> > >
> > > > 	CPU2: rcu_read_lock();
> > > > 	CPU1: b = 2;
> > > > 	CPU2: if (READ_ONCE(a) == 0)
> > > > 	CPU2: if (b != 1) /* Weakly ordered CPU moved this up! */
> > > > 	CPU2: 	b = 1;
> > > > 	CPU2: rcu_read_unlock();
> > > >
> > > > In fact, CPU2's load from b might be moved up to race with CPU1's
> > > > store, which (I believe) is why the model complains in this case.
> > >
> > > Let's put aside my doubt over how we're even allowing a compiler
> > > to turn
> > >
> > > 	b = 1
> > >
> > > into
> > >
> > > 	if (b != 1)
> > > 		b = 1

Even if you don't think the compiler will ever do this, the C standard
gives compilers the right to invent read accesses if a plain (i.e.,
non-atomic and non-volatile) write is present.  The Linux Kernel Memory
Model has to assume that compilers will sometimes do this, even if it
doesn't take the exact form of checking a variable's value before
writing to it.

(Incidentally, regardless of whether the compiler will ever do this, I
have seen examples in the kernel where people did exactly this manually,
in order to avoid dirtying a cache line unnecessarily.)
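To make the transformation concrete, here is a minimal C sketch of the
idiom just mentioned (the function names are illustrative only, not
taken from any particular kernel source file):

	/* Plain write, as the programmer might write it: */
	static void set_flag(int *b)
	{
		*b = 1;
	}

	/*
	 * The transformation under discussion: read first, store only
	 * when the value must change.  Written by hand, this avoids
	 * dirtying the cache line when *b is already 1; performed by a
	 * compiler, it is exactly the invented read that turns "b = 1"
	 * into "if (b != 1) b = 1".
	 */
	static void set_flag_if_needed(int *b)
	{
		if (*b != 1)
			*b = 1;
	}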
> > > Since you seem to be assuming that (a == 0) is true in this case
> >
> > I think Paul's example assumes that (a == 0) is false, and maybe
>
> Yes, otherwise, P0()'s write to "b" cannot have happened.
>
> > speculative writes (by compilers) need to be taken into consideration?

On the other hand, the C standard does not allow compilers to add
speculative writes.  The LKMM assumes they will never occur.

> I would instead call it the compiler eliminating needless writes by
> inventing reads -- if the variable already has the correct value, no
> write happens.  So no compiler speculation.
>
> However, it is difficult to create a solid defensible example.  Yes,
> from LKMM's viewpoint, the weakly reordered invented read from "b" can
> be concurrent with P0()'s write to "b", but in that case the value
> loaded would have to manage to be equal to 1 for anything bad to
> happen.  This does feel wrong to me, but again, it is difficult to
> create a solid defensible example.
>
> > Please consider the following case (I add a few smp_mb()s); the case
> > may be a little bit crazy, you have been warned ;-)
> >
> > 	CPU1: WRITE_ONCE(a, 1)
> > 	CPU1: synchronize_rcu called
> >
> > 	CPU1: smp_mb(); /* let's assume there is one here */
> >
> > 	CPU2: rcu_read_lock();
> > 	CPU2: smp_mb(); /* let's assume there is one here */
> >
> > 	/* "if (b != 1) b = 1" reordered */
> > 	CPU2: r0 = b; /* the "if (b != 1)" load reordered here, r0 == 0 */
> > 	CPU2: if (r0 != 1) /* true */
> > 	CPU2: 	b = 1; /* b == 1 now; this is a speculative write
> > 	               by the compiler */
> >
> > 	CPU1: b = 2; /* b == 2 */
> >
> > 	CPU2: if (READ_ONCE(a) == 0) /* false */
> > 	CPU2: 	...
> > 	CPU2: else /* undo the speculative write */
> > 	CPU2: 	b = r0; /* b == 0 */
> >
> > 	CPU2: smp_mb();
> > 	CPU2: rcu_read_unlock();
> >
> > I know it is too crazy to think that a compiler would do this, but
> > it might be the reason why the model complains in this case.
> >
> > Paul, did I get this right?  Or did you mean something else?
>
> Mostly there, except that I am not yet desperate enough to appeal to
> compilers speculating stores.  ;-)

This example really does point out a weakness in the LKMM's handling of
data races.  Herbert's litmus test is a great starting point:

C xu

{}

P0(int *a, int *b)
{
	WRITE_ONCE(*a, 1);
	synchronize_rcu();
	*b = 2;
}

P1(int *a, int *b)
{
	rcu_read_lock();
	if (READ_ONCE(*a) == 0)
		*b = 1;
	rcu_read_unlock();
}

exists (~b=2)

Currently the LKMM says the test is allowed and that there is a data
race, but this answer is clearly wrong, since it would violate the RCU
guarantee.

The problem is that the LKMM currently requires all ordering/visibility
of plain accesses to be mediated by marked accesses.  But in this case
the visibility is mediated by RCU.  Technically, we need to add a
relation like

	([M] ; po ; rcu-fence ; po ; [M])

into the definitions of ww-vis, wr-vis, and rw-xbstar.  Doing so changes
the litmus test's result to "not allowed" with no data race.  However,
I'm not certain that this single change is the entire fix; more thought
is needed.

Alan
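To make the proposed change concrete, the relation above could be
sketched in the memory model's cat notation roughly as follows.  This
is only an illustration of the idea, not a tested patch to
tools/memory-model/linux-kernel.cat, and the relation name
rcu-mediated-vis is invented here:

	(* Plain-access visibility mediated by an RCU grace period. *)
	let rcu-mediated-vis = [M] ; po ; rcu-fence ; po ; [M]

	(* The idea is then to fold this link into the existing
	   definitions, roughly:
		ww-vis    := ww-vis    | rcu-mediated-vis
		wr-vis    := wr-vis    | rcu-mediated-vis
		rw-xbstar := rw-xbstar | rcu-mediated-vis  *)

With such a change applied, the litmus test above could be rechecked
with herd7 from the kernel's tools/memory-model directory, for example
(assuming the test is saved as xu.litmus):

	herd7 -conf linux-kernel.cfg xu.litmus

The expectation, per the analysis above, is that the verdict would then
change from "allowed, with a data race" to "not allowed" with no data
race reported.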