Date: Wed, 12 Sep 2018 10:24:37 -0400 (EDT)
From: Alan Stern
To: "Paul E. McKenney"
Cc: Daniel Lustig, Will Deacon, Andrea Parri, Andrea Parri,
    Kernel development list, Jade Alglave, Luc Maranget, Palmer Dabbelt
Subject: Re: [PATCH RFC LKMM 1/7] tools/memory-model: Add extra ordering
    for locks and remove it for ordinary release/acquire
In-Reply-To: <20180911200328.GA4225@linux.vnet.ibm.com>
X-Mailing-List: linux-kernel@vger.kernel.org

On Tue, 11 Sep 2018, Paul E.
McKenney wrote:

> > I think what you meant to write in the second and third sentences
> > was something more like this:
> >
> > 	Any code in an RCU critical section that extends beyond the
> > 	end of a given RCU grace period is guaranteed to see the
> > 	effects of all accesses which were visible to the grace
> > 	period's CPU before the start of the grace period.  Similarly,
> > 	any code that follows an RCU grace period (on the grace
> > 	period's CPU) is guaranteed to see the effects of all accesses
> > 	which were visible to an RCU critical section that began
> > 	before the start of the grace period.
>
> That looks to me to be an improvement, other than that the "(on the
> grace period's CPU)" seems a bit restrictive -- you could for example
> have a release-acquire chain starting after the grace period, right?

The restriction was deliberate.  Without it, people might think the
guarantee applies to any code that happens after a grace period, which
is not true (depending on what you mean by "happens after").

The business about a release-acquire chain is derivable from what I
wrote: since the chain starts on the grace period's CPU, the effect is
guaranteed to be visible there, and since release-acquire chains
preserve visibility, the effect is guaranteed to be visible to the code
at the end of the chain as well.

> > Also, the document doesn't seem to explain how Tree RCU relies on
> > the lock-ordering guarantees of raw_spin_lock_rcu_node() and
> > friends.  It _says_ that these guarantees are used, but not how or
> > where.  (Unless I missed something; I didn't read the document all
> > that carefully.)
>
> The closest is this sentence: "But the only part of
> rcu_prepare_for_idle() that really matters for this discussion are
> lines 37–39", which refers to this code:
>
> 37 raw_spin_lock_rcu_node(rnp);
> 38 needwake = rcu_accelerate_cbs(rsp, rnp, rdp);
> 39 raw_spin_unlock_rcu_node(rnp);
>
> I could add a sentence explaining the importance of the
> smp_mb__after_unlock_lock() -- is that what you are getting at?

What I was really asking is for the document to explain what could go
wrong if the smp_mb__after_unlock_lock() call were omitted from
raw_spin_lock_rcu_node().  What assumptions or requirements in the
Tree RCU code might fail, and how/where in the code are those
assumptions or requirements used?

> > In any case, you should bear in mind that the lock ordering
> > provided by Peter's raw_spin_lock_rcu_node() and friends is not the
> > same as what we have been discussing for the LKMM:
> >
> > 	Peter's routines are meant for the case where you release
> > 	one lock and then acquire another (for example, locks in
> > 	two different levels of the RCU tree).
> >
> > 	The LKMM patch applies only to cases where one CPU releases
> > 	a lock and then that CPU or another acquires the _same_ lock
> > 	again.
> >
> > As another difference, the litmus test given near the start of the
> > "Tree RCU Grace Period Memory Ordering Building Blocks" section
> > would not be forbidden by the LKMM, even with RCtso locks, if it
> > didn't use raw_spin_lock_rcu_node().  This is because the litmus
> > test is forbidden only when locks are RCsc, which is what
> > raw_spin_lock_rcu_node() provides.
>
> Agreed.
>
> > So I don't see how the RCU code can be held up as an example either
> > for or against requiring locks to be RCtso.
>
> Agreed again.  The use of smp_mb__after_unlock_lock() instead
> provides RCsc.  But this use case is deemed sufficiently rare that
> smp_mb__after_unlock_lock() is defined within RCU.
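To make the RCtso/RCsc difference concrete, here is a minimal litmus
test in the style of tools/memory-model/litmus-tests/ (the test name
and the exact shape are mine, not taken from the patch):

```
C unlock-lock-write-read

(*
 * P0 releases and then re-acquires the same lock.  The question is
 * whether its pre-unlock store to x can be reordered with its
 * post-lock load from y.  Under RCtso locks this SB-like outcome
 * remains allowed, because write-to-read ordering across the
 * unlock-lock pair is exactly what RCtso gives up; under RCsc locks
 * (or with smp_mb__after_unlock_lock()) it is forbidden.
 *)

{}

P0(int *x, int *y, spinlock_t *s)
{
	int r1;

	spin_lock(s);
	WRITE_ONCE(*x, 1);
	spin_unlock(s);
	spin_lock(s);
	r1 = READ_ONCE(*y);
	spin_unlock(s);
}

P1(int *x, int *y)
{
	int r2;

	WRITE_ONCE(*y, 1);
	smp_mb();
	r2 = READ_ONCE(*x);
}

exists (0:r1=0 /\ 1:r2=0)
```

Note this is the same-lock, same-CPU case covered by the LKMM patch,
not the cross-lock case handled by Peter's routines.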
So let me repeat the original question which started this train of
thought: Can anybody point to any actual code in the kernel that
relies on locks being RCtso, or that could be simplified if locks were
guaranteed to be RCtso?

There have been claims that such code exists somewhere in RCU and the
scheduler.  Where is it?

Alan