Date: Mon, 9 Nov 2020 17:28:42 -0800
From: "Paul E.
McKenney"
To: Joel Fernandes
Cc: linux-kernel@vger.kernel.org, Josh Triplett, Lai Jiangshan,
	Marco Elver, Mathieu Desnoyers, rcu@vger.kernel.org,
	Steven Rostedt, "Uladzislau Rezki (Sony)", fweisbec@gmail.com,
	neeraj.iitr10@gmail.com
Subject: Re: [PATCH v9 7/7] rcu/segcblist: Add additional comments to explain smp_mb()
Message-ID: <20201110012842.GO3249@paulmck-ThinkPad-P72>
Reply-To: paulmck@kernel.org
References: <20201103142603.1302207-1-joel@joelfernandes.org>
	<20201103142603.1302207-8-joel@joelfernandes.org>
	<20201105185551.GO3249@paulmck-ThinkPad-P72>
	<20201106224141.GA1397669@google.com>
In-Reply-To: <20201106224141.GA1397669@google.com>

On Fri, Nov 06, 2020 at 05:41:41PM -0500, Joel Fernandes wrote:
> On Thu, Nov 05, 2020 at 10:55:51AM -0800, Paul E. McKenney wrote:
> > On Tue, Nov 03, 2020 at 09:26:03AM -0500, Joel Fernandes (Google) wrote:
> > > Memory barriers are needed when updating the full length of the
> > > segcblist, but it is not fully clear why one is needed both before
> > > and after.  This patch therefore adds additional comments to the
> > > function header to explain it.
> > >
> > > Signed-off-by: Joel Fernandes (Google)
> >
> > Looks good, thank you!  As always, I could not resist the urge to
> > do a bit of wordsmithing, so that the queued commit is as shown
> > below.  Please let me know if I messed anything up.
> >
> > 							Thanx, Paul
> >
> > ------------------------------------------------------------------------
> >
> > commit 7dac7adefcae7558b3a85a16f51186d621623733
> > Author: Joel Fernandes (Google)
> > Date:   Tue Nov 3 09:26:03 2020 -0500
> >
> >     rcu/segcblist: Add additional comments to explain smp_mb()
> >
> >     One counter-intuitive property of RCU is the fact that full memory
> >     barriers are needed both before and after updates to the full
> >     (non-segmented) length.  This patch therefore helps to assist the
> >     reader's intuition by adding appropriate comments.
> >
> >     [ paulmck: Wordsmithing. ]
> >     Signed-off-by: Joel Fernandes (Google)
> >     Signed-off-by: Paul E. McKenney
> >
> > diff --git a/kernel/rcu/rcu_segcblist.c b/kernel/rcu/rcu_segcblist.c
> > index bb246d8..b6dda7c 100644
> > --- a/kernel/rcu/rcu_segcblist.c
> > +++ b/kernel/rcu/rcu_segcblist.c
> > @@ -94,17 +94,77 @@ static void rcu_segcblist_set_len(struct rcu_segcblist *rsclp, long v)
> >   * field to disagree with the actual number of callbacks on the structure.
> >   * This increase is fully ordered with respect to the callers accesses
> >   * both before and after.
> > + *
> > + * So why on earth is a memory barrier required both before and after
> > + * the update to the ->len field???
> > + *
> > + * The reason is that rcu_barrier() locklessly samples each CPU's ->len
> > + * field, and if a given CPU's field is zero, avoids IPIing that CPU.
> > + * This can of course race with both queuing and invoking of callbacks.
> > + * Failing to correctly handle either of these races could result in
> > + * rcu_barrier() failing to IPI a CPU that actually had callbacks queued
> > + * which rcu_barrier() was obligated to wait on.  And if rcu_barrier()
> > + * failed to wait on such a callback, unloading certain kernel modules
> > + * would result in calls to functions whose code was no longer present in
> > + * the kernel, for but one example.
> > + *
> > + * Therefore, ->len transitions from 1->0 and 0->1 have to be carefully
> > + * ordered with respect to both list modifications and rcu_barrier().
> > + *
> > + * The queuing case is CASE 1 and the invoking case is CASE 2.
> > + *
> > + * CASE 1: Suppose that CPU 0 has no callbacks queued, but invokes
> > + * call_rcu() just as CPU 1 invokes rcu_barrier().  CPU 0's ->len field
> > + * will transition from 0->1, which is one of the transitions that must be
> > + * handled carefully.  Without the full memory barriers before the ->len
> > + * update and at the beginning of rcu_barrier(), the following could happen:
> > + *
> > + * CPU 0				CPU 1
> > + *
> > + * call_rcu().
> > + *					rcu_barrier() sees ->len as 0.
> > + * set ->len = 1.
> > + *					rcu_barrier() does nothing.
> > + *					module is unloaded.
> > + * callback invokes unloaded function!
> > + *
> > + * With the full barriers, any case where rcu_barrier() sees ->len as 0 will
> > + * have unambiguously preceded the return from the racing call_rcu(), which
> > + * means that this call_rcu() invocation is OK to not wait on.  After all,
> > + * you are supposed to make sure that any problematic call_rcu() invocations
> > + * happen before the rcu_barrier().
>
> Unfortunately, I did not understand your explanation.  To me, the barrier
> *before* the setting of the length is needed on CPU 0 only for the 1->0
> transition (dequeue), whereas in your example above it is for enqueue.
>
> This was case 1 in my patch:
>
> + *  To illustrate the problematic scenario to avoid:
> + *  P0 (what P1 sees)		P1
> + *  set len = 0
> + *				rcu_barrier sees len as 0
> + *  dequeue from list
> + *				rcu_barrier does nothing.
> + *
>
> Here, P1 should see the transition of 1->0 *after* the CB is dequeued,
> which means you need a memory barrier *before* the setting of len from
> 1->0 and *after* the dequeue.  IOW, rcu_barrier should 'see' the memory
> ordering as:
>
> 1. dequeue
> 2. set len from 1 -> 0.
>
> For the enqueue case, it is the reverse; rcu_barrier should see:
>
> 1. set len from 0 -> 1
> 2. enqueue
>
> Either way, the point I think I was trying to make is that the length
> should always be seen as non-zero if the list is non-empty.  Basically,
> rcu_barrier() should never take the fast path when the list is
> non-empty.  Worst case, it might take the slow path when that is not
> necessary, but it should never take the fast path when it was not
> supposed to.
>
> Thoughts?

Right you are!  I reversed the before/after associated with ->len.
I will fix this.

							Thanx, Paul

> thanks,
>
>  - Joel
>
> > + *
> > + *
> > + * CASE 2: Suppose that CPU 0 is invoking its last callback just as CPU 1
> > + * invokes rcu_barrier().  CPU 0's ->len field will transition from 1->0,
> > + * which is one of the transitions that must be handled carefully.  Without
> > + * the full memory barriers after the ->len update and at the end of
> > + * rcu_barrier(), the following could happen:
> > + *
> > + * CPU 0				CPU 1
> > + *
> > + * start invoking last callback
> > + * set ->len = 0 (reordered)
> > + *					rcu_barrier() sees ->len as 0
> > + *					rcu_barrier() does nothing.
> > + *					module is unloaded
> > + * callback executing after unloaded!
> > + *
> > + * With the full barriers, any case where rcu_barrier() sees ->len as 0
> > + * will be fully ordered after the completion of the callback function,
> > + * so that the module unloading operation is completely safe.
> > + *
> > + */
> >  void rcu_segcblist_add_len(struct rcu_segcblist *rsclp, long v)
> >  {
> >  #ifdef CONFIG_RCU_NOCB_CPU
> > -	smp_mb__before_atomic(); /* Up to the caller! */
> > +	smp_mb__before_atomic(); // Read header comment above.
> >  	atomic_long_add(v, &rsclp->len);
> > -	smp_mb__after_atomic(); /* Up to the caller! */
> > +	smp_mb__after_atomic(); // Read header comment above.
> >  #else
> > -	smp_mb(); /* Up to the caller! */
> > +	smp_mb(); // Read header comment above.
> >  	WRITE_ONCE(rsclp->len, rsclp->len + v);
> > -	smp_mb(); /* Up to the caller! */
> > +	smp_mb(); // Read header comment above.
> >  #endif
> >  }
> >