Date: Thu, 15 Oct 2020 15:35:11 +0200
From: Frederic Weisbecker
To: "Joel Fernandes (Google)"
Cc: linux-kernel@vger.kernel.org, Ingo Molnar, Josh Triplett,
 Lai Jiangshan, Marco Elver, Mathieu Desnoyers, "Paul E. McKenney",
 rcu@vger.kernel.org, Steven Rostedt, "Uladzislau Rezki (Sony)",
 fweisbec@gmail.com, neeraj.iitr10@gmail.com
Subject: Re: [PATCH v7 6/6] rcu/segcblist: Add additional comments to explain smp_mb()
Message-ID: <20201015133511.GB127222@lothringen>
References: <20201015002301.101830-1-joel@joelfernandes.org>
 <20201015002301.101830-7-joel@joelfernandes.org>
In-Reply-To: <20201015002301.101830-7-joel@joelfernandes.org>

On Wed, Oct 14, 2020 at 08:23:01PM -0400, Joel Fernandes (Google) wrote:
> Memory barriers are needed when updating the full length of the
> segcblist, however it is not fully clearly why one is needed before and
> after. This patch therefore adds additional comments to the function
> header to explain it.
>
> Signed-off-by: Joel Fernandes (Google)
> ---
>  kernel/rcu/rcu_segcblist.c | 38 ++++++++++++++++++++++++++++++++++----
>  1 file changed, 34 insertions(+), 4 deletions(-)
>
> diff --git a/kernel/rcu/rcu_segcblist.c b/kernel/rcu/rcu_segcblist.c
> index 271d5d9d7f60..25ffd07f9951 100644
> --- a/kernel/rcu/rcu_segcblist.c
> +++ b/kernel/rcu/rcu_segcblist.c
> @@ -147,17 +147,47 @@ static void rcu_segcblist_inc_seglen(struct rcu_segcblist *rsclp, int seg)
>  * field to disagree with the actual number of callbacks on the structure.
>  * This increase is fully ordered with respect to the callers accesses
>  * both before and after.
> + *
> + * About memory barriers:
> + * There is a situation where rcu_barrier() locklessly samples the full
> + * length of the segmented cblist before deciding what to do. That can
> + * race with another path that calls this function. rcu_barrier() should
> + * not wrongly assume there are no callbacks, so any transitions from 1->0
> + * and 0->1 have to be carefully ordered with respect to list modifications.
> + *
> + * Memory barrier is needed before adding to length, for the case where
> + * v is negative which does not happen in current code, but used
> + * to happen. Keep the memory barrier for robustness reasons.

Heh, I seem to recognize someone's decision style ;-)

> When/If the
> + * length transitions from 1 -> 0, the write to 0 has to be ordered *after*
> + * the memory accesses of the CBs that were dequeued and the segcblist
> + * modifications:
> + * P0 (what P1 sees)		P1
> + * set len = 0
> + *				rcu_barrier sees len as 0
> + * dequeue from list
> + *				rcu_barrier does nothing.

It's a bit difficult to read that way. So that would be:

	rcu_do_batch()			rcu_barrier()
	--				--
	dequeue				l = READ(len)
	smp_mb()			if (!l)
	WRITE(len, 0)			    check next CPU...

But I'm a bit confused about what it pairs with in rcu_barrier().

> + *
> + * Memory barrier is needed after adding to length for the case
> + * where length transitions from 0 -> 1. This is because rcu_barrier()
> + * should never miss an update to the length. So the update to length
> + * has to be seen *before* any modifications to the segmented list. Otherwise a
> + * race can happen.
> + * P0 (what P1 sees)		P1
> + * queue to list
> + *				rcu_barrier sees len as 0
> + * set len = 1.
> + *				rcu_barrier does nothing.

So that would be:

	call_rcu()			rcu_barrier()
	--				--
	WRITE(len, len + 1)		l = READ(len)
	smp_mb()			if (!l)
	queue				    check next CPU...

But I still don't see what it pairs with in rcu_barrier().

Thanks.