From: Frederic Weisbecker <frederic@kernel.org>
To: "Paul E. McKenney"
Cc: LKML <linux-kernel@vger.kernel.org>, Frederic Weisbecker, rcu, Uladzislau Rezki, Neeraj Upadhyay, Boqun Feng, Joel Fernandes
Subject: [PATCH 04/10] rcu/nocb: Remove needless full barrier after callback advancing
Date: Fri, 8 Sep 2023 22:35:57 +0200
Message-ID: <20230908203603.5865-5-frederic@kernel.org>
In-Reply-To: <20230908203603.5865-1-frederic@kernel.org>
References: <20230908203603.5865-1-frederic@kernel.org>

A full barrier is issued from nocb_gp_wait() upon callbacks advancing
in order to order grace period completion with callbacks execution.
However, these two events are already ordered by the
smp_mb__after_unlock_lock() barrier within the call to
raw_spin_lock_rcu_node() that is necessary for callbacks advancing to
happen.
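For context, the ordering at stake comes from the barrier folded into
the rcu_node locking wrapper. Below is a simplified sketch of that
wrapper, modeled on its definition in kernel/rcu/rcu.h and reproduced
here only for illustration (it is not part of this patch):

	/*
	 * Acquiring an rcu_node lock through this wrapper implies a
	 * full barrier against the previous critical sections of that
	 * lock, which is what orders the grace period sequence read in
	 * the litmus test below.
	 */
	#define raw_spin_lock_rcu_node(p)				\
	do {								\
		raw_spin_lock(&ACCESS_PRIVATE(p, lock));		\
		smp_mb__after_unlock_lock();				\
	} while (0)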
The following litmus test shows the kind of guarantee that this barrier
provides:

	C smp_mb__after_unlock_lock

	{}

	// rcu_gp_cleanup()
	P0(spinlock_t *rnp_lock, int *gpnum)
	{
		// Grace period cleanup increases the gp sequence number
		spin_lock(rnp_lock);
		WRITE_ONCE(*gpnum, 1);
		spin_unlock(rnp_lock);
	}

	// nocb_gp_wait()
	P1(spinlock_t *rnp_lock, spinlock_t *nocb_lock, int *gpnum, int *cb_ready)
	{
		int r1;

		// Call rcu_advance_cbs() from nocb_gp_wait()
		spin_lock(nocb_lock);
		spin_lock(rnp_lock);
		smp_mb__after_unlock_lock();
		r1 = READ_ONCE(*gpnum);
		WRITE_ONCE(*cb_ready, 1);
		spin_unlock(rnp_lock);
		spin_unlock(nocb_lock);
	}

	// nocb_cb_wait()
	P2(spinlock_t *nocb_lock, int *cb_ready, int *cb_executed)
	{
		int r2;

		// rcu_do_batch() -> rcu_segcblist_extract_done_cbs()
		spin_lock(nocb_lock);
		r2 = READ_ONCE(*cb_ready);
		spin_unlock(nocb_lock);

		// Actual callback execution
		WRITE_ONCE(*cb_executed, 1);
	}

	P3(int *cb_executed, int *gpnum)
	{
		int r3;

		WRITE_ONCE(*cb_executed, 2);
		smp_mb();
		r3 = READ_ONCE(*gpnum);
	}

	exists (1:r1=1 /\ 2:r2=1 /\ cb_executed=2 /\ 3:r3=0) (* Bad outcome. *)

Here the bad outcome only occurs if the smp_mb__after_unlock_lock() is
removed. This barrier orders the grace period completion against
callbacks advancing and even later callbacks invocation, thanks to the
opportunistic propagation via the ->nocb_lock to nocb_cb_wait(). (A
note on checking this litmus test with herd7 follows the patch.)

Therefore the smp_mb() placed after callbacks advancing can be safely
removed.

Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
---
 kernel/rcu/tree_nocb.h | 1 -
 1 file changed, 1 deletion(-)

diff --git a/kernel/rcu/tree_nocb.h b/kernel/rcu/tree_nocb.h
index 6e63ba4788e1..2dc76f5e6e78 100644
--- a/kernel/rcu/tree_nocb.h
+++ b/kernel/rcu/tree_nocb.h
@@ -779,7 +779,6 @@ static void nocb_gp_wait(struct rcu_data *my_rdp)
 		if (rcu_segcblist_ready_cbs(&rdp->cblist)) {
 			needwake = rdp->nocb_cb_sleep;
 			WRITE_ONCE(rdp->nocb_cb_sleep, false);
-			smp_mb(); /* CB invocation -after- GP end. */
 		} else {
 			needwake = false;
 		}
-- 
2.41.0
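Note on checking the litmus test: it can be run against the Linux-kernel
memory model shipped in tools/memory-model. Assuming the test above is
saved as, say, nocb-advance.litmus (filename chosen here purely for
illustration), an invocation from the tools/memory-model directory along
the lines of:

	herd7 -conf linux-kernel.cfg nocb-advance.litmus

should report the "exists" clause as never satisfied while the
smp_mb__after_unlock_lock() line is present, and as reachable once that
line is removed, matching the claim above.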