From: "Joel Fernandes (Google)" <joel@joelfernandes.org>
To: linux-kernel@vger.kernel.org
Cc: "Joel Fernandes (Google)", Andy Lutomirski, Frederic Weisbecker,
	frextrite@gmail.com, Ingo Molnar, Josh Triplett, kernel-team@android.com,
	Lai Jiangshan, madhuparnabhowmik04@gmail.com, Mathieu Desnoyers,
	"Paul E. McKenney", peterz@infradead.org, Petr Mladek, rcu@vger.kernel.org,
	rostedt@goodmis.org, tglx@linutronix.de, vpillai@digitalocean.com
Subject: [PATCH v3 1/5] Revert b8c17e6664c4 ("rcu: Maintain special bits at bottom of ->dynticks counter")
Date: Mon, 4 May 2020 08:05:01 -0400
Message-Id: <20200504120505.89351-2-joel@joelfernandes.org>
X-Mailer: git-send-email 2.26.2.526.g744177e7f7-goog
In-Reply-To: <20200504120505.89351-1-joel@joelfernandes.org>
References: <20200504120505.89351-1-joel@joelfernandes.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

This code is unused and can be removed now. Revert was straightforward.

Tested with rcutorture on all TREE configurations.

Link: http://lore.kernel.org/r/CALCETrWNPOOdTrFabTDd=H7+wc6xJ9rJceg6OL1S0rTV5pfSsA@mail.gmail.com
Suggested-by: Andy Lutomirski
Signed-off-by: Joel Fernandes (Google)
---
 include/linux/rcutiny.h |  3 --
 kernel/rcu/tree.c       | 93 +++++++++++------------------------------
 2 files changed, 24 insertions(+), 72 deletions(-)

diff --git a/include/linux/rcutiny.h b/include/linux/rcutiny.h
index 3465ba704a111..dbcddc7b26b94 100644
--- a/include/linux/rcutiny.h
+++ b/include/linux/rcutiny.h
@@ -14,9 +14,6 @@
 
 #include <asm/param.h> /* for HZ */
 
-/* Never flag non-existent other CPUs! */
-static inline bool rcu_eqs_special_set(int cpu) { return false; }
-
 static inline unsigned long get_state_synchronize_rcu(void)
 {
 	return 0;
diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index 6d39485f7f517..1ec7b1d4a03c4 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -82,20 +82,10 @@
 
 /* Data structures. */
 
-/*
- * Steal a bit from the bottom of ->dynticks for idle entry/exit
- * control.  Initially this is for TLB flushing.
- */
-#define RCU_DYNTICK_CTRL_MASK 0x1
-#define RCU_DYNTICK_CTRL_CTR  (RCU_DYNTICK_CTRL_MASK + 1)
-#ifndef rcu_eqs_special_exit
-#define rcu_eqs_special_exit() do { } while (0)
-#endif
-
 static DEFINE_PER_CPU_SHARED_ALIGNED(struct rcu_data, rcu_data) = {
 	.dynticks_nesting = 1,
 	.dynticks_nmi_nesting = DYNTICK_IRQ_NONIDLE,
-	.dynticks = ATOMIC_INIT(RCU_DYNTICK_CTRL_CTR),
+	.dynticks = ATOMIC_INIT(1),
 };
 static struct rcu_state rcu_state = {
 	.level = { &rcu_state.node[0] },
@@ -245,21 +235,18 @@ void rcu_softirq_qs(void)
 static void rcu_dynticks_eqs_enter(void)
 {
 	struct rcu_data *rdp = this_cpu_ptr(&rcu_data);
-	int seq;
+	int special;
+
+	rcu_dynticks_task_trace_enter();  // Before ->dynticks update!
 
 	/*
-	 * CPUs seeing atomic_add_return() must see prior RCU read-side
+	 * CPUs seeing atomic_inc_return() must see prior RCU read-side
 	 * critical sections, and we also must force ordering with the
 	 * next idle sojourn.
 	 */
-	rcu_dynticks_task_trace_enter();  // Before ->dynticks update!
-	seq = atomic_add_return(RCU_DYNTICK_CTRL_CTR, &rdp->dynticks);
+	special = atomic_inc_return(&rdp->dynticks);
 	// RCU is no longer watching.  Better be in extended quiescent state!
-	WARN_ON_ONCE(IS_ENABLED(CONFIG_RCU_EQS_DEBUG) &&
-		     (seq & RCU_DYNTICK_CTRL_CTR));
-	/* Better not have special action (TLB flush) pending! */
-	WARN_ON_ONCE(IS_ENABLED(CONFIG_RCU_EQS_DEBUG) &&
-		     (seq & RCU_DYNTICK_CTRL_MASK));
+	WARN_ON_ONCE(IS_ENABLED(CONFIG_RCU_EQS_DEBUG) && special & 0x1);
 }
 
 /*
@@ -270,24 +257,18 @@ static void rcu_dynticks_eqs_enter(void)
 static void rcu_dynticks_eqs_exit(void)
 {
 	struct rcu_data *rdp = this_cpu_ptr(&rcu_data);
-	int seq;
+	int special;
 
 	/*
-	 * CPUs seeing atomic_add_return() must see prior idle sojourns,
+	 * CPUs seeing atomic_inc_return() must see prior idle sojourns,
	 * and we also must force ordering with the next RCU read-side
 	 * critical section.
 	 */
-	seq = atomic_add_return(RCU_DYNTICK_CTRL_CTR, &rdp->dynticks);
+	special = atomic_inc_return(&rdp->dynticks);
 	// RCU is now watching.  Better not be in an extended quiescent state!
+	WARN_ON_ONCE(IS_ENABLED(CONFIG_RCU_EQS_DEBUG) && !(special & 0x1));
+
 	rcu_dynticks_task_trace_exit();  // After ->dynticks update!
-	WARN_ON_ONCE(IS_ENABLED(CONFIG_RCU_EQS_DEBUG) &&
-		     !(seq & RCU_DYNTICK_CTRL_CTR));
-	if (seq & RCU_DYNTICK_CTRL_MASK) {
-		atomic_andnot(RCU_DYNTICK_CTRL_MASK, &rdp->dynticks);
-		smp_mb__after_atomic(); /* _exit after clearing mask. */
-		/* Prefer duplicate flushes to losing a flush. */
-		rcu_eqs_special_exit();
-	}
 }
 
 /*
@@ -304,9 +285,9 @@ static void rcu_dynticks_eqs_online(void)
 {
 	struct rcu_data *rdp = this_cpu_ptr(&rcu_data);
 
-	if (atomic_read(&rdp->dynticks) & RCU_DYNTICK_CTRL_CTR)
+	if (atomic_read(&rdp->dynticks) & 0x1)
 		return;
-	atomic_add(RCU_DYNTICK_CTRL_CTR, &rdp->dynticks);
+	atomic_add(0x1, &rdp->dynticks);
 }
 
 /*
@@ -318,7 +299,7 @@ static bool rcu_dynticks_curr_cpu_in_eqs(void)
 {
 	struct rcu_data *rdp = this_cpu_ptr(&rcu_data);
 
-	return !(atomic_read(&rdp->dynticks) & RCU_DYNTICK_CTRL_CTR);
+	return !(atomic_read(&rdp->dynticks) & 0x1);
 }
 
 /*
@@ -329,7 +310,7 @@ static int rcu_dynticks_snap(struct rcu_data *rdp)
 {
 	int snap = atomic_add_return(0, &rdp->dynticks);
 
-	return snap & ~RCU_DYNTICK_CTRL_MASK;
+	return snap;
 }
 
 /*
@@ -338,7 +319,7 @@ static int rcu_dynticks_snap(struct rcu_data *rdp)
  */
 static bool rcu_dynticks_in_eqs(int snap)
 {
-	return !(snap & RCU_DYNTICK_CTRL_CTR);
+	return !(snap & 0x1);
 }
 
 /*
@@ -361,8 +342,7 @@ bool rcu_dynticks_zero_in_eqs(int cpu, int *vp)
 	int snap;
 
 	// If not quiescent, force back to earlier extended quiescent state.
-	snap = atomic_read(&rdp->dynticks) & ~(RCU_DYNTICK_CTRL_MASK |
-					       RCU_DYNTICK_CTRL_CTR);
+	snap = atomic_read(&rdp->dynticks) & ~(0x1);
 
 	smp_rmb(); // Order ->dynticks and *vp reads.
 	if (READ_ONCE(*vp))
@@ -370,32 +350,7 @@ bool rcu_dynticks_zero_in_eqs(int cpu, int *vp)
 	smp_rmb(); // Order *vp read and ->dynticks re-read.
 
 	// If still in the same extended quiescent state, we are good!
-	return snap == (atomic_read(&rdp->dynticks) & ~RCU_DYNTICK_CTRL_MASK);
-}
-
-/*
- * Set the special (bottom) bit of the specified CPU so that it
- * will take special action (such as flushing its TLB) on the
- * next exit from an extended quiescent state.  Returns true if
- * the bit was successfully set, or false if the CPU was not in
- * an extended quiescent state.
- */
-bool rcu_eqs_special_set(int cpu)
-{
-	int old;
-	int new;
-	int new_old;
-	struct rcu_data *rdp = &per_cpu(rcu_data, cpu);
-
-	new_old = atomic_read(&rdp->dynticks);
-	do {
-		old = new_old;
-		if (old & RCU_DYNTICK_CTRL_CTR)
-			return false;
-		new = old | RCU_DYNTICK_CTRL_MASK;
-		new_old = atomic_cmpxchg(&rdp->dynticks, old, new);
-	} while (new_old != old);
-	return true;
+	return snap == atomic_read(&rdp->dynticks);
 }
 
 /*
@@ -411,13 +366,13 @@ bool rcu_eqs_special_set(int cpu)
  */
 void rcu_momentary_dyntick_idle(void)
 {
-	int special;
+	struct rcu_data *rdp = this_cpu_ptr(&rcu_data);
+	int special = atomic_add_return(2, &rdp->dynticks);
 
-	raw_cpu_write(rcu_data.rcu_need_heavy_qs, false);
-	special = atomic_add_return(2 * RCU_DYNTICK_CTRL_CTR,
-				    &this_cpu_ptr(&rcu_data)->dynticks);
 	/* It is illegal to call this from idle state. */
-	WARN_ON_ONCE(!(special & RCU_DYNTICK_CTRL_CTR));
+	WARN_ON_ONCE(!(special & 0x1));
+
+	raw_cpu_write(rcu_data.rcu_need_heavy_qs, false);
 	rcu_preempt_deferred_qs(current);
 }
 EXPORT_SYMBOL_GPL(rcu_momentary_dyntick_idle);
-- 
2.26.2.526.g744177e7f7-goog
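
For context, the encoding change being reverted is small but subtle: the old
scheme reserved bit 0 of ->dynticks (RCU_DYNTICK_CTRL_MASK) so that
rcu_eqs_special_set() could request work such as a TLB flush on the next EQS
exit, and therefore stepped the counter by RCU_DYNTICK_CTRL_CTR (2); the
scheme restored by this revert is a plain counter whose odd values mean RCU
is watching. A minimal user-space sketch of the two encodings, with C11
atomics standing in for the kernel's atomic_t and hypothetical demo_* names
(illustration only, not kernel code):

/* Illustration only -- not kernel code.  C11 atomics model atomic_t;
 * the demo_* names are hypothetical. */
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

#define RCU_DYNTICK_CTRL_MASK 0x1                          /* old: special-action (TLB flush) bit */
#define RCU_DYNTICK_CTRL_CTR  (RCU_DYNTICK_CTRL_MASK + 1)  /* old: counter increment, i.e. 2      */

static atomic_int demo_old = RCU_DYNTICK_CTRL_CTR; /* old scheme, initially "watching" */
static atomic_int demo_new = 1;                    /* new scheme, odd == "watching"    */

static bool old_in_eqs(int v) { return !(v & RCU_DYNTICK_CTRL_CTR); }
static bool new_in_eqs(int v) { return !(v & 0x1); }

int main(void)
{
	int v;

	/* Old scheme: EQS entry/exit steps the counter by 2, leaving
	 * bit 0 free for rcu_eqs_special_set() to request work. */
	v = atomic_fetch_add(&demo_old, RCU_DYNTICK_CTRL_CTR) + RCU_DYNTICK_CTRL_CTR;
	assert(old_in_eqs(v));   /* 4: counter bit clear, in EQS */
	v = atomic_fetch_add(&demo_old, RCU_DYNTICK_CTRL_CTR) + RCU_DYNTICK_CTRL_CTR;
	assert(!old_in_eqs(v));  /* 6: counter bit set, watching */

	/* Restored scheme: plain increment, low bit is the state. */
	v = atomic_fetch_add(&demo_new, 1) + 1;
	assert(new_in_eqs(v));   /* 2: even, in EQS  */
	v = atomic_fetch_add(&demo_new, 1) + 1;
	assert(!new_in_eqs(v));  /* 3: odd, watching */
	return 0;
}

Under the old encoding, the debug warnings test the CTR bit; under the
restored encoding the same checks reduce to testing bit 0, which is why the
patch replaces RCU_DYNTICK_CTRL_CTR with 0x1 throughout.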