From: "Joel Fernandes (Google)"
To: linux-kernel@vger.kernel.org
Cc: "Joel Fernandes (Google)", Ingo Molnar, Josh Triplett, Lai Jiangshan,
    Marco Elver, Mathieu Desnoyers, neeraj.iitr10@gmail.com,
    "Paul E. McKenney", rcu@vger.kernel.org, Steven Rostedt,
    "Uladzislau Rezki (Sony)"
Subject: [RFC v5 1/5] rcu/tree: Make rcu_do_batch count how many callbacks were executed
Date: Sun, 20 Sep 2020 21:21:48 -0400
Message-Id: <20200921012152.2831904-2-joel@joelfernandes.org>
X-Mailer: git-send-email 2.28.0.681.g6f77f65b4e-goog
In-Reply-To: <20200921012152.2831904-1-joel@joelfernandes.org>
References: <20200921012152.2831904-1-joel@joelfernandes.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Currently, rcu_do_batch() depends on the unsegmented callback list's len
field to know how many CBs were executed. This field counts down from 0 as
CBs are dequeued. It is possible that not all CBs are run because a limit
is reached, in which case the remaining unexecuted callbacks are requeued
on the CPU's segcblist.

The number of callbacks that were executed (and therefore not requeued) is
then the negative of rcl->len, which has been counting down on every
dequeue. This negative count is added to the per-CPU segmented callback
list's length to correct its count.

Such a design works against future efforts to track the length of each
segment of the segmented callback list, because
rcu_segcblist_extract_done_cbs() will be populating the unsegmented
callback list's length field (rcl->len) during extraction.

Also, the design of counting down from 0 is confusing and error-prone IMHO.

This commit therefore explicitly counts how many callbacks were executed
in rcu_do_batch() itself, and uses that to update the per-CPU segcb list's
->len field, without relying on the negativity of rcl->len.

Signed-off-by: Joel Fernandes (Google)
---
 kernel/rcu/rcu_segcblist.c | 2 +-
 kernel/rcu/rcu_segcblist.h | 1 +
 kernel/rcu/tree.c          | 9 ++++-----
 3 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/kernel/rcu/rcu_segcblist.c b/kernel/rcu/rcu_segcblist.c
index 2d2a6b6b9dfb..bb246d8c6ef1 100644
--- a/kernel/rcu/rcu_segcblist.c
+++ b/kernel/rcu/rcu_segcblist.c
@@ -95,7 +95,7 @@ static void rcu_segcblist_set_len(struct rcu_segcblist *rsclp, long v)
  * This increase is fully ordered with respect to the callers accesses
  * both before and after.
  */
-static void rcu_segcblist_add_len(struct rcu_segcblist *rsclp, long v)
+void rcu_segcblist_add_len(struct rcu_segcblist *rsclp, long v)
 {
 #ifdef CONFIG_RCU_NOCB_CPU
 	smp_mb__before_atomic(); /* Up to the caller! */
diff --git a/kernel/rcu/rcu_segcblist.h b/kernel/rcu/rcu_segcblist.h
index 5c293afc07b8..b90725f81d77 100644
--- a/kernel/rcu/rcu_segcblist.h
+++ b/kernel/rcu/rcu_segcblist.h
@@ -76,6 +76,7 @@ static inline bool rcu_segcblist_restempty(struct rcu_segcblist *rsclp, int seg)
 }
 
 void rcu_segcblist_inc_len(struct rcu_segcblist *rsclp);
+void rcu_segcblist_add_len(struct rcu_segcblist *rsclp, long v);
 void rcu_segcblist_init(struct rcu_segcblist *rsclp);
 void rcu_segcblist_disable(struct rcu_segcblist *rsclp);
 void rcu_segcblist_offload(struct rcu_segcblist *rsclp);
diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index 7623128d0020..50af465729f4 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -2427,7 +2427,7 @@ static void rcu_do_batch(struct rcu_data *rdp)
 		rcu_segcblist_is_offloaded(&rdp->cblist);
 	struct rcu_head *rhp;
 	struct rcu_cblist rcl = RCU_CBLIST_INITIALIZER(rcl);
-	long bl, count;
+	long bl, count = 0;
 	long pending, tlimit = 0;
 
 	/* If no callbacks are ready, just return. */
@@ -2472,6 +2472,7 @@ static void rcu_do_batch(struct rcu_data *rdp)
 	for (; rhp; rhp = rcu_cblist_dequeue(&rcl)) {
 		rcu_callback_t f;
 
+		count++;
 		debug_rcu_head_unqueue(rhp);
 		rcu_lock_acquire(&rcu_callback_map);
@@ -2485,9 +2486,8 @@ static void rcu_do_batch(struct rcu_data *rdp)
 
 		/*
 		 * Stop only if limit reached and CPU has something to do.
-		 * Note: The rcl structure counts down from zero.
 		 */
-		if (-rcl.len >= bl && !offloaded &&
+		if (count >= bl && !offloaded &&
 		    (need_resched() ||
 		     (!is_idle_task(current) && !rcu_is_callbacks_kthread())))
 			break;
@@ -2510,7 +2510,6 @@ static void rcu_do_batch(struct rcu_data *rdp)
 
 	local_irq_save(flags);
 	rcu_nocb_lock(rdp);
-	count = -rcl.len;
 	rdp->n_cbs_invoked += count;
 	trace_rcu_batch_end(rcu_state.name, count, !!rcl.head, need_resched(),
 			    is_idle_task(current), rcu_is_callbacks_kthread());
@@ -2518,7 +2517,7 @@ static void rcu_do_batch(struct rcu_data *rdp)
 	/* Update counts and requeue any remaining callbacks. */
 	rcu_segcblist_insert_done_cbs(&rdp->cblist, &rcl);
 	smp_mb(); /* List handling before counting for rcu_barrier(). */
-	rcu_segcblist_insert_count(&rdp->cblist, &rcl);
+	rcu_segcblist_add_len(&rdp->cblist, -count);
 
 	/* Reinstate batch limit if we have worked down the excess. */
 	count = rcu_segcblist_n_cbs(&rdp->cblist);
-- 
2.28.0.681.g6f77f65b4e-goog
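
For readers unfamiliar with the old bookkeeping, the stand-alone user-space
sketch below (not part of the patch; the toy_* types and names are
hypothetical simplifications of struct rcu_cblist and the segcblist ->len
handling) shows how the old negative-rcl->len scheme and the new explicit
counter arrive at the same executed-callback count.

/*
 * Illustrative sketch only, NOT kernel code: a toy callback list whose
 * ->len counts down from 0 on dequeue, mirroring rcu_cblist_dequeue().
 */
#include <stdio.h>
#include <stdlib.h>

struct toy_cb {
	struct toy_cb *next;
};

struct toy_cblist {			/* stand-in for struct rcu_cblist */
	struct toy_cb *head;
	long len;			/* old scheme: counts down from 0 */
};

static struct toy_cb *toy_cblist_dequeue(struct toy_cblist *rcl)
{
	struct toy_cb *cb = rcl->head;

	if (!cb)
		return NULL;
	rcl->head = cb->next;
	rcl->len--;			/* down-counter, as in the old code */
	return cb;
}

int main(void)
{
	struct toy_cblist rcl = { .head = NULL, .len = 0 };
	long segcb_len = 10;		/* pretend the per-CPU list holds 10 CBs */
	long count = 0;			/* new scheme: explicit executed-CB counter */
	struct toy_cb *cb;
	int i;

	/* "Extract" ten ready callbacks into the unsegmented list. */
	for (i = 0; i < 10; i++) {
		cb = malloc(sizeof(*cb));
		cb->next = rcl.head;
		rcl.head = cb;
	}

	/* Invoke them, roughly as rcu_do_batch() would. */
	while ((cb = toy_cblist_dequeue(&rcl)) != NULL) {
		count++;		/* new scheme: count what actually ran */
		free(cb);		/* "invoke" the callback */
	}

	/* Old scheme: executed count recovered from the negative rcl.len. */
	printf("old: executed=%ld remaining=%ld\n", -rcl.len, segcb_len + rcl.len);
	/* New scheme: explicit counter; segcblist length adjusted by -count. */
	printf("new: executed=%ld remaining=%ld\n", count, segcb_len - count);
	return 0;
}

Both schemes report the same numbers; the point of the patch is that the
explicit count no longer depends on rcl->len serving as a down-counter,
which leaves rcl->len free for the per-segment length tracking mentioned
in the commit message.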