From: "Joel Fernandes (Google)"
To: linux-kernel@vger.kernel.org
Cc: "Joel Fernandes (Google)", Frederic Weisbecker, Josh Triplett, Lai Jiangshan, Marco Elver, Mathieu Desnoyers, "Paul E. McKenney", rcu@vger.kernel.org, Steven Rostedt, "Uladzislau Rezki (Sony)", fweisbec@gmail.com, neeraj.iitr10@gmail.com
Subject: [PATCH v8 1/6] rcu/tree: Make rcu_do_batch count how many callbacks were executed
Date: Wed, 21 Oct 2020 15:08:08 -0400
Message-Id: <20201021190813.3005054-2-joel@joelfernandes.org>
In-Reply-To: <20201021190813.3005054-1-joel@joelfernandes.org>
References: <20201021190813.3005054-1-joel@joelfernandes.org>

Currently, rcu_do_batch() depends on the unsegmented callback list's len
field to know how many CBs were executed. This field counts down from 0 as
CBs are dequeued. It is possible that not all CBs can be run because a
limit is reached, in which case the remaining unexecuted callbacks are
requeued onto the CPU's segcblist.

The number of callbacks that were executed (and hence not requeued) is then
the negative of rcl->len, which has been counting down on every dequeue.
This negative count is then added to the per-CPU segmented callback list's
->len field to correct its count.

Such a design works against future efforts to track the length of each
segment of the segmented callback list. The reason is that
rcu_segcblist_extract_done_cbs() will be populating the unsegmented
callback list's length field (rcl->len) during extraction.

Also, the design of counting down from 0 is confusing and error-prone IMHO.

This commit therefore explicitly counts how many callbacks were executed in
rcu_do_batch() itself, and uses that to update the per-CPU segcb list's
->len field, without relying on the negativity of rcl->len.

Signed-off-by: Joel Fernandes (Google)
Reviewed-by: Frederic Weisbecker
---
 kernel/rcu/rcu_segcblist.c |  2 +-
 kernel/rcu/rcu_segcblist.h |  1 +
 kernel/rcu/tree.c          | 11 +++++------
 3 files changed, 7 insertions(+), 7 deletions(-)

diff --git a/kernel/rcu/rcu_segcblist.c b/kernel/rcu/rcu_segcblist.c
index 2d2a6b6b9dfb..bb246d8c6ef1 100644
--- a/kernel/rcu/rcu_segcblist.c
+++ b/kernel/rcu/rcu_segcblist.c
@@ -95,7 +95,7 @@ static void rcu_segcblist_set_len(struct rcu_segcblist *rsclp, long v)
  * This increase is fully ordered with respect to the callers accesses
  * both before and after.
  */
-static void rcu_segcblist_add_len(struct rcu_segcblist *rsclp, long v)
+void rcu_segcblist_add_len(struct rcu_segcblist *rsclp, long v)
 {
 #ifdef CONFIG_RCU_NOCB_CPU
 	smp_mb__before_atomic(); /* Up to the caller! */
diff --git a/kernel/rcu/rcu_segcblist.h b/kernel/rcu/rcu_segcblist.h
index 492262bcb591..1d2d61406463 100644
--- a/kernel/rcu/rcu_segcblist.h
+++ b/kernel/rcu/rcu_segcblist.h
@@ -76,6 +76,7 @@ static inline bool rcu_segcblist_restempty(struct rcu_segcblist *rsclp, int seg)
 }
 
 void rcu_segcblist_inc_len(struct rcu_segcblist *rsclp);
+void rcu_segcblist_add_len(struct rcu_segcblist *rsclp, long v);
 void rcu_segcblist_init(struct rcu_segcblist *rsclp);
 void rcu_segcblist_disable(struct rcu_segcblist *rsclp);
 void rcu_segcblist_offload(struct rcu_segcblist *rsclp);
diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index 286dc0a1b184..24c00020ab83 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -2429,7 +2429,7 @@ static void rcu_do_batch(struct rcu_data *rdp)
 	const bool offloaded = rcu_segcblist_is_offloaded(&rdp->cblist);
 	struct rcu_head *rhp;
 	struct rcu_cblist rcl = RCU_CBLIST_INITIALIZER(rcl);
-	long bl, count;
+	long bl, count = 0;
 	long pending, tlimit = 0;
 
 	/* If no callbacks are ready, just return. */
@@ -2474,6 +2474,7 @@ static void rcu_do_batch(struct rcu_data *rdp)
 	for (; rhp; rhp = rcu_cblist_dequeue(&rcl)) {
 		rcu_callback_t f;
 
+		count++;
 		debug_rcu_head_unqueue(rhp);
 		rcu_lock_acquire(&rcu_callback_map);
@@ -2487,15 +2488,14 @@ static void rcu_do_batch(struct rcu_data *rdp)
 		/*
 		 * Stop only if limit reached and CPU has something to do.
-		 * Note: The rcl structure counts down from zero.
 		 */
-		if (-rcl.len >= bl && !offloaded &&
+		if (count >= bl && !offloaded &&
 		    (need_resched() ||
 		     (!is_idle_task(current) && !rcu_is_callbacks_kthread())))
 			break;
 		if (unlikely(tlimit)) {
 			/* only call local_clock() every 32 callbacks */
-			if (likely((-rcl.len & 31) || local_clock() < tlimit))
+			if (likely((count & 31) || local_clock() < tlimit))
 				continue;
 			/* Exceeded the time limit, so leave. */
 			break;
@@ -2512,7 +2512,6 @@ static void rcu_do_batch(struct rcu_data *rdp)
 	local_irq_save(flags);
 	rcu_nocb_lock(rdp);
-	count = -rcl.len;
 	rdp->n_cbs_invoked += count;
 	trace_rcu_batch_end(rcu_state.name, count, !!rcl.head, need_resched(),
 			    is_idle_task(current), rcu_is_callbacks_kthread());
@@ -2520,7 +2519,7 @@ static void rcu_do_batch(struct rcu_data *rdp)
 	/* Update counts and requeue any remaining callbacks. */
 	rcu_segcblist_insert_done_cbs(&rdp->cblist, &rcl);
 	smp_mb(); /* List handling before counting for rcu_barrier(). */
-	rcu_segcblist_insert_count(&rdp->cblist, &rcl);
+	rcu_segcblist_add_len(&rdp->cblist, -count);
 
 	/* Reinstate batch limit if we have worked down the excess. */
 	count = rcu_segcblist_n_cbs(&rdp->cblist);
-- 
2.29.0.rc1.297.gfa9743e501-goog
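
For readers who want the gist of the counting change without the kernel
context, below is a minimal stand-alone user-space sketch of the scheme
described in the changelog. It is not kernel code: struct cb, struct cblist
and run_batch() are hypothetical stand-ins for rcu_head, rcu_cblist /
rcu_segcblist and rcu_do_batch(), and 'limit' plays the role of bl.

#include <stdio.h>
#include <stdlib.h>

struct cb {
	struct cb *next;
	void (*func)(struct cb *);
};

struct cblist {
	struct cb *head;
	long len;			/* plays the role of the segcblist ->len field */
};

static void do_work(struct cb *c)
{
	printf("callback %p invoked\n", (void *)c);
}

/* Run at most 'limit' callbacks; return how many were executed. */
static long run_batch(struct cblist *cl, long limit)
{
	long count = 0;
	struct cb *c;

	while ((c = cl->head) != NULL) {
		cl->head = c->next;
		count++;		/* explicit count-up, as in the patch */
		c->func(c);
		free(c);
		if (count >= limit)	/* analogous to the 'count >= bl' check */
			break;
	}
	cl->len -= count;		/* analogous to rcu_segcblist_add_len(rsclp, -count) */
	return count;
}

int main(void)
{
	struct cblist cl = { .head = NULL, .len = 0 };
	long executed;
	int i;

	/* Queue five dummy callbacks. */
	for (i = 0; i < 5; i++) {
		struct cb *c = malloc(sizeof(*c));

		c->func = do_work;
		c->next = cl.head;
		cl.head = c;
		cl.len++;
	}

	executed = run_batch(&cl, 3);
	printf("executed %ld callbacks, %ld still queued\n", executed, cl.len);
	return 0;
}

The point of the design is that count only ever grows while callbacks are
being invoked, so the final length adjustment is a single subtraction rather
than the negation of a counter that has been running down from zero.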