Date: Mon, 20 Nov 2023 21:17:57 -0800
From: "Paul E. McKenney" <paulmck@kernel.org>
McKenney" To: Ankur Arora Cc: linux-kernel@vger.kernel.org, tglx@linutronix.de, peterz@infradead.org, torvalds@linux-foundation.org, linux-mm@kvack.org, x86@kernel.org, akpm@linux-foundation.org, luto@kernel.org, bp@alien8.de, dave.hansen@linux.intel.com, hpa@zytor.com, mingo@redhat.com, juri.lelli@redhat.com, vincent.guittot@linaro.org, willy@infradead.org, mgorman@suse.de, jon.grimm@amd.com, bharata@amd.com, raghavendra.kt@amd.com, boris.ostrovsky@oracle.com, konrad.wilk@oracle.com, jgross@suse.com, andrew.cooper3@citrix.com, mingo@kernel.org, bristot@kernel.org, mathieu.desnoyers@efficios.com, geert@linux-m68k.org, glaubitz@physik.fu-berlin.de, anton.ivanov@cambridgegreys.com, mattst88@gmail.com, krypton@ulrich-teichert.org, rostedt@goodmis.org, David.Laight@aculab.com, richard@nod.at, mjguzik@gmail.com Subject: Re: [RFC PATCH 48/86] rcu: handle quiescent states for PREEMPT_RCU=n Message-ID: <46a4c47a-ba1c-4776-a6f8-6c2146cbdd0d@paulmck-laptop> Reply-To: paulmck@kernel.org References: <20231107215742.363031-1-ankur.a.arora@oracle.com> <20231107215742.363031-49-ankur.a.arora@oracle.com> <2027da00-273d-41cf-b9e7-460776181083@paulmck-laptop> <87lear4wj6.fsf@oracle.com> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <87lear4wj6.fsf@oracle.com> X-Spam-Status: No, score=-2.1 required=5.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF, RCVD_IN_DNSWL_BLOCKED,SPF_HELO_NONE,SPF_PASS,T_SCC_BODY_TEXT_LINE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-Greylist: Sender passed SPF test, not delayed by milter-greylist-4.6.4 (snail.vger.email [0.0.0.0]); Mon, 20 Nov 2023 21:18:07 -0800 (PST) On Mon, Nov 20, 2023 at 07:26:05PM -0800, Ankur Arora wrote: > > Paul E. McKenney writes: > > On Tue, Nov 07, 2023 at 01:57:34PM -0800, Ankur Arora wrote: > >> cond_resched() is used to provide urgent quiescent states for > >> read-side critical sections on PREEMPT_RCU=n configurations. > >> This was necessary because lacking preempt_count, there was no > >> way for the tick handler to know if we were executing in RCU > >> read-side critical section or not. > >> > >> An always-on CONFIG_PREEMPT_COUNT, however, allows the tick to > >> reliably report quiescent states. > >> > >> Accordingly, evaluate preempt_count() based quiescence in > >> rcu_flavor_sched_clock_irq(). > >> > >> Suggested-by: Paul E. McKenney > >> Signed-off-by: Ankur Arora > >> --- > >> kernel/rcu/tree_plugin.h | 3 ++- > >> kernel/sched/core.c | 15 +-------------- > >> 2 files changed, 3 insertions(+), 15 deletions(-) > >> > >> diff --git a/kernel/rcu/tree_plugin.h b/kernel/rcu/tree_plugin.h > >> index f87191e008ff..618f055f8028 100644 > >> --- a/kernel/rcu/tree_plugin.h > >> +++ b/kernel/rcu/tree_plugin.h > >> @@ -963,7 +963,8 @@ static void rcu_preempt_check_blocked_tasks(struct rcu_node *rnp) > >> */ > >> static void rcu_flavor_sched_clock_irq(int user) > >> { > >> - if (user || rcu_is_cpu_rrupt_from_idle()) { > >> + if (user || rcu_is_cpu_rrupt_from_idle() || > >> + !(preempt_count() & (PREEMPT_MASK | SOFTIRQ_MASK))) { > > > > This looks good. 
> >
> >>  		/*
> >>  		 * Get here if this CPU took its interrupt from user
> >> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> >> index bf5df2b866df..15db5fb7acc7 100644
> >> --- a/kernel/sched/core.c
> >> +++ b/kernel/sched/core.c
> >> @@ -8588,20 +8588,7 @@ int __sched _cond_resched(void)
> >>  		preempt_schedule_common();
> >>  		return 1;
> >>  	}
> >> -	/*
> >> -	 * In preemptible kernels, ->rcu_read_lock_nesting tells the tick
> >> -	 * whether the current CPU is in an RCU read-side critical section,
> >> -	 * so the tick can report quiescent states even for CPUs looping
> >> -	 * in kernel context. In contrast, in non-preemptible kernels,
> >> -	 * RCU readers leave no in-memory hints, which means that CPU-bound
> >> -	 * processes executing in kernel context might never report an
> >> -	 * RCU quiescent state. Therefore, the following code causes
> >> -	 * cond_resched() to report a quiescent state, but only when RCU
> >> -	 * is in urgent need of one.
> >> -	 */
> >> -#ifndef CONFIG_PREEMPT_RCU
> >> -	rcu_all_qs();
> >> -#endif
> >
> > But...
> >
> > Suppose we have a long-running loop in the kernel that regularly
> > enables preemption, but only momentarily.  Then the added
> > rcu_flavor_sched_clock_irq() check would almost always fail, making
> > for extremely long grace periods.
> 
> So, my thinking was that if RCU wants to end a grace period, it would
> force a context switch by setting TIF_NEED_RESCHED (and, as patch 38
> mentions, RCU always uses the eager version), causing __schedule() to
> call rcu_note_context_switch().
> That's similar to the preempt_schedule_common() case in the
> _cond_resched() above.

But that requires IPIing that CPU, correct?

> But I see your point: RCU might just want to register a quiescent
> state, and for this long-running loop rcu_flavor_sched_clock_irq()
> does seem to fall down.
> 
> > Or did I miss a change that causes preempt_enable() to help RCU out?
> 
> Something like this?
> 
> diff --git a/include/linux/preempt.h b/include/linux/preempt.h
> index dc5125b9c36b..e50f358f1548 100644
> --- a/include/linux/preempt.h
> +++ b/include/linux/preempt.h
> @@ -222,6 +222,8 @@ do { \
>  	barrier(); \
>  	if (unlikely(preempt_count_dec_and_test())) \
>  		__preempt_schedule(); \
> +	if (!(preempt_count() & (PREEMPT_MASK | SOFTIRQ_MASK))) \
> +		rcu_all_qs(); \
>  } while (0)

Or maybe something like this to lighten the load a bit:

#define preempt_enable() \
do { \
	barrier(); \
	if (unlikely(preempt_count_dec_and_test())) { \
		__preempt_schedule(); \
		if (raw_cpu_read(rcu_data.rcu_urgent_qs) && \
		    !(preempt_count() & (PREEMPT_MASK | SOFTIRQ_MASK))) \
			rcu_all_qs(); \
	} \
} while (0)

And at that point, we should be able to drop the PREEMPT_MASK, not
that it makes any difference that I am aware of:

#define preempt_enable() \
do { \
	barrier(); \
	if (unlikely(preempt_count_dec_and_test())) { \
		__preempt_schedule(); \
		if (raw_cpu_read(rcu_data.rcu_urgent_qs) && \
		    !(preempt_count() & SOFTIRQ_MASK)) \
			rcu_all_qs(); \
	} \
} while (0)

Except that we can migrate as soon as that preempt_count_dec_and_test()
returns.  And that rcu_all_qs() disables and re-enables preemption,
which will result in undesired recursion.  Sigh.
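
That recursion is easy to see in a toy user-space model of the variant
just above -- preempt_enable() is an ordinary function here rather than
the kernel macro, rcu_all_qs() is reduced to its preempt_disable()/
preempt_enable() pair, and the depth guard exists only so the
demonstration terminates:

#include <stdio.h>

static unsigned int preempt_count;
static int urgent_qs = 1;       /* stand-in for rcu_data.rcu_urgent_qs */
static int depth;

static void rcu_all_qs_model(void);

/* Model of the proposed macro: report a QS once the count hits zero
 * while RCU has flagged urgency. */
static void preempt_enable_model(void)
{
        if (--preempt_count == 0 && urgent_qs)
                rcu_all_qs_model();
}

static void rcu_all_qs_model(void)
{
        if (++depth > 3) {              /* guard so the demo terminates */
                printf("recursed to depth %d, bailing\n", depth);
                depth--;
                return;
        }
        /* Clearing urgent_qs here would break the cycle. */
        preempt_count++;                /* preempt_disable() */
        /* ... register the quiescent state ... */
        preempt_enable_model();         /* re-enters rcu_all_qs_model() */
        depth--;
}

int main(void)
{
        preempt_count++;                /* preempt_disable() */
        preempt_enable_model();         /* triggers the recursion */
        return 0;
}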
So maybe something like this:

#define preempt_enable() \
do { \
	if (raw_cpu_read(rcu_data.rcu_urgent_qs) && \
	    !(preempt_count() & SOFTIRQ_MASK)) \
		rcu_all_qs(); \
	barrier(); \
	if (unlikely(preempt_count_dec_and_test())) { \
		__preempt_schedule(); \
	} \
} while (0)

Then rcu_all_qs() becomes something like this:

void rcu_all_qs(void)
{
	unsigned long flags;

	/* Load rcu_urgent_qs before other flags. */
	if (!smp_load_acquire(this_cpu_ptr(&rcu_data.rcu_urgent_qs)))
		return;
	this_cpu_write(rcu_data.rcu_urgent_qs, false);
	if (unlikely(raw_cpu_read(rcu_data.rcu_need_heavy_qs))) {
		local_irq_save(flags);
		rcu_momentary_dyntick_idle();
		local_irq_restore(flags);
	}
	rcu_qs();
}
EXPORT_SYMBOL_GPL(rcu_all_qs);

> Though I do wonder about the likelihood of hitting the case you
> describe, and maybe, instead of adding the check on every
> preempt_enable(), it might be better to instead force a context switch
> in rcu_flavor_sched_clock_irq() (as we do in the PREEMPT_RCU=y case).

Maybe.  But rcu_all_qs() is way lighter weight than a context switch.

							Thanx, Paul
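
For contrast, rerunning the earlier toy model with this final ordering
shows why it terminates: the quiescent-state check precedes the
decrement, and clearing the urgency flag before preemption is
re-enabled (the job of the smp_load_acquire()/this_cpu_write() pair in
the rcu_all_qs() sketched above) keeps the inner preempt_enable() from
recursing.  As before, these are user-space stand-ins, not the kernel
implementations:

#include <stdio.h>

static unsigned int preempt_count;
static int urgent_qs = 1;       /* stand-in for rcu_data.rcu_urgent_qs */

static void preempt_enable_model(void);

static void rcu_all_qs_model(void)
{
        if (!urgent_qs)
                return;
        urgent_qs = 0;                  /* clear before re-enabling */
        preempt_count++;                /* preempt_disable() */
        printf("quiescent state reported\n");
        preempt_enable_model();         /* no recursion: flag is clear */
}

/* Final ordering: the QS check runs before the decrement. */
static void preempt_enable_model(void)
{
        if (urgent_qs)
                rcu_all_qs_model();
        --preempt_count;
}

int main(void)
{
        preempt_count++;                /* preempt_disable() */
        preempt_enable_model();         /* exactly one QS, no recursion */
        return 0;
}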