Date: Wed, 20 Mar 2019 12:28:35 +0100
From: Sebastian Andrzej Siewior
To: Joel Fernandes
Cc: linux-kernel@vger.kernel.org, Josh Triplett, Steven Rostedt,
	Mathieu Desnoyers, Lai Jiangshan, tglx@linutronix.de,
	"Paul E. McKenney", Mike Galbraith, rcu@vger.kernel.org
Subject: Re: [PATCH] rcu: Allow to eliminate softirq processing from rcutree
Message-ID: <20190320112835.prq22vsto3ecckff@linutronix.de>
References: <20190315111130.4902-1-bigeasy@linutronix.de>
	<20190320002613.GA129907@google.com>
In-Reply-To: <20190320002613.GA129907@google.com>
X-Mailing-List: linux-kernel@vger.kernel.org

On 2019-03-19 20:26:13 [-0400], Joel Fernandes wrote:
> > @@ -2769,19 +2782,121 @@ static void invoke_rcu_callbacks(struct rcu_data *rdp)
> >  {
> >  	if (unlikely(!READ_ONCE(rcu_scheduler_fully_active)))
> >  		return;
> > -	if (likely(!rcu_state.boost)) {
> > -		rcu_do_batch(rdp);
> > -		return;
> > -	}
> > -	invoke_rcu_callbacks_kthread();
> > +	rcu_do_batch(rdp);
> 
> Looks like a nice change, but one question...
> 
> Consider the case where the rcunosoftirq boot option is not passed.
> 
> Before, if RCU_BOOST=y, callbacks would be invoked in the rcuc threads if
> possible, by those threads being woken up from within the softirq context
> (in invoke_rcu_callbacks).
> 
> Now, if RCU_BOOST=y, callbacks would only be invoked in softirq context
> and not in the threads at all, because rcu_softirq_enabled = true, so the
> path executed is:
> 
>   rcu_read_unlock_special() ->
>     raise_softirq_irqsoff() ->
>       rcu_process_callbacks_si() ->
>         rcu_process_callbacks() ->
>           invoke_rcu_callbacks() ->
>             rcu_do_batch()
> 
> This seems like a behavioral change to me: it makes the callbacks always
> execute from softirq context, and never from the threads, when boosting is
> configured. IMO, at the very least, such a behavioral change should be
> documented in the changelog.
> 
> One way to fix this, I think, could be: if boosting is enabled, set
> rcu_softirq_enabled to false by default so the callbacks are still executed
> in the rcuc threads.
> 
> Did I miss something? Sorry if I did, thanks!

So with all the swaps and reordering, we are talking about this change:

diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index 0a719f726e149..82810483bfc6c 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -2306,20 +2306,6 @@ static void rcu_core_si(struct softirq_action *h)
 	rcu_core();
 }
 
-/*
- * Schedule RCU callback invocation.  If the running implementation of RCU
- * does not support RCU priority boosting, just do a direct call, otherwise
- * wake up the per-CPU kernel kthread.  Note that because we are running
- * on the current CPU with softirqs disabled, the rcu_cpu_kthread_task
- * cannot disappear out from under us.
- */
-static void invoke_rcu_callbacks(struct rcu_data *rdp)
-{
-	if (unlikely(!READ_ONCE(rcu_scheduler_fully_active)))
-		return;
-	rcu_do_batch(rdp);
-}
-
 static void rcu_wake_cond(struct task_struct *t, int status)
 {
 	/*
@@ -2330,6 +2316,19 @@ static void rcu_wake_cond(struct task_struct *t, int status)
 		wake_up_process(t);
 }
 
+static void invoke_rcu_core_kthread(void)
+{
+	struct task_struct *t;
+	unsigned long flags;
+
+	local_irq_save(flags);
+	__this_cpu_write(rcu_data.rcu_cpu_has_work, 1);
+	t = __this_cpu_read(rcu_data.rcu_cpu_kthread_task);
+	if (t != NULL && t != current)
+		rcu_wake_cond(t, __this_cpu_read(rcu_data.rcu_cpu_kthread_status));
+	local_irq_restore(flags);
+}
+
 static bool rcu_softirq_enabled = true;
 
 static int __init rcunosoftirq_setup(char *str)
@@ -2339,26 +2338,33 @@ static int __init rcunosoftirq_setup(char *str)
 }
 __setup("rcunosoftirq", rcunosoftirq_setup);
 
+/*
+ * Schedule RCU callback invocation.  If the running implementation of RCU
+ * does not support RCU priority boosting, just do a direct call, otherwise
+ * wake up the per-CPU kernel kthread.  Note that because we are running
+ * on the current CPU with softirqs disabled, the rcu_cpu_kthread_task
+ * cannot disappear out from under us.
+ */
+static void invoke_rcu_callbacks(struct rcu_data *rdp)
+{
+	if (unlikely(!READ_ONCE(rcu_scheduler_fully_active)))
+		return;
+	if (rcu_state.boost || rcu_softirq_enabled)
+		invoke_rcu_core_kthread();
+	rcu_do_batch(rdp);
+}
+
 /*
  * Wake up this CPU's rcuc kthread to do RCU core processing.
  */
 static void invoke_rcu_core(void)
 {
-	unsigned long flags;
-	struct task_struct *t;
-
 	if (!cpu_online(smp_processor_id()))
 		return;
-	if (rcu_softirq_enabled) {
+	if (rcu_softirq_enabled)
 		raise_softirq(RCU_SOFTIRQ);
-	} else {
-		local_irq_save(flags);
-		__this_cpu_write(rcu_data.rcu_cpu_has_work, 1);
-		t = __this_cpu_read(rcu_data.rcu_cpu_kthread_task);
-		if (t != NULL && t != current)
-			rcu_wake_cond(t, __this_cpu_read(rcu_data.rcu_cpu_kthread_status));
-		local_irq_restore(flags);
-	}
+	else
+		invoke_rcu_core_kthread();
 }
 
 static void rcu_cpu_kthread_park(unsigned int cpu)
@@ -2426,7 +2432,8 @@ static int __init rcu_spawn_core_kthreads(void)
 		per_cpu(rcu_data.rcu_cpu_has_work, cpu) = 0;
 	if (!IS_ENABLED(CONFIG_RCU_BOOST) && !rcu_softirq_enabled)
 		return 0;
-	WARN_ONCE(smpboot_register_percpu_thread(&rcu_cpu_thread_spec), "%s: Could not start rcub kthread, OOM is now expected behavior\n", __func__);
+	WARN_ONCE(smpboot_register_percpu_thread(&rcu_cpu_thread_spec),
+		  "%s: Could not start rcuc kthread, OOM is now expected behavior\n", __func__);
 	return 0;
 }
 early_initcall(rcu_spawn_core_kthreads);
-- 
2.20.1

> - Joel

Sebastian