Date: Tue, 19 Dec 2023 07:29:23 -0800
From: "Paul E. McKenney"
To: Frederic Weisbecker
Cc: LKML, Boqun Feng, Joel Fernandes, Neeraj Upadhyay, Uladzislau Rezki,
    Zqiang, rcu, Thomas Gleixner, Peter Zijlstra
Subject: Re: [PATCH 2/3] rcu: Defer RCU kthreads wakeup when CPU is dying
Reply-To: paulmck@kernel.org
References: <20231218231916.11719-1-frederic@kernel.org>
 <20231218231916.11719-3-frederic@kernel.org>
In-Reply-To: <20231218231916.11719-3-frederic@kernel.org>

On Tue, Dec 19, 2023 at 12:19:15AM +0100, Frederic Weisbecker wrote:
> When the CPU goes idle for the last time during the CPU down hotplug
> process, RCU reports a final quiescent state for the current CPU. If
> this quiescent state propagates up to the top, some tasks may then be
> woken up to complete the grace period: the main grace period kthread
> and/or the expedited main workqueue (or kworker).
>
> If those kthreads have a SCHED_FIFO policy, the wake up can indirectly
> arm the RT bandwidth timer on the local offline CPU. Since this happens
> after hrtimers have been migrated at the CPUHP_AP_HRTIMERS_DYING stage,
> the timer gets ignored. Therefore, if the RCU kthreads are waiting for
> RT bandwidth to be available, they may never actually be scheduled.
>
> This triggers TREE03 rcutorture hangs:
>
> rcu: INFO: rcu_preempt self-detected stall on CPU
> rcu: 4-...!: (1 GPs behind) idle=9874/1/0x4000000000000000 softirq=0/0 fqs=20 rcuc=21071 jiffies(starved)
> rcu: (t=21035 jiffies g=938281 q=40787 ncpus=6)
> rcu: rcu_preempt kthread starved for 20964 jiffies! g938281 f0x0 RCU_GP_WAIT_FQS(5) ->state=0x0 ->cpu=0
> rcu: Unless rcu_preempt kthread gets sufficient CPU time, OOM is now expected behavior.
> rcu: RCU grace-period kthread stack dump:
> task:rcu_preempt state:R running task stack:14896 pid:14 tgid:14 ppid:2 flags:0x00004000
> Call Trace:
>  __schedule+0x2eb/0xa80
>  schedule+0x1f/0x90
>  schedule_timeout+0x163/0x270
>  ? __pfx_process_timeout+0x10/0x10
>  rcu_gp_fqs_loop+0x37c/0x5b0
>  ? __pfx_rcu_gp_kthread+0x10/0x10
>  rcu_gp_kthread+0x17c/0x200
>  kthread+0xde/0x110
>  ? __pfx_kthread+0x10/0x10
>  ret_from_fork+0x2b/0x40
>  ? __pfx_kthread+0x10/0x10
>  ret_from_fork_asm+0x1b/0x30
>
> The situation can't be solved by just unpinning the timer. The hrtimer
> infrastructure and the nohz heuristics involved in finding the best
> remote target for an unpinned timer would then also need to handle
> enqueues from an offline CPU in the most horrendous way.
>
> So fix this on the RCU side instead and defer the wake up to an online
> CPU if it's too late for the local one.

One question below...

> Reported-by: Paul E. McKenney
> Fixes: 5c0930ccaad5 ("hrtimers: Push pending hrtimers away from outgoing CPU earlier")
> Signed-off-by: Frederic Weisbecker
> ---
>  kernel/rcu/tree.c     | 34 +++++++++++++++++++++++++++++++++-
>  kernel/rcu/tree_exp.h |  3 +--
>  2 files changed, 34 insertions(+), 3 deletions(-)
>
> diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
> index 3ac3c846105f..157f3ca2a9b5 100644
> --- a/kernel/rcu/tree.c
> +++ b/kernel/rcu/tree.c
> @@ -1013,6 +1013,38 @@ static bool rcu_future_gp_cleanup(struct rcu_node *rnp)
>  	return needmore;
>  }
>
> +static void swake_up_one_online_ipi(void *arg)
> +{
> +	struct swait_queue_head *wqh = arg;
> +
> +	swake_up_one(wqh);
> +}
> +
> +static void swake_up_one_online(struct swait_queue_head *wqh)
> +{
> +	int cpu = get_cpu();

This works because get_cpu() is currently preempt_disable().  If there
are plans to make get_cpu() be some sort of read lock, we might deadlock
when synchronize_rcu() is invoked from a CPU-hotplug notifier, correct?
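For reference, my understanding is that the current definitions amount to
roughly the following (paraphrasing include/linux/smp.h, so treat this as
an illustration rather than a quote), which is why the caller cannot
migrate between the cpu_is_offline() check below and put_cpu():

	/*
	 * Illustration only: get_cpu() pins the caller to the current CPU
	 * by disabling preemption, and put_cpu() re-enables preemption.
	 */
	#define get_cpu()	({ preempt_disable(); smp_processor_id(); })
	#define put_cpu()	preempt_enable()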
Might this be worth a comment somewhere at some point?

							Thanx, Paul

> +
> +	/*
> +	 * If called from rcutree_report_cpu_starting(), wake up
> +	 * is dangerous that late in the CPU-down hotplug process. The
> +	 * scheduler might queue an ignored hrtimer. Defer the wake up
> +	 * to an online CPU instead.
> +	 */
> +	if (unlikely(cpu_is_offline(cpu))) {
> +		int target;
> +
> +		target = cpumask_any_and(housekeeping_cpumask(HK_TYPE_RCU),
> +					 cpu_online_mask);
> +
> +		smp_call_function_single(target, swake_up_one_online_ipi,
> +					 wqh, 0);
> +		put_cpu();
> +	} else {
> +		put_cpu();
> +		swake_up_one(wqh);
> +	}
> +}
> +
>  /*
>   * Awaken the grace-period kthread. Don't do a self-awaken (unless in an
>   * interrupt or softirq handler, in which case we just might immediately
> @@ -1037,7 +1069,7 @@ static void rcu_gp_kthread_wake(void)
>  		return;
>  	WRITE_ONCE(rcu_state.gp_wake_time, jiffies);
>  	WRITE_ONCE(rcu_state.gp_wake_seq, READ_ONCE(rcu_state.gp_seq));
> -	swake_up_one(&rcu_state.gp_wq);
> +	swake_up_one_online(&rcu_state.gp_wq);
>  }
>
>  /*
> diff --git a/kernel/rcu/tree_exp.h b/kernel/rcu/tree_exp.h
> index 6d7cea5d591f..2ac440bc7e10 100644
> --- a/kernel/rcu/tree_exp.h
> +++ b/kernel/rcu/tree_exp.h
> @@ -173,7 +173,6 @@ static bool sync_rcu_exp_done_unlocked(struct rcu_node *rnp)
>  	return ret;
>  }
>
> -
>  /*
>   * Report the exit from RCU read-side critical section for the last task
>   * that queued itself during or before the current expedited preemptible-RCU
> @@ -201,7 +200,7 @@ static void __rcu_report_exp_rnp(struct rcu_node *rnp,
>  	raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
>  	if (wake) {
>  		smp_mb(); /* EGP done before wake_up(). */
> -		swake_up_one(&rcu_state.expedited_wq);
> +		swake_up_one_online(&rcu_state.expedited_wq);
>  	}
>  	break;
>  }
> --
> 2.42.1
>
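And for anyone following along, the other half of this handshake is the
grace-period kthread sleeping on rcu_state.gp_wq.  Roughly the following,
paraphrased from rcu_gp_fqs_loop() rather than quoted, is the wait that
the (possibly deferred) swake_up_one() ends; the IPI handler is a fine
place for that wakeup because simple wait queues only take a raw spinlock
with interrupts disabled:

	/*
	 * Sketch of the waiter side, paraphrased from rcu_gp_fqs_loop():
	 * the GP kthread sleeps on rcu_state.gp_wq until either the FQS
	 * timeout expires or rcu_gp_fqs_check_wake() observes a wakeup
	 * request, which is what swake_up_one_online() delivers.
	 */
	(void)swait_event_idle_timeout_exclusive(rcu_state.gp_wq,
			rcu_gp_fqs_check_wake(&gf), j);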