Date: Tue, 5 May 2020 15:02:24 -0700
From: "Paul E. McKenney"
To: Thomas Gleixner
Cc: LKML, x86@kernel.org, Andy Lutomirski, Alexandre Chartre,
    Frederic Weisbecker, Paolo Bonzini, Sean Christopherson,
    Masami Hiramatsu, Petr Mladek, Steven Rostedt, Joel Fernandes,
    Boris Ostrovsky, Juergen Gross, Brian Gerst, Mathieu Desnoyers,
    Josh Poimboeuf, Will Deacon
Subject: Re: [patch V4 part 3 11/29] rcu: Provide rcu_irq_exit_preempt()
Message-ID: <20200505220224.GT2869@paulmck-ThinkPad-P72>
Reply-To: paulmck@kernel.org
References: <20200505134354.774943181@linutronix.de>
 <20200505134904.364456424@linutronix.de>
In-Reply-To: <20200505134904.364456424@linutronix.de>

On Tue, May 05, 2020 at 03:44:05PM +0200, Thomas Gleixner wrote:
> Interrupts and exceptions invoke rcu_irq_enter() on entry and need to
> invoke rcu_irq_exit() before they either return to the interrupted code or
> invoke the scheduler due to preemption.
>
> The general assumption is that RCU idle code has to have preemption
> disabled so that a return from interrupt cannot schedule. So the return
> from interrupt code invokes rcu_irq_exit() and preempt_schedule_irq().
>
> If there is any imbalance in the rcu_irq/nmi* invocations or RCU idle code
> had preemption enabled then this goes unnoticed until the CPU goes idle or
> some other RCU check is executed.
>
> Provide rcu_irq_exit_preempt() which can be invoked from the
> interrupt/exception return code in case that preemption is enabled. It
> invokes rcu_irq_exit() and contains a few sanity checks in case that
> CONFIG_PROVE_RCU is enabled to catch such issues directly.
>
> Signed-off-by: Thomas Gleixner
> Cc: "Paul E. McKenney"
> Cc: Joel Fernandes

The ->dynticks_nmi_nesting field is going away at some point, but there
is always "git merge".
;-)

Reviewed-by: Paul E. McKenney

> ---
>  include/linux/rcutiny.h |    1 +
>  include/linux/rcutree.h |    1 +
>  kernel/rcu/tree.c       |   21 +++++++++++++++++++++
>  3 files changed, 23 insertions(+)
>
> --- a/include/linux/rcutiny.h
> +++ b/include/linux/rcutiny.h
> @@ -71,6 +71,7 @@ static inline void rcu_irq_enter(void) {
>  static inline void rcu_irq_exit_irqson(void) { }
>  static inline void rcu_irq_enter_irqson(void) { }
>  static inline void rcu_irq_exit(void) { }
> +static inline void rcu_irq_exit_preempt(void) { }
>  static inline void exit_rcu(void) { }
>  static inline bool rcu_preempt_need_deferred_qs(struct task_struct *t)
>  {
> --- a/include/linux/rcutree.h
> +++ b/include/linux/rcutree.h
> @@ -46,6 +46,7 @@ void rcu_idle_enter(void);
>  void rcu_idle_exit(void);
>  void rcu_irq_enter(void);
>  void rcu_irq_exit(void);
> +void rcu_irq_exit_preempt(void);
>  void rcu_irq_enter_irqson(void);
>  void rcu_irq_exit_irqson(void);
>
> --- a/kernel/rcu/tree.c
> +++ b/kernel/rcu/tree.c
> @@ -706,6 +706,27 @@ void noinstr rcu_irq_exit(void)
>  	rcu_nmi_exit();
>  }
>
> +/**
> + * rcu_irq_exit_preempt - Inform RCU that current CPU is exiting irq
> + *			  towards in kernel preemption
> + *
> + * Same as rcu_irq_exit() but has a sanity check that scheduling is safe
> + * from RCU point of view. Invoked from return from interrupt before kernel
> + * preemption.
> + */
> +void rcu_irq_exit_preempt(void)
> +{
> +	lockdep_assert_irqs_disabled();
> +	rcu_nmi_exit();
> +
> +	RCU_LOCKDEP_WARN(__this_cpu_read(rcu_data.dynticks_nesting) <= 0,
> +			 "RCU dynticks_nesting counter underflow/zero!");
> +	RCU_LOCKDEP_WARN(__this_cpu_read(rcu_data.dynticks_nmi_nesting) <= 0,
> +			 "RCU dynticks_nmi_nesting counter underflow/zero!");
> +	RCU_LOCKDEP_WARN(rcu_dynticks_curr_cpu_in_eqs(),
> +			 "RCU in extended quiescent state!");
> +}
> +
>  /*
>   * Wrapper for rcu_irq_exit() where interrupts are enabled.
>   *