Date: Tue, 13 Oct 2020 12:30:25 -0700
From: "Paul E. McKenney"
To: Peter Zijlstra
Cc: Boqun Feng, Qian Cai, Steven Rostedt, Ingo Molnar, x86,
	linux-kernel@vger.kernel.org, linux-tip-commits@vger.kernel.org,
	Linux Next Mailing List, Stephen Rothwell
Subject: Re: [tip: locking/core] lockdep: Fix lockdep recursion
Message-ID: <20201013193025.GA2424@paulmck-ThinkPad-P72>
Reply-To: paulmck@kernel.org
References: <160223032121.7002.1269740091547117869.tip-bot2@tip-bot2>
 <20201012031110.GA39540@debian-boqun.qqnc3lrjykvubdpftowmye0fmh.lx.internal.cloudapp.net>
 <20201012212812.GH3249@paulmck-ThinkPad-P72>
 <20201013103406.GY2628@hirez.programming.kicks-ass.net>
 <20201013104450.GQ2651@hirez.programming.kicks-ass.net>
 <20201013112544.GZ2628@hirez.programming.kicks-ass.net>
 <20201013162650.GN3249@paulmck-ThinkPad-P72>
In-Reply-To: <20201013162650.GN3249@paulmck-ThinkPad-P72>

On Tue, Oct 13, 2020 at 09:26:50AM -0700, Paul E. McKenney wrote:
> On Tue, Oct 13, 2020 at 01:25:44PM +0200, Peter Zijlstra wrote:
> > On Tue, Oct 13, 2020 at 12:44:50PM +0200, Peter Zijlstra wrote:
> > > On Tue, Oct 13, 2020 at 12:34:06PM +0200, Peter Zijlstra wrote:
> > > > On Mon, Oct 12, 2020 at 02:28:12PM -0700, Paul E. McKenney wrote:
> > > > > It is certainly an accident waiting to happen.  Would something like
> > > > > the following make sense?
> > > >
> > > > Sadly no.
> > > >
> > > > > ------------------------------------------------------------------------
> > > > >
> > > > > diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
> > > > > index bfd38f2..52a63bc 100644
> > > > > --- a/kernel/rcu/tree.c
> > > > > +++ b/kernel/rcu/tree.c
> > > > > @@ -4067,6 +4067,7 @@ void rcu_cpu_starting(unsigned int cpu)
> > > > >
> > > > >  	rnp = rdp->mynode;
> > > > >  	mask = rdp->grpmask;
> > > > > +	lockdep_off();
> > > > >  	raw_spin_lock_irqsave_rcu_node(rnp, flags);
> > > > >  	WRITE_ONCE(rnp->qsmaskinitnext, rnp->qsmaskinitnext | mask);
> > > > >  	newcpu = !(rnp->expmaskinitnext & mask);
> > > > > @@ -4086,6 +4087,7 @@ void rcu_cpu_starting(unsigned int cpu)
> > > > >  	} else {
> > > > >  		raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
> > > > >  	}
> > > > > +	lockdep_on();
> > > > >  	smp_mb(); /* Ensure RCU read-side usage follows above initialization. */
> > > > >  }
> > > >
> > > > This will just shut it up, but will not fix the actual problem of that
> > > > spin-lock ending up in trace_lock_acquire(), which relies on RCU, which
> > > > isn't looking.
> > > >
> > > > What we need here is to suppress tracing, not lockdep.  Let me consider.
> > >
> > > We appear to have a similar problem with rcu_report_dead(): its
> > > raw_spin_unlock()s can end up in trace_lock_release() while we just
> > > killed RCU.
> >
> > So we can deal with the explicit trace_*() calls like the below, but I
> > really don't like it much.  It also doesn't help with function tracing.
> > This is really early/late in the hotplug cycle and should be considered
> > entry; we shouldn't be tracing anything here.
> >
> > Paul, would it be possible to use a scheme similar to IRQ/NMI for
> > hotplug?  That seems to mostly rely on atomic ops, not locks.
>
> The rest of the rcu_node tree and the various grace-period/hotplug races
> make that question non-trivial.  I will look into it, but I have no
> reason for optimism.
>
> But there is only one way to find out...  ;-)

The aforementioned races get really ugly really fast, so I do not
believe that a lockless approach is a winning strategy here.

But why not use something like a sequence counter, adapted for local
on-CPU use?  That should quiet the diagnostics for the full time that
RCU needs its locks.

Untested patch below.  Thoughts?
							Thanx, Paul

------------------------------------------------------------------------

diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index 1d42909..5b06886 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -1152,13 +1152,15 @@ bool rcu_lockdep_current_cpu_online(void)
 	struct rcu_data *rdp;
 	struct rcu_node *rnp;
 	bool ret = false;
+	unsigned long seq;
 
 	if (in_nmi() || !rcu_scheduler_fully_active)
 		return true;
 	preempt_disable_notrace();
 	rdp = this_cpu_ptr(&rcu_data);
 	rnp = rdp->mynode;
-	if (rdp->grpmask & rcu_rnp_online_cpus(rnp))
+	seq = READ_ONCE(rnp->ofl_seq) & ~0x1;
+	if (rdp->grpmask & rcu_rnp_online_cpus(rnp) || seq != READ_ONCE(rnp->ofl_seq))
 		ret = true;
 	preempt_enable_notrace();
 	return ret;
@@ -4065,6 +4067,8 @@ void rcu_cpu_starting(unsigned int cpu)
 
 	rnp = rdp->mynode;
 	mask = rdp->grpmask;
+	WRITE_ONCE(rnp->ofl_seq, rnp->ofl_seq + 1);
+	WARN_ON_ONCE(!(rnp->ofl_seq & 0x1));
 	raw_spin_lock_irqsave_rcu_node(rnp, flags);
 	WRITE_ONCE(rnp->qsmaskinitnext, rnp->qsmaskinitnext | mask);
 	newcpu = !(rnp->expmaskinitnext & mask);
@@ -4084,6 +4088,8 @@ void rcu_cpu_starting(unsigned int cpu)
 	} else {
 		raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
 	}
+	WRITE_ONCE(rnp->ofl_seq, rnp->ofl_seq + 1);
+	WARN_ON_ONCE(rnp->ofl_seq & 0x1);
 	smp_mb(); /* Ensure RCU read-side usage follows above initialization. */
 }
 
@@ -4111,6 +4117,8 @@ void rcu_report_dead(unsigned int cpu)
 
 	/* Remove outgoing CPU from mask in the leaf rcu_node structure. */
 	mask = rdp->grpmask;
+	WRITE_ONCE(rnp->ofl_seq, rnp->ofl_seq + 1);
+	WARN_ON_ONCE(!(rnp->ofl_seq & 0x1));
 	raw_spin_lock(&rcu_state.ofl_lock);
 	raw_spin_lock_irqsave_rcu_node(rnp, flags); /* Enforce GP memory-order guarantee. */
 	rdp->rcu_ofl_gp_seq = READ_ONCE(rcu_state.gp_seq);
@@ -4123,6 +4131,8 @@ void rcu_report_dead(unsigned int cpu)
 	WRITE_ONCE(rnp->qsmaskinitnext, rnp->qsmaskinitnext & ~mask);
 	raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
 	raw_spin_unlock(&rcu_state.ofl_lock);
+	WRITE_ONCE(rnp->ofl_seq, rnp->ofl_seq + 1);
+	WARN_ON_ONCE(rnp->ofl_seq & 0x1);
 
 	rdp->cpu_started = false;
 }

diff --git a/kernel/rcu/tree.h b/kernel/rcu/tree.h
index 805c9eb..7d802b6 100644
--- a/kernel/rcu/tree.h
+++ b/kernel/rcu/tree.h
@@ -57,6 +57,7 @@ struct rcu_node {
 				/*  beginning of each grace period. */
 	unsigned long qsmaskinitnext;
 				/* Online CPUs for next grace period. */
+	unsigned long ofl_seq;	/* CPU-hotplug operation sequence count. */
 	unsigned long expmask;	/* CPUs or groups that need to check in */
 				/*  to allow the current expedited GP */
 				/*  to complete. */
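[Editor's note: the even/odd counter protocol the patch relies on can be
shown in isolation.  The following is a minimal standalone sketch, not
the kernel code: C11 atomics stand in for the kernel's READ_ONCE() and
WRITE_ONCE(), and the names hotplug_seq, hotplug_begin(), hotplug_end(),
and hotplug_in_progress() are invented for illustration.  The writer
bumps the counter to an odd value before taking its locks and back to an
even value afterward, so a reader that samples an odd count knows a
hotplug operation is in flight and treats the CPU as online rather than
raising a false positive.

#include <stdatomic.h>
#include <stdbool.h>

/* Even value: no hotplug operation in flight; odd value: one in progress. */
static atomic_ulong hotplug_seq;

/* Writer: call before taking the hotplug locks; the count becomes odd. */
static void hotplug_begin(void)
{
	atomic_fetch_add_explicit(&hotplug_seq, 1, memory_order_release);
}

/* Writer: call after releasing the hotplug locks; the count becomes even. */
static void hotplug_end(void)
{
	atomic_fetch_add_explicit(&hotplug_seq, 1, memory_order_release);
}

/* Reader: an odd count means a hotplug operation is in flight, so the
 * CPU must be assumed online even if the online bitmask says otherwise. */
static bool hotplug_in_progress(void)
{
	unsigned long seq = atomic_load_explicit(&hotplug_seq,
						 memory_order_acquire);

	return (seq & 0x1) != 0;
}

The patch expresses the same reader check as masking off the low bit and
re-reading ("seq = READ_ONCE(rnp->ofl_seq) & ~0x1" followed by comparing
against a fresh READ_ONCE(rnp->ofl_seq)), which is true exactly when the
low bit is set.  Because CPU-hotplug operations are serialized, the
kernel patch can get away with plain increments on the writer side
rather than atomic read-modify-write operations.]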