Date: Tue, 13 Oct 2020 09:15:56 -0700
From: "Paul E. McKenney" <paulmck@kernel.org>
To: Peter Zijlstra
Cc: Boqun Feng, Qian Cai, Steven Rostedt, Ingo Molnar, x86,
	linux-kernel@vger.kernel.org, linux-tip-commits@vger.kernel.org,
	Linux Next Mailing List, Stephen Rothwell
Subject: Re: [tip: locking/core] lockdep: Fix lockdep recursion
Message-ID: <20201013161556.GM3249@paulmck-ThinkPad-P72>
Reply-To: paulmck@kernel.org
References: <160223032121.7002.1269740091547117869.tip-bot2@tip-bot2>
 <20201012031110.GA39540@debian-boqun.qqnc3lrjykvubdpftowmye0fmh.lx.internal.cloudapp.net>
 <20201012212812.GH3249@paulmck-ThinkPad-P72>
 <20201013103406.GY2628@hirez.programming.kicks-ass.net>
 <20201013104450.GQ2651@hirez.programming.kicks-ass.net>
In-Reply-To: <20201013104450.GQ2651@hirez.programming.kicks-ass.net>

On Tue, Oct 13, 2020 at 12:44:50PM +0200, Peter Zijlstra wrote:
> On Tue, Oct 13, 2020 at 12:34:06PM +0200, Peter Zijlstra wrote:
> > On Mon, Oct 12, 2020 at 02:28:12PM -0700, Paul E. McKenney wrote:
> > > It is certainly an accident waiting to happen.  Would something like
> > > the following make sense?
> >
> > Sadly no.

Hey, I was hoping!  ;-)

> > > ------------------------------------------------------------------------
> > >
> > > diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
> > > index bfd38f2..52a63bc 100644
> > > --- a/kernel/rcu/tree.c
> > > +++ b/kernel/rcu/tree.c
> > > @@ -4067,6 +4067,7 @@ void rcu_cpu_starting(unsigned int cpu)
> > >
> > >  	rnp = rdp->mynode;
> > >  	mask = rdp->grpmask;
> > > +	lockdep_off();
> > >  	raw_spin_lock_irqsave_rcu_node(rnp, flags);
> > >  	WRITE_ONCE(rnp->qsmaskinitnext, rnp->qsmaskinitnext | mask);
> > >  	newcpu = !(rnp->expmaskinitnext & mask);
> > > @@ -4086,6 +4087,7 @@ void rcu_cpu_starting(unsigned int cpu)
> > >  	} else {
> > >  		raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
> > >  	}
> > > +	lockdep_on();
> > >  	smp_mb(); /* Ensure RCU read-side usage follows above initialization. */
> > >  }
> >
> > This will just shut it up, but will not fix the actual problem of that
> > spin-lock ending up in trace_lock_acquire() which relies on RCU which
> > isn't looking.
> >
> > What we need here is to suppress tracing, not lockdep. Let me consider.

OK, I certainly didn't think in those terms.
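
If I understand the shape of lock_acquire() correctly, the tracepoint
fires before the ->lockdep_recursion check that lockdep_off() feeds,
which would be why my patch silences lockdep without keeping the lock
operations out of tracing.  Roughly this, hand-simplified rather than
the verbatim source, with the long argument lists elided:

	void lock_acquire(struct lockdep_map *lock, ...)
	{
		/* The tracepoint fires unconditionally, and uses RCU: */
		trace_lock_acquire(lock, ...);

		/* lockdep_off() only bumps ->lockdep_recursion, which
		 * short-circuits the checking below, not the tracepoint
		 * above: */
		if (unlikely(current->lockdep_recursion))
			return;

		/* ... actual lockdep validation happens here ... */
	}

If that reading is right, no amount of lockdep_off()/lockdep_on() will
keep the tracepoint from touching RCU.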

> We appear to have a similar problem with rcu_report_dead(), its
> raw_spin_unlock()s can end up in trace_lock_release() while we just
> killed RCU.

In theory, rcu_report_dead() is just fine.  The reason is that a new
grace period that is ignoring the outgoing CPU cannot start until after:

1.	This CPU releases the leaf rcu_node ->lock -and-

2.	The grace-period kthread acquires this same lock.  Multiple times.

(Rough sketch of that handoff at the end of this email.)

In practice, too bad about those diagnostics!  :-(

So good catch!!!

							Thanx, Paul
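
P.S.  The promised sketch of the lock handoff.  This is hand-written
illustration, not the actual tree.c code: the rcu_node field and the
raw_spin_*_rcu_node() wrappers are the real names, but the locals are
assumed to be set up as in tree.c and the surrounding logic (expedited
grace periods and so on) is elided.

	/* Outgoing CPU, in the rcu_report_dead() path: */
	raw_spin_lock_irqsave_rcu_node(rnp, flags);
	/* Remove this CPU from the set new grace periods must wait on. */
	WRITE_ONCE(rnp->qsmaskinitnext, rnp->qsmaskinitnext & ~mask);
	raw_spin_unlock_irqrestore_rcu_node(rnp, flags);  /* Step 1: release. */

	/* Grace-period kthread, initializing the next grace period: */
	raw_spin_lock_irq_rcu_node(rnp);                  /* Step 2: acquire. */
	/* The step-1/step-2 unlock+lock handoff guarantees that this
	 * read sees the cleared bit, so the new grace period ignores
	 * the outgoing CPU only after that CPU is done with the lock,
	 * hence "in theory, just fine". */
	rnp->qsmaskinit = rnp->qsmaskinitnext;
	raw_spin_unlock_irq_rcu_node(rnp);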