Date: Thu, 19 Nov 2020 13:35:12 -0800
From: "Paul E. McKenney" <paulmck@kernel.org>
To: Marco Elver
Cc: Steven Rostedt, Anders Roxell, Andrew Morton, Alexander Potapenko,
	Dmitry Vyukov, Jann Horn, Mark Rutland,
	Linux Kernel Mailing List <linux-kernel@vger.kernel.org>,
	Linux-MM, kasan-dev, rcu@vger.kernel.org, Peter Zijlstra,
	Tejun Heo, Lai Jiangshan, linux-arm-kernel@lists.infradead.org
Subject: Re: linux-next: stall warnings and deadlock on Arm64 (was: [PATCH] kfence: Avoid stalling...)
Message-ID: <20201119213512.GB1437@paulmck-ThinkPad-P72>
Reply-To: paulmck@kernel.org
References: <20201113175754.GA6273@paulmck-ThinkPad-P72>
	<20201117105236.GA1964407@elver.google.com>
	<20201117182915.GM1437@paulmck-ThinkPad-P72>
	<20201118225621.GA1770130@elver.google.com>
	<20201118233841.GS1437@paulmck-ThinkPad-P72>
	<20201119125357.GA2084963@elver.google.com>
	<20201119151409.GU1437@paulmck-ThinkPad-P72>
	<20201119170259.GA2134472@elver.google.com>
	<20201119184854.GY1437@paulmck-ThinkPad-P72>
	<20201119193819.GA2601289@elver.google.com>
In-Reply-To: <20201119193819.GA2601289@elver.google.com>

On Thu, Nov 19, 2020 at 08:38:19PM +0100, Marco Elver wrote:
> On Thu, Nov 19, 2020 at 10:48AM -0800, Paul E. McKenney wrote:
> > On Thu, Nov 19, 2020 at 06:02:59PM +0100, Marco Elver wrote:

[ . . . ]

> > > I can try bisection again, or reverting some commits that might be
> > > suspicious? But we'd need some selection of suspicious commits.
> >
> > The report claims that one of the rcu_node ->lock fields is held
> > with interrupts enabled, which would indeed be bad.  Except that all
> > of the stack traces that it shows have these locks held within the
> > scheduling-clock interrupt handler.  Now with the "rcu: Don't invoke
> > try_invoke_on_locked_down_task() with irqs disabled" but without the
> > "sched/core: Allow try_invoke_on_locked_down_task() with irqs
> > disabled" commit, I understand why.  With both, I don't see how this
> > happens.
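
To spell out the hazard in question: if one of these locks really were
held with interrupts enabled, the scheduling-clock interrupt could fire
on that same CPU and try to take the same lock.  A sketch of that
self-deadlock, purely hypothetical and not an interleaving that the
traces actually show:

	raw_spin_lock_rcu_node(rnp);	/* bug: IRQs still enabled */
	  --> scheduling-clock interrupt fires on this CPU
	  --> handler attempts raw_spin_lock_irqsave_rcu_node(rnp, flags)
	  --> spins forever on a lock that this CPU already holds

Which is why every acquisition of these locks is supposed to use the
_irq/_irqsave variants or run in a context that already has interrupts
disabled.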
> I'm at a loss, but happy to keep bisecting and trying patches. I'm also
> considering:
>
>	Is it the compiler? Probably not, I tried 2 versions of GCC.
>
>	Can we trust lockdep to precisely know IRQ state? I know there's
>	been some recent work around this, but hopefully we're not
>	affected here?
>
>	Is QEMU buggy?
>
> > At this point, I am reduced to adding lockdep_assert_irqs_disabled()
> > calls at various points in that code, as shown in the patch below.
> >
> > At this point, I would guess that your first priority would be the
> > initial bug rather than this following issue, but you never know, this
> > might well help diagnose the initial bug.
>
> I don't mind either way. I'm worried deadlocking the whole system might
> be worse.

Here is another set of lockdep_assert_irqs_disabled() calls on the
off-chance that they actually find something.

							Thanx, Paul

------------------------------------------------------------------------

commit bcca5277df3f24db15e15ccc8b05ecf346d05169
Author: Paul E. McKenney <paulmck@kernel.org>
Date:   Thu Nov 19 13:30:33 2020 -0800

    rcu: Add lockdep_assert_irqs_disabled() to raw_spin_unlock_rcu_node() macros
    
    This commit adds a lockdep_assert_irqs_disabled() call to the helper
    macros that release the rcu_node structure's ->lock, namely to
    raw_spin_unlock_rcu_node(), raw_spin_unlock_irq_rcu_node() and
    raw_spin_unlock_irqrestore_rcu_node().  The point of this is to help
    track down a situation where lockdep appears to be insisting that
    interrupts are enabled while holding an rcu_node structure's ->lock.
    
    Link: https://lore.kernel.org/lkml/20201111133813.GA81547@elver.google.com/
    Signed-off-by: Paul E. McKenney <paulmck@kernel.org>

diff --git a/kernel/rcu/rcu.h b/kernel/rcu/rcu.h
index 59ef1ae..bf0827d 100644
--- a/kernel/rcu/rcu.h
+++ b/kernel/rcu/rcu.h
@@ -378,7 +378,11 @@ do {									\
 	smp_mb__after_unlock_lock();					\
 } while (0)
 
-#define raw_spin_unlock_rcu_node(p) raw_spin_unlock(&ACCESS_PRIVATE(p, lock))
+#define raw_spin_unlock_rcu_node(p)					\
+do {									\
+	lockdep_assert_irqs_disabled();					\
+	raw_spin_unlock(&ACCESS_PRIVATE(p, lock));			\
+} while (0)
 
 #define raw_spin_lock_irq_rcu_node(p)					\
 do {									\
@@ -387,7 +391,10 @@ do {									\
 } while (0)
 
 #define raw_spin_unlock_irq_rcu_node(p)					\
-	raw_spin_unlock_irq(&ACCESS_PRIVATE(p, lock))
+do {									\
+	lockdep_assert_irqs_disabled();					\
+	raw_spin_unlock_irq(&ACCESS_PRIVATE(p, lock));			\
+} while (0)
 
 #define raw_spin_lock_irqsave_rcu_node(p, flags)			\
 do {									\
@@ -396,7 +403,10 @@ do {									\
 } while (0)
 
 #define raw_spin_unlock_irqrestore_rcu_node(p, flags)			\
-	raw_spin_unlock_irqrestore(&ACCESS_PRIVATE(p, lock), flags)
+do {									\
+	lockdep_assert_irqs_disabled();					\
+	raw_spin_unlock_irqrestore(&ACCESS_PRIVATE(p, lock), flags);	\
+} while (0)
 
 #define raw_spin_trylock_rcu_node(p)					\
 ({									\
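
For completeness, a minimal sketch of how these assertions behave at a
call site (illustrative only; example_rnp_update() is a made-up caller,
not code from the patch).  With CONFIG_PROVE_LOCKING=y the assertion
complains if lockdep believes hardirqs are enabled at that point, and
with CONFIG_PROVE_LOCKING=n it compiles away, so the release paths are
unchanged for production builds:

	/* Hypothetical caller, for illustration only. */
	static void example_rnp_update(struct rcu_node *rnp)
	{
		unsigned long flags;

		raw_spin_lock_irqsave_rcu_node(rnp, flags);
		/* ... update rnp fields with interrupts disabled ... */

		/*
		 * If anything wrongly re-enabled interrupts while the
		 * lock was held, the unlock below now splats right here,
		 * at the point of release, rather than leaving lockdep
		 * to object later from some harder-to-interpret context.
		 */
		raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
	}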