Date: Fri, 28 Jun 2019 20:20:45 +0200
From: Sebastian Andrzej Siewior
To: Joel Fernandes
Cc: Paul E. McKenney, Steven Rostedt, rcu, LKML, Thomas Gleixner, Ingo Molnar,
    Peter Zijlstra, Josh Triplett, Mathieu Desnoyers, Lai Jiangshan
Subject: Re: [RFC] Deadlock via recursive wakeup via RCU with threadirqs
Message-ID: <20190628182045.ow4i5cncauk2jxjl@linutronix.de>
In-Reply-To: <20190628180727.GE240964@google.com>

On 2019-06-28 14:07:27 [-0400], Joel Fernandes wrote:
> On Fri, Jun 28, 2019 at 07:45:45PM +0200, Sebastian Andrzej Siewior wrote:
> > On 2019-06-28 10:30:11 [-0700], Paul E. McKenney wrote:
> > > > I believe the .blocked field remains set even though we are no longer in a
> > > > reader section, because of the deferred processing of the blocked lists
> > > > that you mentioned yesterday.
> > >
> > > That can indeed happen. However, in current -rcu, that would mean
> > > that .deferred_qs is also set, which (if in_irq()) would prevent
> > > the raise_softirq_irqsoff() from being invoked. Which was why I was
> > > asking the questions yesterday about whether in_irq() returns true within
> > > threaded interrupts. If it does, I need to find out whether there is some
> > > way of determining that rcu_read_unlock_special() is being called from
> > > a threaded interrupt, in order to suppress the call to raise_softirq()
> > > in that case.
> >
> > Please note that:
> > | void irq_exit(void)
> > | {
> > |…
> >   in_irq() returns true
> > | 	preempt_count_sub(HARDIRQ_OFFSET);
> >   in_irq() returns false
> > | 	if (!in_interrupt() && local_softirq_pending())
> > | 		invoke_softirq();
> >
> > -> invoke_softirq() does
> > | 	if (!force_irqthreads) {
> > | 		__do_softirq();
> > | 	} else {
> > | 		wakeup_softirqd();
> > | 	}
>
> In my traces, which I shared in the previous email, wakeup_softirqd() gets
> called.
>
> I thought the force_irqthreads value is decided at boot time, so I got a bit
> lost with your comment.

It does. I just wanted to point out that in this case rcu_read_unlock() /
rcu_read_unlock_special() won't see in_irq() return true.

Sebastian
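For reference, a minimal standalone userspace sketch of the preempt_count
bookkeeping described above. This is a simplified assumption of the flow, not
the kernel sources; only the HARDIRQ_OFFSET/HARDIRQ_MASK values match mainline.
It shows why in_irq() is already false by the time the softirq work runs,
whether inline or in ksoftirqd with threadirqs:

	/*
	 * Illustration only (not kernel code): in_irq() reflects just the
	 * HARDIRQ bits of preempt_count(), and irq_exit() drops
	 * HARDIRQ_OFFSET before any softirq processing happens.
	 */
	#include <stdio.h>

	#define HARDIRQ_OFFSET	(1u << 16)
	#define HARDIRQ_MASK	(0xfu << 16)

	static unsigned int preempt_count;	/* per-task in the real kernel */
	static int force_irqthreads = 1;	/* set once at boot via "threadirqs" */

	static unsigned int in_irq(void)
	{
		return preempt_count & HARDIRQ_MASK;	/* hardirq_count() */
	}

	static void irq_exit(void)
	{
		preempt_count -= HARDIRQ_OFFSET;	/* preempt_count_sub(HARDIRQ_OFFSET) */
		printf("irq_exit, after sub:  in_irq() = %u\n", in_irq());
		if (force_irqthreads)
			printf("wakeup_softirqd(): softirq runs in ksoftirqd, in_irq() stays 0\n");
		else
			printf("__do_softirq() runs inline, also with in_irq() == 0\n");
	}

	int main(void)
	{
		preempt_count += HARDIRQ_OFFSET;	/* irq_enter() */
		printf("in hard-irq handler:  in_irq() = %u\n", in_irq());
		irq_exit();
		return 0;
	}

With force_irqthreads set, the only work still done in hard-irq context is the
wakeup of ksoftirqd; the softirq itself, and any rcu_read_unlock() it performs,
runs in task context where in_irq() is false.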