Date: Mon, 2 Nov 2020 09:56:58 +0100
From: Pavel Machek
To: Andrea Righi
Cc: Boqun Feng, Peter Zijlstra, Ingo Molnar, Will Deacon, linux-kernel@vger.kernel.org
Subject: Re: lockdep: possible irq lock inversion dependency detected (trig->leddev_list_lock)
Message-ID: <20201102085658.GA5506@amd>
References: <20201101092614.GB3989@xps-13-7390> <20201031101740.GA1875@boqun-laptop.fareast.corp.microsoft.com> <20201102073328.GA9930@xps-13-7390>
In-Reply-To: <20201102073328.GA9930@xps-13-7390>

Hi!

> > > I'm getting the following lockdep splat (see below).
> > >
> > > Apparently this warning starts to be reported after applying:
> > >
> > >   e918188611f0 ("locking: More accurate annotations for read_lock()")
> > >
> > > It looks like a false positive to me, but it made me think a bit and
> > > IIUC there can still be a potential deadlock, even if the deadlock
> > > scenario is a bit different from what lockdep is showing.
> > >
> > > On the assumption that read-locks are recursive only in in_interrupt()
> > > context (as stated in e918188611f0), the following scenario can still
> > > happen:
> > >
> > >  CPU0                                    CPU1
> > >  ----                                    ----
> > >  read_lock(&trig->leddev_list_lock);
> > >                                          write_lock(&trig->leddev_list_lock);
> > >  kbd_bh()
> > >   -> read_lock(&trig->leddev_list_lock);
> > >
> > >  *** DEADLOCK ***
> > >
> > > The write-lock is waiting on CPU1 and the second read_lock() on CPU0
> > > would be blocked by the write-lock *waiter* on CPU1 => deadlock.
> > >
> >
> > No, this is not a deadlock: a write-lock waiter only blocks
> > *non-recursive* readers, and the read_lock() in kbd_bh() is called
> > in soft-irq context (where in_interrupt() returns true), so it's a
> > recursive reader and won't get blocked by the write-lock waiter.
>
> That's right, I was missing that in_interrupt() returns true also from
> soft-irq context.
>
> > > In that case we could prevent this deadlock condition using a workqueue
> > > to call kbd_propagate_led_state() instead of calling it directly from
> > > kbd_bh() (even if lockdep would still report the false positive).
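
[For concreteness, the deferral Andrea describes could be sketched roughly as below. This is a hedged kernel-style sketch, not a patch: the work item and callback names are invented, and the zero-argument form of kbd_propagate_led_state() is an assumption about the call site in drivers/tty/vt/keyboard.c.]

```c
#include <linux/workqueue.h>

/* Hypothetical work callback: runs in process context, so the LED
 * trigger's read_lock() is no longer taken from kbd_bh()'s softirq. */
static void kbd_led_work_fn(struct work_struct *work)
{
	kbd_propagate_led_state();	/* signature assumed */
}

static DECLARE_WORK(kbd_led_work, kbd_led_work_fn);

/* In kbd_bh(), the direct call would then be replaced with: */
/*	schedule_work(&kbd_led_work); */
```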
> > >
> >
> > The deadlock scenario reported by the following splat is:
> >
> >   CPU 0:                  CPU 1:                               CPU 2:
> >   -----                   -----                                -----
> >   led_trigger_event():
> >     read_lock(&trig->leddev_list_lock);
> >                           ata_hsm_qs_complete():
> >                             spin_lock_irqsave(&host->lock);
> >                                                                write_lock(&trig->leddev_list_lock);
> >                             ata_port_freeze():
> >                               ata_do_link_abort():
> >                                 ata_qc_complete():
> >                                   ledtrig_disk_activity():
> >                                     led_trigger_blink_oneshot():
> >                                       read_lock(&trig->leddev_list_lock);
> >                                       // ^ not in in_interrupt() context, so could get blocked by CPU 2
> >   ata_bmdma_interrupt():
> >     spin_lock_irqsave(&host->lock);
> >
> > , where CPU 0 is blocked by CPU 1 because of the spin_lock_irqsave() in
> > ata_bmdma_interrupt(), CPU 1 is blocked by CPU 2 because of the
> > read_lock() in led_trigger_blink_oneshot(), and CPU 2 is blocked by CPU 0
> > because of an arbitrary writer on &trig->leddev_list_lock.
> >
> > So I don't think it's a false positive, but I might be missing something
> > obvious, because I don't know what the code here actually does ;-)
>
> With the CPU 2 part it all makes sense now and lockdep was right. :)
>
> At this point I think we could just schedule a separate work item to do
> the led trigger and avoid calling it with host->lock held, and that
> should prevent the deadlock. I'll send a patch to do that.

Let's... not do that, unless we have no choice.

Would it help if leddev_list_lock used _irqsave() locking?

Best regards,
								Pavel

-- 
http://www.livejournal.com/~pavelmachek
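
[Pavel's `_irqsave()` idea, sketched for concreteness. This is a hedged kernel-style sketch, not a patch: the call sites named in the comments are assumptions, and disabling interrupts around every acquisition of leddev_list_lock is the point being illustrated — with interrupts off, ata_bmdma_interrupt() can no longer arrive on CPU 0 while the read lock is held, breaking the three-CPU cycle above.]

```c
unsigned long flags;

/* readers, e.g. led_trigger_event() (call site assumed): */
read_lock_irqsave(&trig->leddev_list_lock, flags);
/* ... walk the trigger's LED list ... */
read_unlock_irqrestore(&trig->leddev_list_lock, flags);

/* writers, e.g. led_trigger_set() (call site assumed): */
write_lock_irqsave(&trig->leddev_list_lock, flags);
/* ... modify the trigger's LED list ... */
write_unlock_irqrestore(&trig->leddev_list_lock, flags);
```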