Date: Sat, 25 Sep 2021 05:55:39 -0700
From: "Paul E. McKenney"
To: Waiman Long
Cc: peterz@infradead.org, mingo@redhat.com, will@kernel.org,
	boqun.feng@gmail.com, linux-kernel@vger.kernel.org, richard@nod.at
Subject: Re: Confusing lockdep splat
Message-ID: <20210925125539.GP880162@paulmck-ThinkPad-P17-Gen-1>
Reply-To: paulmck@kernel.org
References: <20210924210247.GA3877322@paulmck-ThinkPad-P17-Gen-1>
 <20210924224337.GL880162@paulmck-ThinkPad-P17-Gen-1>

On Fri, Sep 24, 2021 at 08:30:20PM -0400, Waiman Long wrote:
> On 9/24/21 6:43 PM, Paul E. McKenney wrote:
> > On Fri, Sep 24, 2021 at 05:41:17PM -0400, Waiman Long wrote:
> > > On 9/24/21 5:02 PM, Paul E. McKenney wrote:
> > > > Hello!
> > > >
> > > > I got the lockdep splat below from an SRCU-T rcutorture run, which
> > > > uses a !SMP !PREEMPT kernel.  This is a random event, and about
> > > > half the time it happens within an hour or two.  My reproducer (on
> > > > the current -rcu "dev" branch for a 16-CPU system) is:
> > > >
> > > > tools/testing/selftests/rcutorture/bin/kvm.sh --cpus 16 --configs "16*SRCU-T" --duration 7200
> > > >
> > > > My points of confusion are as follows:
> > > >
> > > > 1.	The locks involved in this deadlock cycle are irq-disabled
> > > >	raw spinlocks.  The claimed deadlock cycle uses two CPUs, but
> > > >	there is only one CPU.  There is no possibility of preemption
> > > >	or interrupts.  So how can this deadlock actually happen?
> > > >
> > > > 2.	If there were more than one CPU, then yes, there would be
> > > >	a deadlock.  The PI lock is acquired by the wakeup code after
> > > >	acquiring the workqueue lock, and rcutorture tests the new
> > > >	ability of the scheduler to hold the PI lock across
> > > >	rcu_read_unlock(), and while it is at it, across the rest of
> > > >	the unlock primitives.
> > > >
> > > >	But if there were more than one CPU, Tree SRCU would be used
> > > >	instead of Tiny SRCU, and there would be no wakeup invoked
> > > >	from srcu_read_unlock().
> > > >
> > > >	Given only one CPU, there is no way to complete the deadlock
> > > >	cycle.
> > > >
> > > > For now, I am working around this by preventing rcutorture from
> > > > holding the PI lock across Tiny srcu_read_unlock().
> > > >
> > > > Am I missing something subtle here?
> > >
> > > I would say that the lockdep code just doesn't have enough
> > > intelligence to identify that deadlock is not possible in this
> > > special case.  There are certainly false positives, and it can be
> > > hard to get rid of them.
> >
> > Would it make sense for lockdep to filter out reports involving more
> > than one CPU unless there is at least one sleeplock in the cycle?
> >
> > Of course, it gets more complicated when interrupts are involved...
>
> Actually, lockdep keeps track of all the possible lock orderings and
> puts out a splat whenever those orderings suggest that a circular
> deadlock is possible.  It doesn't keep track of whether a lock is
> sleepable or not.  Also, lockdep deals with lock classes, each of
> which can have many instances.  So the pi_lock's in different
> task_struct's are all treated as the same lock from lockdep's
> perspective.  We can't treat all the different instances separately
> or we would run out of lockdep table space very quickly.

We shouldn't need additional classes, just a marking on a given lock
class to say whether or not it is a sleeplock.

Either way, I now have a workaround within Tiny SRCU that appears to
handle this case, so it is not as urgent as it might be.

							Thanx, Paul