Message-ID: <77e32fdc-3f63-e124-588e-7d60dd66fc9a@I-love.SAKURA.ne.jp>
Date: Tue, 4 Apr 2023 15:13:05 +0900
From: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
To: Ingo Molnar, Peter Zijlstra, Ingo Molnar
Cc: Waiman Long, Will Deacon, Boqun Feng, Andrew Morton, LKML,
 Linus Torvalds, Hillf Danton
Subject: [PATCH v3 (repost)] locking/lockdep: add debug_show_all_lock_holders()

Currently, check_hung_uninterruptible_tasks() reports which locks are held
in the system, but not where the threads holding them are. Also,
lockdep_print_held_locks() does not report details of locks held by a
thread if that thread is in the TASK_RUNNING state. Several years of
experience of debugging without vmcore tells me that these limitations
have been a barrier for understanding what went wrong in syzbot's
"INFO: task hung in" reports.

I initially thought that these "INFO: task hung in" reports were caused by
over-stressing, but I have come to understand that over-stressing is
unlikely. I now consider that, when "INFO: task hung in" is reported, there
is likely a deadlock/livelock bug which lockdep cannot report as a deadlock.

A typical case is that thread-1 is waiting for something to happen (e.g. in
wait_event_*()) with a lock held. When thread-2 tries to take that lock
using e.g. mutex_lock(), check_hung_uninterruptible_tasks() reports that
thread-2 is hung and that thread-1 is holding the lock which thread-2 is
trying to take. But currently check_hung_uninterruptible_tasks() cannot
report the exact location of thread-1, which would give us an important
hint for understanding why thread-1 has been holding that lock for such a
long period.

When check_hung_uninterruptible_tasks() reports a thread waiting for a
lock, it is important to also report the backtraces of the threads which
already hold that lock. Therefore, allow check_hung_uninterruptible_tasks()
to report the exact location of every thread which is holding any lock.

debug_show_all_lock_holders() skips the current thread if the caller holds
no other lock, because reporting the RCU lock taken inside that function is
generally useless.

Signed-off-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
---
I couldn't follow Peter's question at
https://lkml.kernel.org/r/Y+oY3Xd43nNnkDSB@hirez.programming.kicks-ass.net .
I consider this patch helpful as-is, because not all TASK_RUNNING threads
are actually running on some CPU. If we show the backtraces of only those
TASK_RUNNING threads which are running on some CPU, we get no hints for the
TASK_RUNNING threads which are not running on any CPU. Therefore, showing
the backtraces of TASK_RUNNING threads which are not running on any CPU is
better than not showing them.

(A minimal illustration of the thread-1/thread-2 scenario follows after the
changelog below.)

Changes in v3:
  Unshare debug_show_all_lock_holders() and debug_show_all_locks(),
  suggested by Ingo Molnar.

Changes in v2:
  Share debug_show_all_lock_holders() and debug_show_all_locks(),
  suggested by Waiman Long.
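
For illustration only (not part of the patch), here is a minimal sketch of
the thread-1/thread-2 scenario described above. All names (demo_lock,
demo_waitq, demo_ready, demo_thread1, demo_thread2) are made up for this
example, and it assumes the default hung_task_timeout_secs of 120 seconds.

/* Illustrative sketch only -- hypothetical code, not part of this patch. */
#include <linux/kthread.h>
#include <linux/module.h>
#include <linux/mutex.h>
#include <linux/wait.h>

static DEFINE_MUTEX(demo_lock);			/* hypothetical lock */
static DECLARE_WAIT_QUEUE_HEAD(demo_waitq);	/* hypothetical wait queue */
static bool demo_ready;				/* never becomes true here */

static int demo_thread1(void *arg)
{
	mutex_lock(&demo_lock);
	/*
	 * Sleeps interruptibly with demo_lock held, so khungtaskd does not
	 * flag thread-1 itself, and lockdep_print_held_locks() alone does
	 * not tell us where thread-1 is stuck.
	 */
	wait_event_interruptible(demo_waitq, demo_ready);
	mutex_unlock(&demo_lock);
	return 0;
}

static int demo_thread2(void *arg)
{
	/*
	 * Blocks in TASK_UNINTERRUPTIBLE. After hung_task_timeout_secs,
	 * check_hung_uninterruptible_tasks() reports thread-2 as hung and
	 * that thread-1 holds demo_lock; with this patch it also prints
	 * thread-1's backtrace (inside wait_event_interruptible()).
	 */
	mutex_lock(&demo_lock);
	mutex_unlock(&demo_lock);
	return 0;
}

static int __init demo_init(void)
{
	kthread_run(demo_thread1, NULL, "demo_thread1");
	kthread_run(demo_thread2, NULL, "demo_thread2");
	return 0;
}
module_init(demo_init);
MODULE_LICENSE("GPL");
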
 include/linux/debug_locks.h |  5 +++++
 kernel/hung_task.c          |  2 +-
 kernel/locking/lockdep.c    | 28 ++++++++++++++++++++++++++++
 3 files changed, 34 insertions(+), 1 deletion(-)

diff --git a/include/linux/debug_locks.h b/include/linux/debug_locks.h
index dbb409d77d4f..0567d5ce5b4a 100644
--- a/include/linux/debug_locks.h
+++ b/include/linux/debug_locks.h
@@ -50,6 +50,7 @@ extern int debug_locks_off(void);
 #ifdef CONFIG_LOCKDEP
 extern void debug_show_all_locks(void);
 extern void debug_show_held_locks(struct task_struct *task);
+extern void debug_show_all_lock_holders(void);
 extern void debug_check_no_locks_freed(const void *from, unsigned long len);
 extern void debug_check_no_locks_held(void);
 #else
@@ -61,6 +62,10 @@ static inline void debug_show_held_locks(struct task_struct *task)
 {
 }
 
+static inline void debug_show_all_lock_holders(void)
+{
+}
+
 static inline void debug_check_no_locks_freed(const void *from,
 					      unsigned long len)
 {
diff --git a/kernel/hung_task.c b/kernel/hung_task.c
index 322813366c6c..12aa473b11bd 100644
--- a/kernel/hung_task.c
+++ b/kernel/hung_task.c
@@ -215,7 +215,7 @@ static void check_hung_uninterruptible_tasks(unsigned long timeout)
 unlock:
 	rcu_read_unlock();
 	if (hung_task_show_lock)
-		debug_show_all_locks();
+		debug_show_all_lock_holders();
 
 	if (hung_task_show_all_bt) {
 		hung_task_show_all_bt = false;
diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index 50d4863974e7..208292813776 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -32,6 +32,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -6512,6 +6513,33 @@ void debug_show_all_locks(void)
 	pr_warn("=============================================\n\n");
 }
 EXPORT_SYMBOL_GPL(debug_show_all_locks);
+
+void debug_show_all_lock_holders(void)
+{
+	struct task_struct *g, *p;
+
+	if (unlikely(!debug_locks)) {
+		pr_warn("INFO: lockdep is turned off.\n");
+		return;
+	}
+	pr_warn("\nShowing all threads with locks held in the system:\n");
+
+	rcu_read_lock();
+	for_each_process_thread(g, p) {
+		if (!p->lockdep_depth)
+			continue;
+		if (p == current && p->lockdep_depth == 1)
+			continue;
+		sched_show_task(p);
+		lockdep_print_held_locks(p);
+		touch_nmi_watchdog();
+		touch_all_softlockup_watchdogs();
+	}
+	rcu_read_unlock();
+
+	pr_warn("\n");
+	pr_warn("=============================================\n\n");
+}
 #endif
 
 /*
-- 
2.34.1