Message-ID: <8ebd003c-f748-69b4-3a4f-fb80a3f39d36@I-love.SAKURA.ne.jp>
Date: Mon, 14 Feb 2022 22:36:57 +0900
Subject: Re: [syzbot] possible deadlock in worker_thread
From: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
To: Tejun Heo
Cc: Bart Van Assche, syzbot, jgg@ziepe.ca, linux-kernel@vger.kernel.org,
 linux-rdma@vger.kernel.org, syzkaller-bugs@googlegroups.com, Lai Jiangshan
References: <0000000000005975a605d7aef05e@google.com>
 <8ea57ddf-a09c-43f2-4285-4dfb908ad967@acm.org>
 <71d6f14e-46af-cc5a-bc70-af1cdc6de8d5@acm.org>
 <309c86b7-2a4c-1332-585f-7bcd59cfd762@I-love.SAKURA.ne.jp>
 <2959649d-cfbc-bdf2-02ac-053b8e7af030@I-love.SAKURA.ne.jp>
X-Mailing-List: linux-kernel@vger.kernel.org

On 2022/02/14 12:44, Tejun Heo wrote:
> Hello,
>
> On Mon, Feb 14, 2022 at 10:08:00AM +0900, Tetsuo Handa wrote:
>> +	destroy_workqueue(srp_tl_err_wq);
>>
>> Then, we can call WARN_ON() if e.g. flush_workqueue() is called on
>> system-wide workqueues.
>
> Yeah, this is the right thing to do. It makes no sense at all to call
> flush_workqueue() on the shared workqueues as the caller has no idea what
> it's gonna end up waiting for. It was on my todo list a long while ago but
> slipped through the crack. If anyone wanna take a stab at it (including
> scrubbing the existing users, of course), please be my guest.
>
> Thanks.
>

OK. Then, I propose the patch below.
If you are OK with this approach, I can keep this in my tree as a
linux-next-only experimental patch for one or two weeks, in order to see
whether someone complains.

From 95a3aa8d46c8479c95672305645247ba70312113 Mon Sep 17 00:00:00 2001
From: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
Date: Mon, 14 Feb 2022 22:28:21 +0900
Subject: [PATCH] workqueue: Warn on flushing system-wide workqueues

syzbot found a circular locking dependency which is caused by flushing
the system_long_wq WQ [1]. Tejun Heo commented that it makes no sense at
all to call flush_workqueue() on the shared workqueues, as the caller
has no idea what it is going to end up waiting for.

Although flush_scheduled_work() (which flushes the system_wq WQ) carries
the warning "Think twice before calling this function! It's very easy to
get into trouble if you don't take great care.", it would be too
difficult to guarantee that all users flush system-wide WQs safely.
Therefore, let's change direction: developers should use their own WQs
if flushing is inevitable.

To give developers time to update their modules, for now just emit a
warning message when flush_workqueue() is called on a system-wide WQ.
We will eventually convert this warning message into WARN_ON() and kill
flush_scheduled_work().
Link: https://syzkaller.appspot.com/bug?extid=831661966588c802aae9 [1]
Signed-off-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
---
 kernel/workqueue.c | 33 +++++++++++++++++++++++++++++++++
 1 file changed, 33 insertions(+)

diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 33f1106b4f99..5ef40b9a1842 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -2805,6 +2805,37 @@ static bool flush_workqueue_prep_pwqs(struct workqueue_struct *wq,
 	return wait;
 }
 
+static void warn_if_flushing_global_workqueue(struct workqueue_struct *wq)
+{
+#ifdef CONFIG_PROVE_LOCKING
+	static DEFINE_RATELIMIT_STATE(flush_warn_rs, 600 * HZ, 1);
+	const char *name;
+
+	if (wq == system_wq)
+		name = "system_wq";
+	else if (wq == system_highpri_wq)
+		name = "system_highpri_wq";
+	else if (wq == system_long_wq)
+		name = "system_long_wq";
+	else if (wq == system_unbound_wq)
+		name = "system_unbound_wq";
+	else if (wq == system_freezable_wq)
+		name = "system_freezable_wq";
+	else if (wq == system_power_efficient_wq)
+		name = "system_power_efficient_wq";
+	else if (wq == system_freezable_power_efficient_wq)
+		name = "system_freezable_power_efficient_wq";
+	else
+		return;
+	ratelimit_set_flags(&flush_warn_rs, RATELIMIT_MSG_ON_RELEASE);
+	if (!__ratelimit(&flush_warn_rs))
+		return;
+	pr_warn("Since system-wide WQ is shared, flushing system-wide WQ can introduce unexpected locking dependency. Please replace %s usage in your code with your local WQ.\n",
+		name);
+	dump_stack();
+#endif
+}
+
 /**
  * flush_workqueue - ensure that any scheduled work has run to completion.
  * @wq: workqueue to flush
@@ -2824,6 +2855,8 @@ void flush_workqueue(struct workqueue_struct *wq)
 	if (WARN_ON(!wq_online))
 		return;
 
+	warn_if_flushing_global_workqueue(wq);
+
 	lock_map_acquire(&wq->lockdep_map);
 	lock_map_release(&wq->lockdep_map);
-- 
2.32.0