From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Johannes Berg, Tejun Heo, Sasha Levin
Subject: [PATCH 4.18 049/197] workqueue: skip lockdep wq dependency in cancel_work_sync()
Date: Thu, 13 Sep 2018 15:29:58 +0200
Message-Id: <20180913131843.501619084@linuxfoundation.org>
In-Reply-To: <20180913131841.568116777@linuxfoundation.org>
References: <20180913131841.568116777@linuxfoundation.org>
User-Agent: quilt/0.65
X-stable: review
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

4.18-stable review patch.  If anyone has any objections, please let me know.
------------------

From: Johannes Berg

[ Upstream commit d6e89786bed977f37f55ffca11e563f6d2b1e3b5 ]

In cancel_work_sync(), we can only have one of two cases, even
with an ordered workqueue:
 * the work isn't running, just cancelled before it started
 * the work is running, but then nothing else can be on the
   workqueue before it

Thus, we need to skip the lockdep workqueue dependency handling,
otherwise we get false positive reports from lockdep saying that
we have a potential deadlock when the workqueue also has other
work items with locking, e.g.

  work1_function() { mutex_lock(&mutex); ... }
  work2_function() { /* nothing */ }

  other_function() {
    queue_work(ordered_wq, &work1);
    queue_work(ordered_wq, &work2);
    mutex_lock(&mutex);
    cancel_work_sync(&work2);
  }

As described above, this isn't a problem, but lockdep will
currently flag it as if cancel_work_sync() was flush_work(),
which *is* a problem.

Signed-off-by: Johannes Berg
Signed-off-by: Tejun Heo
Signed-off-by: Sasha Levin
Signed-off-by: Greg Kroah-Hartman
---
 kernel/workqueue.c | 37 ++++++++++++++++++++++---------------
 1 file changed, 22 insertions(+), 15 deletions(-)

--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -2843,7 +2843,8 @@ reflush:
 }
 EXPORT_SYMBOL_GPL(drain_workqueue);
 
-static bool start_flush_work(struct work_struct *work, struct wq_barrier *barr)
+static bool start_flush_work(struct work_struct *work, struct wq_barrier *barr,
+			     bool from_cancel)
 {
 	struct worker *worker = NULL;
 	struct worker_pool *pool;
@@ -2885,7 +2886,8 @@ static bool start_flush_work(struct work
 	 * workqueues the deadlock happens when the rescuer stalls, blocking
 	 * forward progress.
 	 */
-	if (pwq->wq->saved_max_active == 1 || pwq->wq->rescuer) {
+	if (!from_cancel &&
+	    (pwq->wq->saved_max_active == 1 || pwq->wq->rescuer)) {
 		lock_map_acquire(&pwq->wq->lockdep_map);
 		lock_map_release(&pwq->wq->lockdep_map);
 	}
@@ -2896,6 +2898,22 @@ already_gone:
 	return false;
 }
 
+static bool __flush_work(struct work_struct *work, bool from_cancel)
+{
+	struct wq_barrier barr;
+
+	if (WARN_ON(!wq_online))
+		return false;
+
+	if (start_flush_work(work, &barr, from_cancel)) {
+		wait_for_completion(&barr.done);
+		destroy_work_on_stack(&barr.work);
+		return true;
+	} else {
+		return false;
+	}
+}
+
 /**
  * flush_work - wait for a work to finish executing the last queueing instance
  * @work: the work to flush
  *
@@ -2909,18 +2927,7 @@ already_gone:
  */
 bool flush_work(struct work_struct *work)
 {
-	struct wq_barrier barr;
-
-	if (WARN_ON(!wq_online))
-		return false;
-
-	if (start_flush_work(work, &barr)) {
-		wait_for_completion(&barr.done);
-		destroy_work_on_stack(&barr.work);
-		return true;
-	} else {
-		return false;
-	}
+	return __flush_work(work, false);
 }
 EXPORT_SYMBOL_GPL(flush_work);
 
@@ -2986,7 +2993,7 @@ static bool __cancel_work_timer(struct w
 	 * isn't executing.
 	 */
 	if (wq_online)
-		flush_work(work);
+		__flush_work(work, true);
 
 	clear_work_data(work);