From: Byungchul Park <byungchul.park@lge.com>
To: tj@kernel.org, johannes.berg@intel.com, peterz@infradead.org, mingo@kernel.org
Cc: tglx@linutronix.de, oleg@redhat.com, david@fromorbit.com, linux-kernel@vger.kernel.org, kernel-team@lge.com
Subject: [PATCH] lockdep: Remove unnecessary acquisitions wrt workqueue flush
Date: Thu, 7 Sep 2017 09:33:16 +0900
Message-Id: <1504744396-4182-1-git-send-email-byungchul.park@lge.com>
X-Mailer: git-send-email 1.9.1

Workqueue added manual lockdep acquisitions to catch deadlock cases.
Now that crossrelease has been introduced, some of them are redundant,
since a crossrelease-enabled wait_for_completion() performs the same
checks. Remove them.
Signed-off-by: Byungchul Park <byungchul.park@lge.com>
---
 include/linux/workqueue.h |  4 ++--
 kernel/workqueue.c        | 16 +++++++---------
 2 files changed, 9 insertions(+), 11 deletions(-)

diff --git a/include/linux/workqueue.h b/include/linux/workqueue.h
index db6dc9d..1bef13e 100644
--- a/include/linux/workqueue.h
+++ b/include/linux/workqueue.h
@@ -218,7 +218,7 @@ static inline void destroy_delayed_work_on_stack(struct delayed_work *work) { }
 		__init_work((_work), _onstack);				\
 		(_work)->data = (atomic_long_t) WORK_DATA_INIT();	\
-		lockdep_init_map(&(_work)->lockdep_map, #_work, &__key, 0); \
+		lockdep_init_map(&(_work)->lockdep_map, "(complete)"#_work, &__key, 0); \
 		INIT_LIST_HEAD(&(_work)->entry);			\
 		(_work)->func = (_func);				\
 	} while (0)
@@ -398,7 +398,7 @@ enum {
 	static struct lock_class_key __key;			\
 	const char *__lock_name;				\
 								\
-	__lock_name = #fmt#args;				\
+	__lock_name = "(complete)"#fmt#args;			\
 								\
 	__alloc_workqueue_key((fmt), (flags), (max_active),	\
 			      &__key, __lock_name, ##args);	\
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index ab3c0dc..ea4e9b6 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -2503,8 +2503,8 @@ static void insert_wq_barrier(struct pool_workqueue *pwq,
 	 * build a dependency between wq_barrier::done and unrelated work.
 	 */
 	lockdep_init_map_crosslock((struct lockdep_map *)&barr->done.map,
-				   "(complete)wq_barr::done",
-				   target->lockdep_map.key, 1);
+				   target->lockdep_map.name,
+				   target->lockdep_map.key, 0);
 	__init_completion(&barr->done);
 	barr->task = current;
@@ -2611,16 +2611,17 @@ void flush_workqueue(struct workqueue_struct *wq)
 	struct wq_flusher this_flusher = {
 		.list = LIST_HEAD_INIT(this_flusher.list),
 		.flush_color = -1,
-		.done = COMPLETION_INITIALIZER_ONSTACK(this_flusher.done),
 	};
 	int next_color;
 
+	lockdep_init_map_crosslock((struct lockdep_map *)&this_flusher.done.map,
+				   wq->lockdep_map.name,
+				   wq->lockdep_map.key, 0);
+	__init_completion(&this_flusher.done);
+
 	if (WARN_ON(!wq_online))
 		return;
 
-	lock_map_acquire(&wq->lockdep_map);
-	lock_map_release(&wq->lockdep_map);
-
 	mutex_lock(&wq->mutex);
 
 	/*
@@ -2883,9 +2884,6 @@ bool flush_work(struct work_struct *work)
 	if (WARN_ON(!wq_online))
 		return false;
 
-	lock_map_acquire(&work->lockdep_map);
-	lock_map_release(&work->lockdep_map);
-
 	if (start_flush_work(work, &barr)) {
 		wait_for_completion(&barr.done);
 		destroy_work_on_stack(&barr.work);
-- 
1.9.1