Date: Mon, 11 May 2015 08:23:42 -0400
From: Tejun Heo
To: Lai Jiangshan
Cc: linux-kernel@vger.kernel.org
Subject: Re: [PATCH 1/5] workqueue: wq_pool_mutex protects the attrs-installation
Message-ID: <20150511122342.GB11388@htj.duckdns.org>
In-Reply-To: <1431336953-3260-2-git-send-email-laijs@cn.fujitsu.com>

On Mon, May 11, 2015 at 05:35:48PM +0800, Lai Jiangshan wrote:
> @@ -127,6 +127,12 @@ enum {
>   *
>   * PR: wq_pool_mutex protected for writes. Sched-RCU protected for reads.
>   *
> + * PW: wq_pool_mutex and wq->mutex protected for writes. Any one of them
> + *     protected for reads.

Either for reads.

> + *
> + * PWR: wq_pool_mutex and wq->mutex protected for writes. Any one of them
> + *      or sched-RCU for reads.

Ditto.

> + *
>   * WQ: wq->mutex protected.
>   *
>   * WR: wq->mutex protected for writes. Sched-RCU protected for reads.
...
> @@ -553,7 +565,7 @@ static int worker_pool_assign_id(struct worker_pool *pool)
>   * @wq: the target workqueue
>   * @node: the node ID
>   *
> - * This must be called either with pwq_lock held or sched RCU read locked.
> + * This must be called either with wq_pool_mutex held or sched RCU read locked.

The comment was outdated before too but the updated one isn't correct
either.

>   * If the pwq needs to be used beyond the locking in effect, the caller is
>   * responsible for guaranteeing that the pwq stays online.
>   *
> @@ -562,7 +574,7 @@ static int worker_pool_assign_id(struct worker_pool *pool)
>  static struct pool_workqueue *unbound_pwq_by_node(struct workqueue_struct *wq,
>  						   int node)
>  {
> -	assert_rcu_or_wq_mutex(wq);
> +	assert_rcu_or_wq_mutex_or_pool_mutex(wq);
>  	return rcu_dereference_raw(wq->numa_pwq_tbl[node]);
>  }
...
> @@ -3644,10 +3657,9 @@ int apply_workqueue_attrs(struct workqueue_struct *wq,
>  	 * pwqs accordingly.
>  	 */
>  	get_online_cpus();
> -
>  	mutex_lock(&wq_pool_mutex);
> +
>  	ctx = apply_wqattrs_prepare(wq, attrs);
> -	mutex_unlock(&wq_pool_mutex);
>
>  	/* the ctx has been prepared successfully, let's commit it */
>  	if (ctx) {
> @@ -3655,10 +3667,11 @@ int apply_workqueue_attrs(struct workqueue_struct *wq,
>  		ret = 0;
>  	}
>
> -	put_online_cpus();
> -
>  	apply_wqattrs_cleanup(ctx);

Why are we protecting cleanup?

> +	mutex_unlock(&wq_pool_mutex);
> +	put_online_cpus();
> +
>  	return ret;
>  }
>

Thanks.

-- 
tejun
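
An illustrative aside, not part of the patch and not kernel code: below is a
minimal, self-contained C sketch of the PW/PWR rule under discussion. Pthread
mutexes and per-thread flags stand in for wq_pool_mutex, wq->mutex, sched-RCU
and the lockdep checks; every identifier in the sketch is hypothetical. It
demonstrates the asymmetry the annotations describe: writers must hold both
mutexes, while readers may hold either one of them or sit in an RCU read-side
section.

/*
 * Toy model of the PW/PWR annotation, NOT kernel code.
 * Build with: cc -pthread pwr_sketch.c
 *
 * Writers must hold both the "pool" and the "wq" mutex; readers are
 * fine with either one, or with an (approximated) RCU read-side
 * section.  Per-thread flags play the role of lockdep here.
 */
#include <assert.h>
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

static pthread_mutex_t pool_mutex = PTHREAD_MUTEX_INITIALIZER; /* ~ wq_pool_mutex */

struct toy_wq {
	pthread_mutex_t mutex;   /* ~ wq->mutex */
	int numa_pwq_tbl[4];     /* stand-in for the protected table */
};

/* Per-thread bookkeeping, playing the role of lockdep_is_held()/RCU. */
static __thread bool holds_pool_mutex;
static __thread bool holds_wq_mutex;
static __thread bool in_rcu_read;

/* "PWR" read side: any one of the three is enough. */
static void assert_reader(void)
{
	assert(holds_pool_mutex || holds_wq_mutex || in_rcu_read);
}

/* "PW"/"PWR" write side: both mutexes are required. */
static void assert_writer(void)
{
	assert(holds_pool_mutex && holds_wq_mutex);
}

static int read_tbl(struct toy_wq *wq, int node)
{
	assert_reader();
	return wq->numa_pwq_tbl[node];
}

static void write_tbl(struct toy_wq *wq, int node, int val)
{
	assert_writer();
	wq->numa_pwq_tbl[node] = val;
}

int main(void)
{
	static struct toy_wq wq = { .mutex = PTHREAD_MUTEX_INITIALIZER };

	/* Writer: take both locks, outer "pool" lock first. */
	pthread_mutex_lock(&pool_mutex);   holds_pool_mutex = true;
	pthread_mutex_lock(&wq.mutex);     holds_wq_mutex = true;
	write_tbl(&wq, 0, 42);
	pthread_mutex_unlock(&wq.mutex);   holds_wq_mutex = false;
	pthread_mutex_unlock(&pool_mutex); holds_pool_mutex = false;

	/* Reader: the outer lock alone is already enough. */
	pthread_mutex_lock(&pool_mutex);   holds_pool_mutex = true;
	printf("node 0 -> %d\n", read_tbl(&wq, 0));
	pthread_mutex_unlock(&pool_mutex); holds_pool_mutex = false;

	return 0;
}

Dropping one of the two lock acquisitions in the writer path trips the assert
in write_tbl(); the read-side check loosely mirrors what the patch's
assert_rcu_or_wq_mutex_or_pool_mutex() does for unbound_pwq_by_node().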