Date: Mon, 11 May 2015 10:55:02 -0400
From: Tejun Heo
To: Lai Jiangshan
Cc: linux-kernel@vger.kernel.org
Subject: Re: [PATCH 3/5] workqueue: ensure attrs-changing be sequentially
Message-ID: <20150511145502.GD11388@htj.duckdns.org>
References: <1431336953-3260-1-git-send-email-laijs@cn.fujitsu.com>
 <1431336953-3260-4-git-send-email-laijs@cn.fujitsu.com>
In-Reply-To: <1431336953-3260-4-git-send-email-laijs@cn.fujitsu.com>
User-Agent: Mutt/1.5.23 (2014-03-12)

Hey,

Prolly a better subject is "ensure attrs changes are properly
synchronized".

On Mon, May 11, 2015 at 05:35:50PM +0800, Lai Jiangshan wrote:
> Current modification to attrs via sysfs is not atomically.

atomic.

> Process A (change cpumask)      |  Process B (change numa affinity)
> wq_cpumask_store()              |
>   wq_sysfs_prep_attrs()         |
                                    ^ misaligned
>                                  |  apply_workqueue_attrs()
>   apply_workqueue_attrs()       |
>
> It results that the Process B's operation is totally reverted
> without any notification.

Yeah, right.

> This behavior is acceptable but it is sometimes unexpected.

I don't think this is an acceptable behavior.

> Sequential model on non-performance-sensitive operations is more popular
> and preferred.  So this patch moves wq_sysfs_prep_attrs() into the protection

You can just say the previous behavior is buggy.

> under wq_pool_mutex to ensure attrs-changing be sequentially.
>
> This patch is also a preparation patch for next patch which change
> the API of apply_workqueue_attrs().
...
> +static void apply_wqattrs_lock(void)
> +{
> +	/*
> +	 * CPUs should stay stable across pwq creations and installations.
> +	 * Pin CPUs, determine the target cpumask for each node and create
> +	 * pwqs accordingly.
> +	 */
> +	get_online_cpus();
> +	mutex_lock(&wq_pool_mutex);
> +}
> +
> +static void apply_wqattrs_unlock(void)
> +{
> +	mutex_unlock(&wq_pool_mutex);
> +	put_online_cpus();
> +}

Separate out refactoring and extending locking coverage?

Thanks.

-- 
tejun
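
For context, here is a minimal sketch of how a sysfs store path could use
the apply_wqattrs_lock()/unlock() helpers from the quoted hunk, so that
preparing the attrs and applying them happen under the same wq_pool_mutex
critical section and a concurrent writer can no longer silently revert the
change.  The store function name and exact signature below are illustrative
only, and it assumes apply_workqueue_attrs() can be called with wq_pool_mutex
already held, which is what the follow-up API change in this series is about;
wq_sysfs_prep_attrs(), cpumask_parse() and free_workqueue_attrs() are the
existing helpers referenced in the patch.

	/* Illustrative sketch -- not the actual kernel code. */
	static ssize_t example_cpumask_store(struct workqueue_struct *wq,
					     const char *buf, size_t count)
	{
		struct workqueue_attrs *attrs;
		int ret = -ENOMEM;

		/* pin CPUs and take wq_pool_mutex for the whole update */
		apply_wqattrs_lock();

		/* snapshot the current attrs while holding the lock */
		attrs = wq_sysfs_prep_attrs(wq);
		if (!attrs)
			goto out_unlock;

		ret = cpumask_parse(buf, attrs->cpumask);
		if (!ret)
			/* apply while still under the lock */
			ret = apply_workqueue_attrs(wq, attrs);

	out_unlock:
		apply_wqattrs_unlock();
		free_workqueue_attrs(attrs);
		return ret ?: count;
	}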