Date: Mon, 11 May 2015 10:31:50 -0400
From: Tejun Heo
To: Lai Jiangshan
Cc: linux-kernel@vger.kernel.org
Subject: Re: [PATCH 2/5] workqueue: merge the similar code
Message-ID: <20150511143150.GC11388@htj.duckdns.org>
References: <1431336953-3260-1-git-send-email-laijs@cn.fujitsu.com> <1431336953-3260-3-git-send-email-laijs@cn.fujitsu.com>
In-Reply-To: <1431336953-3260-3-git-send-email-laijs@cn.fujitsu.com>

Hello, Lai.

On Mon, May 11, 2015 at 05:35:49PM +0800, Lai Jiangshan wrote:
> @@ -3440,48 +3440,86 @@ static struct pool_workqueue *alloc_unbound_pwq(struct workqueue_struct *wq,
> }
>
> /**
> - * wq_calc_node_mask - calculate a wq_attrs' cpumask for the specified node
> - * @attrs: the wq_attrs of the default pwq of the target workqueue
> + * alloc_node_unbound_pwq - allocate a pwq for the specified node
> + * @wq: the target workqueue
> + * @dfl_pwq: the allocated default pwq
> + * @numa: NUMA affinity
> * @node: the target NUMA node
> - * @cpu_going_down: if >= 0, the CPU to consider as offline
> - * @cpumask: outarg, the resulting cpumask
> + * @cpu_off: if >= 0, the CPU to consider as offline

@cpu_off sounds like an offset into a cpu array or something. Is there a
reason to change the name?
> + * @use_dfl_when_fail: use the @dfl_pwq in case the normal allocation failed

@use_dfl_on_fail

> + *
> + * Allocate or reuse a pwq with the cpumask that @wq should use on @node.

I wonder whether a better name for the function would be something like
get_alloc_node_unbound_pwq().

> *
> - * Calculate the cpumask a workqueue with @attrs should use on @node. If
> - * @cpu_going_down is >= 0, that cpu is considered offline during
> - * calculation. The result is stored in @cpumask.
> + * If NUMA affinity is not enabled, @dfl_pwq is always used. @dfl_pwq
> + * was allocated with the effetive attrs saved in @dfl_pwq->pool->attrs.

I'm not sure we need the second sentence.

...

> - * Return: %true if the resulting @cpumask is different from @attrs->cpumask,
> - * %false if equal.
> + * Return: valid pwq, it might be @dfl_pwq under some conditions
> + * or might be the old pwq of the @node.
> + * NULL, when the normal allocation failed.

Maybe explain how @use_dfl_on_fail affects the result?

> */
> -static bool wq_calc_node_cpumask(const struct workqueue_attrs *attrs, int node,
> -				 int cpu_going_down, cpumask_t *cpumask)
> +static struct pool_workqueue *
> +alloc_node_unbound_pwq(struct workqueue_struct *wq,
> +		       struct pool_workqueue *dfl_pwq, bool numa,
> +		       int node, int cpu_off, bool use_dfl_when_fail)
> +
> {
> -	if (!wq_numa_enabled || attrs->no_numa)
> +	struct pool_workqueue *pwq = unbound_pwq_by_node(wq, node);
> +	struct workqueue_attrs *attrs = dfl_pwq->pool->attrs, *tmp_attrs;
> +	cpumask_t *cpumask;
> +
> +	lockdep_assert_held(&wq_pool_mutex);
> +
> +	if (!wq_numa_enabled || !numa)
> 		goto use_dfl;
> +
> +	/*
> +	 * We don't wanna alloc/free wq_attrs for each call. Let's use a
> +	 * preallocated one. It is protected by wq_pool_mutex.
> +	 * tmp_attrs->cpumask will be updated in next and tmp_attrs->no_numa

will be updated below

> +	 * is not used, so we just need to initialize tmp_attrs->nice;
> +	 */
> +	tmp_attrs = wq_update_unbound_numa_attrs_buf;
> +	tmp_attrs->nice = attrs->nice;
> +	cpumask = tmp_attrs->cpumask;
> +
> 	/* does @node have any online CPUs @attrs wants? */
> 	cpumask_and(cpumask, cpumask_of_node(node), attrs->cpumask);
> -	if (cpu_going_down >= 0)
> -		cpumask_clear_cpu(cpu_going_down, cpumask);
> -
> +	if (cpu_off >= 0)
> +		cpumask_clear_cpu(cpu_off, cpumask);
> 	if (cpumask_empty(cpumask))
> 		goto use_dfl;
>
> -	/* yeap, return possible CPUs in @node that @attrs wants */
> +	/* yeap, use possible CPUs in @node that @attrs wants */
> 	cpumask_and(cpumask, attrs->cpumask, wq_numa_possible_cpumask[node]);
> -	return !cpumask_equal(cpumask, attrs->cpumask);
> +	if (cpumask_equal(cpumask, attrs->cpumask))
> +		goto use_dfl;
> +	if (pwq && wqattrs_equal(tmp_attrs, pwq->pool->attrs))
> +		goto use_existed;

goto use_current;

Also, would it be difficult to put this in a separate patch? This is
mixing code refactoring with behavior change. Make both code paths
behave the same way first and then refactor?

> +
> +	/* create a new pwq */
> +	pwq = alloc_unbound_pwq(wq, tmp_attrs);
> +	if (!pwq && use_dfl_when_fail) {
> +		pr_warn("workqueue: allocation failed while updating NUMA affinity of \"%s\"\n",
> +			wq->name);
> +		goto use_dfl;

Does this need to be in this function? Can't we let the caller handle
the fallback instead?

Thanks.

-- 
tejun