From: Gilad Ben-Yossef
To: linux-kernel@vger.kernel.org
Cc: Gilad Ben-Yossef, Thomas Gleixner, Tejun Heo, John Stultz,
	Andrew Morton, KOSAKI Motohiro, Mel Gorman, Mike Frysinger,
	David Rientjes, Hugh Dickins, Minchan Kim, Konstantin Khlebnikov,
	Christoph Lameter, Chris Metcalf, Hakan Akkan, Max Krasnyansky,
	Frederic Weisbecker, linux-mm@kvack.org
Subject: [PATCH v1 2/6] workqueue: introduce schedule_on_each_cpu_mask
Date: Thu, 3 May 2012 17:55:58 +0300
Message-Id: <1336056962-10465-3-git-send-email-gilad@benyossef.com>
X-Mailer: git-send-email 1.7.0.4
In-Reply-To: <1336056962-10465-1-git-send-email-gilad@benyossef.com>
References: <1336056962-10465-1-git-send-email-gilad@benyossef.com>

Introduce a schedule_on_each_cpu_mask() function that schedules a work
item on each online CPU included in the mask provided, then re-implement
schedule_on_each_cpu() on top of the new function.

This function should be preferred over schedule_on_each_cpu() whenever
some of the CPUs, especially on a big multi-core system, might not have
actual work to perform, in order to save needless wakeups and schedules.
Signed-off-by: Gilad Ben-Yossef
CC: Thomas Gleixner
CC: Tejun Heo
CC: John Stultz
CC: Andrew Morton
CC: KOSAKI Motohiro
CC: Mel Gorman
CC: Mike Frysinger
CC: David Rientjes
CC: Hugh Dickins
CC: Minchan Kim
CC: Konstantin Khlebnikov
CC: Christoph Lameter
CC: Chris Metcalf
CC: Hakan Akkan
CC: Max Krasnyansky
CC: Frederic Weisbecker
CC: linux-kernel@vger.kernel.org
CC: linux-mm@kvack.org
---
 include/linux/workqueue.h |    2 ++
 kernel/workqueue.c        |   36 ++++++++++++++++++++++++++++--------
 2 files changed, 30 insertions(+), 8 deletions(-)

diff --git a/include/linux/workqueue.h b/include/linux/workqueue.h
index af15545..20da95a 100644
--- a/include/linux/workqueue.h
+++ b/include/linux/workqueue.h
@@ -383,6 +383,8 @@ extern int schedule_delayed_work(struct delayed_work *work, unsigned long delay)
 extern int schedule_delayed_work_on(int cpu,
			struct delayed_work *work, unsigned long delay);
 extern int schedule_on_each_cpu(work_func_t func);
+extern int schedule_on_each_cpu_mask(work_func_t func,
+			const struct cpumask *mask);
 extern int keventd_up(void);
 
 int execute_in_process_context(work_func_t fn, struct execute_work *);
 
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 5abf42f..1c9782b 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -2787,43 +2787,63 @@ int schedule_delayed_work_on(int cpu,
 EXPORT_SYMBOL(schedule_delayed_work_on);
 
 /**
- * schedule_on_each_cpu - execute a function synchronously on each online CPU
+ * schedule_on_each_cpu_mask - execute a function synchronously on each
+ * online CPU which is specified in the supplied cpumask
  * @func: the function to call
+ * @mask: the cpu mask
  *
- * schedule_on_each_cpu() executes @func on each online CPU using the
- * system workqueue and blocks until all CPUs have completed.
- * schedule_on_each_cpu() is very slow.
+ * schedule_on_each_cpu_mask() executes @func on each online CPU which
+ * is part of the @mask using the system workqueue and blocks until
+ * all CPUs have completed.
+ * schedule_on_each_cpu_mask() is very slow.
  *
  * RETURNS:
  * 0 on success, -errno on failure.
  */
-int schedule_on_each_cpu(work_func_t func)
+int schedule_on_each_cpu_mask(work_func_t func, const struct cpumask *mask)
 {
 	int cpu;
 	struct work_struct __percpu *works;
 
 	works = alloc_percpu(struct work_struct);
-	if (!works)
+	if (unlikely(!works))
 		return -ENOMEM;
 
 	get_online_cpus();
 
-	for_each_online_cpu(cpu) {
+	for_each_cpu_and(cpu, mask, cpu_online_mask) {
 		struct work_struct *work = per_cpu_ptr(works, cpu);
 
 		INIT_WORK(work, func);
 		schedule_work_on(cpu, work);
 	}
 
-	for_each_online_cpu(cpu)
+	for_each_cpu_and(cpu, mask, cpu_online_mask)
 		flush_work(per_cpu_ptr(works, cpu));
 
 	put_online_cpus();
 	free_percpu(works);
+
 	return 0;
 }
 
 /**
+ * schedule_on_each_cpu - execute a function synchronously on each online CPU
+ * @func: the function to call
+ *
+ * schedule_on_each_cpu() executes @func on each online CPU using the
+ * system workqueue and blocks until all CPUs have completed.
+ * schedule_on_each_cpu() is very slow.
+ *
+ * RETURNS:
+ * 0 on success, -errno on failure.
+ */
+int schedule_on_each_cpu(work_func_t func)
+{
+	return schedule_on_each_cpu_mask(func, cpu_online_mask);
+}
+
+/**
  * flush_scheduled_work - ensure that any scheduled work has run to completion.
  *
  * Forces execution of the kernel-global workqueue and blocks until its
-- 
1.7.0.4