Date: Tue, 30 May 2023 13:09:47 -0700
From: Andrew Morton
To: Marcelo Tosatti
Cc: Christoph Lameter, Aaron Tomlin, Frederic Weisbecker,
 linux-kernel@vger.kernel.org, linux-mm@kvack.org, Vlastimil Babka,
 Michal Hocko
Subject: Re: [PATCH 3/4] workqueue: add schedule_on_each_cpumask helper
Message-Id: <20230530130947.37edbab6b672bfce6f481295@linux-foundation.org>
In-Reply-To: <20230530145335.930262644@redhat.com>
References: <20230530145234.968927611@redhat.com> <20230530145335.930262644@redhat.com>

On Tue, 30 May 2023 11:52:37 -0300 Marcelo Tosatti wrote:

> Add a schedule_on_each_cpumask function, equivalent to
> schedule_on_each_cpu but accepting a cpumask to operate on.
>
> Signed-off-by: Marcelo Tosatti
>
> ---
>
> Index: linux-vmstat-remote/kernel/workqueue.c
> ===================================================================
> --- linux-vmstat-remote.orig/kernel/workqueue.c
> +++ linux-vmstat-remote/kernel/workqueue.c
> @@ -3455,6 +3455,56 @@ int schedule_on_each_cpu(work_func_t fun
>  	return 0;
>  }
>
> +
> +/**
> + * schedule_on_each_cpumask - execute a function synchronously on each
> + * CPU in "cpumask", for those which are online.
> + *
> + * @func: the function to call
> + * @cpumask: the CPUs on which to call the function
> + *
> + * schedule_on_each_cpumask() executes @func on each specified CPU that
> + * is online, using the system workqueue, and blocks until all such CPUs
> + * have completed.  schedule_on_each_cpumask() is very slow.
> + *
> + * Return:
> + * 0 on success, -errno on failure.
> + */
> +int schedule_on_each_cpumask(work_func_t func, cpumask_t *cpumask)
> +{
> +	int cpu;
> +	struct work_struct __percpu *works;
> +	cpumask_var_t effmask;
> +
> +	works = alloc_percpu(struct work_struct);
> +	if (!works)
> +		return -ENOMEM;
> +
> +	if (!alloc_cpumask_var(&effmask, GFP_KERNEL)) {
> +		free_percpu(works);
> +		return -ENOMEM;
> +	}
> +
> +	cpumask_and(effmask, cpumask, cpu_online_mask);
> +
> +	cpus_read_lock();
> +
> +	for_each_cpu(cpu, effmask) {

Should we check here that the cpu is still online?

> +		struct work_struct *work = per_cpu_ptr(works, cpu);
> +
> +		INIT_WORK(work, func);
> +		schedule_work_on(cpu, work);
> +	}
> +
> +	for_each_cpu(cpu, effmask)
> +		flush_work(per_cpu_ptr(works, cpu));
> +
> +	cpus_read_unlock();
> +	free_percpu(works);
> +	free_cpumask_var(effmask);
> +	return 0;
> +}
> +
>  /**
>   * execute_in_process_context - reliably execute the routine with user context
>   * @fn: the function to execute
>
> --- linux-vmstat-remote.orig/include/linux/workqueue.h
> +++ linux-vmstat-remote/include/linux/workqueue.h
> @@ -450,6 +450,7 @@ extern void __flush_workqueue(struct wor
>  extern void drain_workqueue(struct workqueue_struct *wq);
>
>  extern int schedule_on_each_cpu(work_func_t func);
> +extern int schedule_on_each_cpumask(work_func_t func, cpumask_t *cpumask);

May as well make schedule_on_each_cpu() call schedule_on_each_cpumask()?
Save a bit of text, and they're hardly performance-critical to that extent.
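On the "still online" question above, one way to close the window (a sketch
only, not part of the posted patch) is to compute effmask while already
holding the hotplug read lock, so no CPU in effmask can go offline before
the schedule_work_on() loop runs:

```
	/*
	 * Sketch (assumption, not the posted code): taking
	 * cpus_read_lock() *before* masking against cpu_online_mask
	 * means hotplug cannot offline a CPU between the mask
	 * computation and the loop below, so no per-iteration
	 * online check is needed.
	 */
	cpus_read_lock();
	cpumask_and(effmask, cpumask, cpu_online_mask);

	for_each_cpu(cpu, effmask) {
		struct work_struct *work = per_cpu_ptr(works, cpu);

		INIT_WORK(work, func);
		schedule_work_on(cpu, work);
	}
```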
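For the closing suggestion, a minimal sketch of the wrapper (hypothetical,
assuming the helper lands as posted and its parameter is made const):
because schedule_on_each_cpumask() already ANDs its argument with
cpu_online_mask, passing cpu_possible_mask preserves the existing
run-on-each-online-CPU semantics.

```
/* Sketch: schedule_on_each_cpu() delegating to the new helper. */
int schedule_on_each_cpu(work_func_t func)
{
	/* The helper restricts to online CPUs itself. */
	return schedule_on_each_cpumask(func, cpu_possible_mask);
}
```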