Date: Fri, 2 Jun 2023 14:04:28 -0300
From: Marcelo Tosatti
To: Michal Hocko
Cc: Christoph Lameter, Aaron Tomlin, Frederic Weisbecker, Andrew Morton,
    linux-kernel@vger.kernel.org, linux-mm@kvack.org, Vlastimil Babka,
    Tejun Heo, Lai Jiangshan
Subject: Re: [PATCH 3/4] workqueue: add schedule_on_each_cpumask helper
References: <20230530145234.968927611@redhat.com>
 <20230530145335.930262644@redhat.com>
X-Mailing-List: linux-kernel@vger.kernel.org

On Fri, Jun 02, 2023 at 12:48:23PM +0200, Michal Hocko wrote:
> You should be CCing WQ maintainers on changes like this one (now added).
>
> On Tue 30-05-23 11:52:37, Marcelo Tosatti wrote:
> > Add a schedule_on_each_cpumask function, equivalent to
> > schedule_on_each_cpu but accepting a cpumask to operate on.
>
> IMHO it is preferable to add a new function along with its user so that
> the usecase is more clear.
>
> > Signed-off-by: Marcelo Tosatti
> >
> > ---
> >
> > Index: linux-vmstat-remote/kernel/workqueue.c
> > ===================================================================
> > --- linux-vmstat-remote.orig/kernel/workqueue.c
> > +++ linux-vmstat-remote/kernel/workqueue.c
> > @@ -3455,6 +3455,56 @@ int schedule_on_each_cpu(work_func_t fun
> >  	return 0;
> >  }
> >
> > +
> > +/**
> > + * schedule_on_each_cpumask - execute a function synchronously on each
> > + * CPU in "cpumask", for those which are online.
> > + *
> > + * @func: the function to call
> > + * @cpumask: the CPUs on which to call the function
> > + *
> > + * schedule_on_each_cpumask() executes @func on each specified CPU that is
> > + * online, using the system workqueue, and blocks until all such CPUs have
> > + * completed. schedule_on_each_cpumask() is very slow.
> > + *
> > + * Return:
> > + * 0 on success, -errno on failure.
> > + */
> > +int schedule_on_each_cpumask(work_func_t func, cpumask_t *cpumask)
> > +{
> > +	int cpu;
> > +	struct work_struct __percpu *works;
> > +	cpumask_var_t effmask;
> > +
> > +	works = alloc_percpu(struct work_struct);
> > +	if (!works)
> > +		return -ENOMEM;
> > +
> > +	if (!alloc_cpumask_var(&effmask, GFP_KERNEL)) {
> > +		free_percpu(works);
> > +		return -ENOMEM;
> > +	}
> > +
> > +	cpumask_and(effmask, cpumask, cpu_online_mask);
> > +
> > +	cpus_read_lock();
> > +
> > +	for_each_cpu(cpu, effmask) {
>
> Is the cpu_online_mask dance really necessary? Why cannot you simply do
> for_each_online_cpu here?

Are you suggesting to do:

	for_each_online_cpu(cpu) {
		if cpu is not in cpumask
			continue;
		...
	}

This does not seem efficient (see the sketch at the end of this mail).

> flush_work on unqueued work item should just return, no?

Apparently not:

commit 0e8d6a9336b487a1dd6f1991ff376e669d4c87c6
Author: Thomas Gleixner
Date:   Wed Apr 12 22:07:28 2017 +0200

    workqueue: Provide work_on_cpu_safe()

    work_on_cpu() is not protected against CPU hotplug. For code which
    requires to be either executed on an online CPU or to fail if the
    CPU is not available, the callsite would have to protect against
    CPU hotplug.

    Provide a function which does get/put_online_cpus() around the call
    to work_on_cpu() and fails the call with -ENODEV if the target CPU
    is not online.

> Also there is no synchronization with the cpu hotplug so cpu_online_mask
> can change under your feet, so this construct seems unsafe to me.

Yes, fixed by a patch sent in response to Andrew's comment.
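
To make the comparison concrete, the following is a minimal sketch (not part
of the posted patch) of the loop shape suggested above. It reuses the per-cpu
"works" allocation and the func/cpumask arguments from the patch, plus the
standard cpumask_test_cpu() helper; the flush side is omitted for brevity:

	/*
	 * Filter inside the walk over online CPUs instead of precomputing
	 * the "effmask" intersection.
	 */
	for_each_online_cpu(cpu) {
		struct work_struct *work = per_cpu_ptr(works, cpu);

		/* Skip online CPUs that are outside the requested mask. */
		if (!cpumask_test_cpu(cpu, cpumask))
			continue;

		INIT_WORK(work, func);
		schedule_work_on(cpu, work);
	}

The efficiency concern above is that this visits every online CPU even when
the requested cpumask is small, while the effmask variant only visits CPUs
in the intersection of cpumask and cpu_online_mask.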