From: Yuanhan Zhang
Date: Fri, 9 Jun 2023 14:23:23 +0800
Subject: Re: [PATCH] workqueue: introduce queue_work_cpumask to queue work onto a given cpumask
To: Tejun Heo
Cc: jiangshanlai@gmail.com, linux-kernel@vger.kernel.org, pmladek@suse.com, zyhtheonly@yeah.net, zwp10758@gmail.com, tiozhang@didiglobal.com, fuyuanli@didiglobal.com
// I resend this to put it into the same thread, sorry for the confusion.

> Can you elaborate the intended use cases?

Hi Tejun,

Thanks for your reply! Please let me use my own setup as an example to explain this.

In my scenario, I have 7 CPUs on my machine (it is actually UMA, so queue_work_node() or using an UNBOUND workqueue does not work for me), and for some unlucky reasons there are always some IRQs running on CPU 0 and CPU 6. Since I'm using arm64 with IRQs turned into FIFO threads, those threaded IRQs always run on CPUs 0 and 6 as well (because of affinity), and this cannot be fixed easily in the short term :(

So, to help async init achieve better boot times on my devices, I'd like to prevent work items from running on CPUs 0 and 6. With queue_work_cpumask(), that is simply done by:

...
cpumask_clear_cpu(0, cpumask); // actually I use sysfs to parse my cpumask
cpumask_clear_cpu(6, cpumask);
queue_work_cpumask(cpumask, my_wq, &my_work->work);
...

(A fuller, self-contained version of this snippet is appended after my signature.)

> The code seems duplicated too. Could you do a little refactoring and make
> they (queue_work_cpumask() & queue_work_node()) share some code?

Hi Lai,

Thanks for your advice!

I did the refactoring in PATCH v2; the changes are:
1. Removed the WARN_ONCE() from the previous code:
   1) queue_work_node() works well with UNBOUND workqueues since we have unbound_pwq_by_node() in __queue_work() to choose the right node.
   2) queue_work_cpumask() does not work with UNBOUND workqueues since the numa_pwq_tbl list is designed to be per NUMA node. I added a comment about this in the patch.
2. Removed the previous workqueue_select_cpu_near() and let queue_work_node() use queue_work_on() and queue_work_cpumask() (a rough sketch of what I mean is also appended below).

I tested this patch with 100,000 calls to queue_work_cpumask() and queue_work_node() with randomly generated cpumask and node inputs; it works as expected on my machines (80-core x86_64, 7-core arm64, and 16-core arm64).

Please help review, thanks a lot!

Thanks,
Tio Zhang
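P.S. For completeness, here is a fuller, self-contained version of the caller snippet above. It is only an illustrative sketch: the helper name my_queue_off_busy_cpus() and the hard-coded CPUs 0 and 6 are made up for this example, and queue_work_cpumask() is the API proposed by this patch; everything else is existing kernel API.

#include <linux/cpumask.h>
#include <linux/gfp.h>
#include <linux/workqueue.h>

/* Illustrative only: queue @work on any online CPU except 0 and 6. */
static bool my_queue_off_busy_cpus(struct workqueue_struct *wq,
                                   struct work_struct *work)
{
        cpumask_var_t mask;
        bool ret;

        if (!zalloc_cpumask_var(&mask, GFP_KERNEL))
                return queue_work(wq, work);    /* fall back to default placement */

        cpumask_copy(mask, cpu_online_mask);
        cpumask_clear_cpu(0, mask);             /* keep off the IRQ-heavy CPUs */
        cpumask_clear_cpu(6, mask);

        if (cpumask_empty(mask))
                ret = queue_work(wq, work);
        else
                ret = queue_work_cpumask(mask, wq, work);

        free_cpumask_var(mask);
        return ret;
}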
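And to illustrate change 2, this is roughly the shape I have in mind for queue_work_node() inside kernel/workqueue.c once workqueue_select_cpu_near() is gone. Again, this is only a sketch of the idea, not the actual v2 diff; queue_work_cpumask() is the proposed API, the rest already exists in the kernel.

/*
 * Sketch only, not the v2 diff: delegate to queue_work_on() /
 * queue_work_cpumask() instead of the removed workqueue_select_cpu_near().
 */
bool queue_work_node(int node, struct workqueue_struct *wq,
                     struct work_struct *work)
{
        const struct cpumask *mask;

        /* No node preference: let __queue_work() pick the placement. */
        if (node == NUMA_NO_NODE)
                return queue_work_on(WORK_CPU_UNBOUND, wq, work);

        mask = cpumask_of_node(node);

        if (wq->flags & WQ_UNBOUND) {
                /*
                 * Unbound workqueues pick the right per-node pwq in
                 * __queue_work() via unbound_pwq_by_node(), as long as
                 * the chosen CPU belongs to the requested node.
                 */
                unsigned int cpu = cpumask_any_and(mask, cpu_online_mask);

                return queue_work_on(cpu < nr_cpu_ids ? cpu : WORK_CPU_UNBOUND,
                                     wq, work);
        }

        /* Bound workqueues: restrict placement to the node's CPUs. */
        return queue_work_cpumask(mask, wq, work);
}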