Subject: Re: workqueue: Only kick a worker after thawed or for an unbound workqueue
To: Lai Jiangshan
CC: Tejun Heo, LKML, Shiyuan Hu, Hewenliang
From: Yunfeng Ye
Message-ID: <6e174c9f-5436-7d1c-0443-3ca21ff8dad7@huawei.com>
Date: Wed, 18 Nov 2020 14:26:49 +0800
X-Mailing-List: linux-kernel@vger.kernel.org

On 2020/11/18 12:06, Lai Jiangshan wrote:
> On Tue, Nov 17, 2020 at 3:33 PM Yunfeng Ye wrote:
>>
>> In a realtime scenario, we do not want any interference on the
>> isolated CPU cores.
>> But when invoking alloc_workqueue() for a percpu wq
>> on a housekeeping cpu, it kicks a kworker on the isolated cpu:
>>
>>   alloc_workqueue
>>     pwq_adjust_max_active
>>       wake_up_worker
>>
>> The comment in pwq_adjust_max_active() says:
>>   "Need to kick a worker after thawed or an unbound wq's
>>    max_active is bumped"
>>
>> So it is unnecessary to kick a kworker for a percpu wq in
>> alloc_workqueue(). This patch only kicks a worker after thawed or
>> for an unbound workqueue.
>>
>> Signed-off-by: Yunfeng Ye
>> ---
>>  kernel/workqueue.c | 18 +++++++++++++-----
>>  1 file changed, 13 insertions(+), 5 deletions(-)
>>
>> diff --git a/kernel/workqueue.c b/kernel/workqueue.c
>> index c41c3c17b86a..80f7bbd4889f 100644
>> --- a/kernel/workqueue.c
>> +++ b/kernel/workqueue.c
>> @@ -3696,14 +3696,16 @@ static void pwq_unbound_release_workfn(struct work_struct *work)
>>  }
>>
>>  /**
>> - * pwq_adjust_max_active - update a pwq's max_active to the current setting
>> + * pwq_adjust_max_active_and_kick - update a pwq's max_active to the current setting
>>   * @pwq: target pool_workqueue
>> + * @force_kick: force to kick a worker
>>   *
>>   * If @pwq isn't freezing, set @pwq->max_active to the associated
>>   * workqueue's saved_max_active and activate delayed work items
>>   * accordingly. If @pwq is freezing, clear @pwq->max_active to zero.
>>   */
>> -static void pwq_adjust_max_active(struct pool_workqueue *pwq)
>> +static void pwq_adjust_max_active_and_kick(struct pool_workqueue *pwq,
>> +					   bool force_kick)
>>  {
>>  	struct workqueue_struct *wq = pwq->wq;
>>  	bool freezable = wq->flags & WQ_FREEZABLE;
>> @@ -3733,9 +3735,10 @@ static void pwq_adjust_max_active(struct pool_workqueue *pwq)
>>
>>  	/*
>>  	 * Need to kick a worker after thawed or an unbound wq's
>> -	 * max_active is bumped. It's a slow path. Do it always.
>> +	 * max_active is bumped.
>
>
> Hello
>
> Thanks for reporting the problem.
>
> But I don't like to add an argument.
> The wake-up is always called
> just because it was considered harmless and it is a slow
> path. But it is still possible to detect whether the wake-up
> is really needed, based on the actual activation of delayed works.
>
> The previous lines are:
>
>         while (!list_empty(&pwq->delayed_works) &&
>                pwq->nr_active < pwq->max_active)
>                 pwq_activate_first_delayed(pwq);
>
> And you can record the old pwq->nr_active before these lines:
>
>         int old_nr_active = pwq->nr_active;
>
>         while (!list_empty(&pwq->delayed_works) &&
>                pwq->nr_active < pwq->max_active)
>                 pwq_activate_first_delayed(pwq);
>
>         /* please add more comments here, see 951a078a5 */
>         if (old_nr_active < pwq->nr_active) {
>                 if (!old_nr_active || (wq->flags & WQ_UNBOUND))
>                         wake_up_worker(pwq->pool);
>         }
>

Ok, I will send a patch v2. Thanks.

> Thanks for your work.
> Lai.
>
>>  	 */
>> -	wake_up_worker(pwq->pool);
>> +	if (force_kick || (wq->flags & WQ_UNBOUND))
>> +		wake_up_worker(pwq->pool);
>>  	} else {
>>  		pwq->max_active = 0;
>>  	}
>> @@ -3743,6 +3746,11 @@ static void pwq_adjust_max_active(struct pool_workqueue *pwq)
>>  	raw_spin_unlock_irqrestore(&pwq->pool->lock, flags);
>>  }
>>
>> +static void pwq_adjust_max_active(struct pool_workqueue *pwq)
>> +{
>> +	pwq_adjust_max_active_and_kick(pwq, false);
>> +}
>> +
>>  /* initialize newly alloced @pwq which is associated with @wq and @pool */
>>  static void init_pwq(struct pool_workqueue *pwq, struct workqueue_struct *wq,
>>  		     struct worker_pool *pool)
>> @@ -5252,7 +5260,7 @@ void thaw_workqueues(void)
>>  	list_for_each_entry(wq, &workqueues, list) {
>>  		mutex_lock(&wq->mutex);
>>  		for_each_pwq(pwq, wq)
>> -			pwq_adjust_max_active(pwq);
>> +			pwq_adjust_max_active_and_kick(pwq, true);
>>  		mutex_unlock(&wq->mutex);
>>  	}
>>
>> --
>> 2.18.4