Subject: Re: workqueue: Only kick a worker after thawed or for an unbound workqueue
From: Yunfeng Ye
To: Lai Jiangshan
CC: Tejun Heo, LKML, Shiyuan Hu, Hewenliang
References: <6e174c9f-5436-7d1c-0443-3ca21ff8dad7@huawei.com>
Date: Wed, 18 Nov 2020 17:05:20 +0800
In-Reply-To: <6e174c9f-5436-7d1c-0443-3ca21ff8dad7@huawei.com>
X-Mailing-List: linux-kernel@vger.kernel.org

On 2020/11/18 14:26, Yunfeng Ye wrote:
>
>
> On 2020/11/18 12:06, Lai Jiangshan wrote:
>> On Tue, Nov 17, 2020 at 3:33 PM Yunfeng Ye wrote:
>>>
>>>
>>> In realtime scenarios, we do not want any interference on the
>>> isolated cpu cores, but when invoking alloc_workqueue() for a percpu wq
>>> on a housekeeping cpu, it kicks a kworker on the isolated cpu:
>>>
>>>   alloc_workqueue
>>>     pwq_adjust_max_active
>>>       wake_up_worker
>>>
>>> The comment in pwq_adjust_max_active() says:
>>>     "Need to kick a worker after thawed or an unbound wq's
>>>      max_active is bumped"
>>>
>>> So it is unnecessary to kick a kworker for a percpu wq at
>>> alloc_workqueue() time. This patch only kicks a worker after thawing
>>> or for an unbound workqueue.
>>>
>>> Signed-off-by: Yunfeng Ye
>>> ---
>>>  kernel/workqueue.c | 18 +++++++++++++-----
>>>  1 file changed, 13 insertions(+), 5 deletions(-)
>>>
>>> diff --git a/kernel/workqueue.c b/kernel/workqueue.c
>>> index c41c3c17b86a..80f7bbd4889f 100644
>>> --- a/kernel/workqueue.c
>>> +++ b/kernel/workqueue.c
>>> @@ -3696,14 +3696,16 @@ static void pwq_unbound_release_workfn(struct work_struct *work)
>>>  }
>>>
>>>  /**
>>> - * pwq_adjust_max_active - update a pwq's max_active to the current setting
>>> + * pwq_adjust_max_active_and_kick - update a pwq's max_active to the current setting
>>>   * @pwq: target pool_workqueue
>>> + * @force_kick: force to kick a worker
>>>   *
>>>   * If @pwq isn't freezing, set @pwq->max_active to the associated
>>>   * workqueue's saved_max_active and activate delayed work items
>>>   * accordingly. If @pwq is freezing, clear @pwq->max_active to zero.
>>>   */
>>> -static void pwq_adjust_max_active(struct pool_workqueue *pwq)
>>> +static void pwq_adjust_max_active_and_kick(struct pool_workqueue *pwq,
>>> +					   bool force_kick)
>>>  {
>>>  	struct workqueue_struct *wq = pwq->wq;
>>>  	bool freezable = wq->flags & WQ_FREEZABLE;
>>> @@ -3733,9 +3735,10 @@ static void pwq_adjust_max_active(struct pool_workqueue *pwq)
>>>
>>>  	/*
>>>  	 * Need to kick a worker after thawed or an unbound wq's
>>> -	 * max_active is bumped. It's a slow path. Do it always.
>>> +	 * max_active is bumped.
>>
>>
>> Hello
>>
>> Thanks for reporting the problem.
>>
>> But I would rather not add an argument. The wake-up is done
>> unconditionally just because it was considered harmless and it is a
>> slow path. But it is still possible to detect whether the wake-up is
>> really needed based on the actual activation of delayed works.
>>
>> The previous lines are:
>>
>> 	while (!list_empty(&pwq->delayed_works) &&
>> 	       pwq->nr_active < pwq->max_active)
>> 		pwq_activate_first_delayed(pwq);
>>
>> And you can record the old pwq->nr_active before these lines:
>>
>> 	int old_nr_active = pwq->nr_active;
>>
>> 	while (!list_empty(&pwq->delayed_works) &&
>> 	       pwq->nr_active < pwq->max_active)
>> 		pwq_activate_first_delayed(pwq);
>>
>> 	/* please add more comments here, see 951a078a5 */
>> 	if (old_nr_active < pwq->nr_active) {
>> 		if (!old_nr_active || (wq->flags & WQ_UNBOUND))
>> 			wake_up_worker(pwq->pool);
>> 	}
>>
> Ok, I will send a patch v2.
> Thanks.
>

I think it is unnecessary to distinguish between percpu and unbound
wq's; always kick a worker based on the actual activation of delayed
works. It looks like this:

diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index c41c3c17b86a..cd551dcb2cc9 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -3725,17 +3725,23 @@ static void pwq_adjust_max_active(struct pool_workqueue *pwq)
 	 * is updated and visible.
 	 */
 	if (!freezable || !workqueue_freezing) {
+		bool kick = false;
+
 		pwq->max_active = wq->saved_max_active;

 		while (!list_empty(&pwq->delayed_works) &&
-		       pwq->nr_active < pwq->max_active)
+		       pwq->nr_active < pwq->max_active) {
 			pwq_activate_first_delayed(pwq);
+			kick = true;
+		}

 		/*
 		 * Need to kick a worker after thawed or an unbound wq's
-		 * max_active is bumped. It's a slow path. Do it always.
+		 * max_active is bumped. It's a slow path. Do it always
+		 * based on the actual activation of delayed works.
 		 */
-		wake_up_worker(pwq->pool);
+		if (kick)
+			wake_up_worker(pwq->pool);
 	} else {
 		pwq->max_active = 0;
 	}

Is it OK? Thanks.

>>
>> Thanks for your work.
>> Lai.
>>
>>>  	 */
>>> -	wake_up_worker(pwq->pool);
>>> +	if (force_kick || (wq->flags & WQ_UNBOUND))
>>> +		wake_up_worker(pwq->pool);
>>>  	} else {
>>>  		pwq->max_active = 0;
>>>  	}
>>> @@ -3743,6 +3746,11 @@ static void pwq_adjust_max_active(struct pool_workqueue *pwq)
>>>  	raw_spin_unlock_irqrestore(&pwq->pool->lock, flags);
>>>  }
>>>
>>> +static void pwq_adjust_max_active(struct pool_workqueue *pwq)
>>> +{
>>> +	pwq_adjust_max_active_and_kick(pwq, false);
>>> +}
>>> +
>>>  /* initialize newly alloced @pwq which is associated with @wq and @pool */
>>>  static void init_pwq(struct pool_workqueue *pwq, struct workqueue_struct *wq,
>>>  		     struct worker_pool *pool)
>>> @@ -5252,7 +5260,7 @@ void thaw_workqueues(void)
>>>  	list_for_each_entry(wq, &workqueues, list) {
>>>  		mutex_lock(&wq->mutex);
>>>  		for_each_pwq(pwq, wq)
>>> -			pwq_adjust_max_active(pwq);
>>> +			pwq_adjust_max_active_and_kick(pwq, true);
>>>  		mutex_unlock(&wq->mutex);
>>>  	}
>>>
>>> --
>>> 2.18.4
>> .
>>