2022-11-04 11:37:57

by Binglei Wang

[permalink] [raw]
Subject: [PATCH v2] workqueue: make worker threads stick to HK_TYPE_KTHREAD cpumask

From: Binglei Wang <[email protected]>

When a worker thread is newly created or rebound to a hotplugged
online CPU, set its affinity to the HK_TYPE_KTHREAD cpumask.
Making worker threads stick to the HK_TYPE_KTHREAD cpumask at all
times keeps the explicitly isolated (nohz_full) CPUs free from
interference.

Signed-off-by: Binglei Wang <[email protected]>
Reported-by: kernel test robot <[email protected]>
---

Notes:
v1 -> v2: fix the kernel test robot warning and error below

v1: https://lkml.org/lkml/2022/11/2/1566
All error/warnings (new ones prefixed by >>):

>> kernel/workqueue.c:1958:11: error: incompatible pointer types assigning to 'const struct cupmask *' from 'const struct cpumask *' [-Werror,-Wincompatible-pointer-types]
cpumask = housekeeping_cpumask(HK_TYPE_KTHREAD);
^ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
>> kernel/workqueue.c:1959:42: warning: pointer type mismatch ('const struct cupmask *' and 'struct cpumask *') [-Wpointer-type-mismatch]
kthread_bind_mask(worker->task, cpumask ? cpumask : pool->attrs->cpumask);
^ ~~~~~~~ ~~~~~~~~~~~~~~~~~~~~
1 warning and 1 error generated.

kernel/workqueue.c | 16 ++++++++++++++--
1 file changed, 14 insertions(+), 2 deletions(-)

diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 7cd5f5e7e..3a780f1a1 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -1928,6 +1928,7 @@ static struct worker *create_worker(struct worker_pool *pool)
struct worker *worker;
int id;
char id_buf[16];
+ const struct cpumask *cpumask;

/* ID is needed to determine kthread name */
id = ida_alloc(&pool->worker_ida, GFP_KERNEL);
@@ -1952,7 +1953,12 @@ static struct worker *create_worker(struct worker_pool *pool)
goto fail;

set_user_nice(worker->task, pool->attrs->nice);
- kthread_bind_mask(worker->task, pool->attrs->cpumask);
+
+ if (housekeeping_enabled(HK_TYPE_KTHREAD))
+ cpumask = housekeeping_cpumask(HK_TYPE_KTHREAD);
+ else
+ cpumask = (const struct cpumask *)pool->attrs->cpumask;
+ kthread_bind_mask(worker->task, cpumask);

/* successful, attach the worker to the pool */
worker_attach_to_pool(worker, pool);
@@ -5027,20 +5033,26 @@ static void unbind_workers(int cpu)
static void rebind_workers(struct worker_pool *pool)
{
struct worker *worker;
+ const struct cpumask *cpumask = NULL;

lockdep_assert_held(&wq_pool_attach_mutex);

+ if (housekeeping_enabled(HK_TYPE_KTHREAD))
+ cpumask = housekeeping_cpumask(HK_TYPE_KTHREAD);
+
/*
* Restore CPU affinity of all workers. As all idle workers should
* be on the run-queue of the associated CPU before any local
* wake-ups for concurrency management happen, restore CPU affinity
* of all workers first and then clear UNBOUND. As we're called
* from CPU_ONLINE, the following shouldn't fail.
+ *
+ * Also consider the housekeeping HK_TYPE_KTHREAD cpumask.
*/
for_each_pool_worker(worker, pool) {
kthread_set_per_cpu(worker->task, pool->cpu);
WARN_ON_ONCE(set_cpus_allowed_ptr(worker->task,
- pool->attrs->cpumask) < 0);
+ cpumask ? cpumask : pool->attrs->cpumask) < 0);
}

raw_spin_lock_irq(&pool->lock);
--
2.27.0