2022-08-04 08:45:09

by Lai Jiangshan

Subject: [RFC PATCH 2/8] workqueue: Make create_worker() safe against premature wakeups

From: Lai Jiangshan <[email protected]>

A system crashed with the following BUG() report:

[115147.050484] BUG: kernel NULL pointer dereference, address: 0000000000000000
[115147.050488] #PF: supervisor write access in kernel mode
[115147.050489] #PF: error_code(0x0002) - not-present page
[115147.050490] PGD 0 P4D 0
[115147.050494] Oops: 0002 [#1] PREEMPT_RT SMP NOPTI
[115147.050498] CPU: 1 PID: 16213 Comm: kthreadd Kdump: loaded Tainted: G O X 5.3.18-2-rt #1 SLE15-SP2 (unreleased)
[115147.050510] RIP: 0010:_raw_spin_lock_irq+0x14/0x30
[115147.050513] Code: 89 c6 e8 5f 7a 9b ff 66 90 c3 66 66 2e 0f 1f 84 00 00 00 00 00 90 0f 1f 44 00 00 fa 65 ff 05 fb 53 6c 55 31 c0 ba 01 00 00 00 <f0> 0f b1 17 75 01 c3 89 c6 e8 2e 7a 9b ff 66 90 c3 90 90 90 90 90
[115147.050514] RSP: 0018:ffffb0f68822fed8 EFLAGS: 00010046
[115147.050515] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
[115147.050516] RDX: 0000000000000001 RSI: 0000000000000002 RDI: 0000000000000000
[115147.050517] RBP: ffff9ca73af40a40 R08: 0000000000000001 R09: 0000000000027340
[115147.050519] R10: ffffb0f68822fe70 R11: 00000000000000a9 R12: ffffb0f688067dc0
[115147.050520] R13: ffff9ca77e9a8000 R14: ffff9ca7634ca780 R15: ffff9ca7634ca780
[115147.050521] FS: 0000000000000000(0000) GS:ffff9ca77fb00000(0000) knlGS:0000000000000000
[115147.050523] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[115147.050524] CR2: 00000000000000b8 CR3: 000000004472e000 CR4: 00000000003406e0
[115147.050524] Call Trace:
[115147.050533] worker_thread+0xb4/0x3c0
[115147.050538] ? process_one_work+0x4a0/0x4a0
[115147.050540] kthread+0x152/0x170
[115147.050542] ? kthread_park+0xa0/0xa0
[115147.050544] ret_from_fork+0x35/0x40

Further debugging showed that the worker thread was woken before
worker_attach_to_pool() finished in create_worker(). The reason it was
woken is not known yet; it might be some real-time kernel activity.

Any kthread is supposed to stay in TASK_UNINTERRUPTIBLE sleep
until it is explicitly woken. But a spurious wakeup might
break this expectation.

As a result, worker_thread() might read worker->pool before it has
been set by worker_attach_to_pool() in create_worker(). Or the worker
might leave idle before ever entering idle, or process work items
before being attached to the pool.

Also, manage_workers() might want to create yet another worker
before worker->pool->nr_workers is updated. It is kind of a
chicken-and-egg problem.

Synchronize these operations using a completion API. There are two
ways to do the synchronization: either the manager does the worker
initialization and the newly created worker waits for that
initialization to complete, or the newly created worker initializes
itself and the manager waits for the completion.

In the current code, the manager does the worker initialization,
depending on the kthread API to keep the new worker in
TASK_UNINTERRUPTIBLE sleep.

That guarantee is fragile, so one of the two synchronization schemes
should be chosen explicitly and the dependence avoided.

Having the newly created worker do its own initialization simplifies
the code further, so the second way is chosen.
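
For reference, a minimal sketch of the chosen scheme, simplified from
the diff below (ID setup, error handling and the later unlock are
omitted):

	/* manager side, in create_worker() */
	reinit_completion(&pool->created);
	worker->task = kthread_create_on_node(worker_thread, worker,
					      pool->node, "kworker/%s", id_buf);
	wake_up_process(worker->task);
	wait_for_completion(&pool->created);	/* worker attached and idle */

	/* worker side, at the top of worker_thread() */
	worker_attach_to_pool(worker, pool);
	raw_spin_lock_irq(&pool->lock);
	worker->pool->nr_workers++;
	worker_enter_idle(worker);
	complete(&pool->created);	/* lets create_worker() return */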

Note that worker->pool might then be read without wq_pool_attach_mutex.
A normal worker always stays in the same pool, so the locking rules
for the field are updated accordingly.

Also note that rescuer_thread() does not need this, because all the
values it needs are set before its kthread is created. A rescuer is
tied to a particular workqueue and is attached to different pools
only as needed.

Cc: Linus Torvalds <[email protected]>
Cc: "Eric W. Biederman" <[email protected]>
Cc: Tejun Heo <[email protected]>
Cc: Petr Mladek <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Wedson Almeida Filho <[email protected]>
Signed-off-by: Lai Jiangshan <[email protected]>
---
kernel/workqueue.c | 22 ++++++++++++++--------
kernel/workqueue_internal.h | 11 +++++++++--
2 files changed, 23 insertions(+), 10 deletions(-)

diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 928aad7d6123..f5b12c6778cc 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -176,6 +176,7 @@ struct worker_pool {
/* L: hash of busy workers */

struct worker *manager; /* L: purely informational */
+ struct completion created; /* create_worker(): worker created */
struct list_head workers; /* A: attached workers */
struct completion *detach_completion; /* all workers detached */

@@ -1942,6 +1943,7 @@ static struct worker *create_worker(struct worker_pool *pool)
goto fail;

worker->id = id;
+ worker->pool = pool;

if (pool->cpu >= 0)
snprintf(id_buf, sizeof(id_buf), "%d:%d%s", pool->cpu, id,
@@ -1949,6 +1951,7 @@ static struct worker *create_worker(struct worker_pool *pool)
else
snprintf(id_buf, sizeof(id_buf), "u%d:%d", pool->id, id);

+ reinit_completion(&pool->created);
worker->task = kthread_create_on_node(worker_thread, worker, pool->node,
"kworker/%s", id_buf);
if (IS_ERR(worker->task))
@@ -1957,15 +1960,9 @@ static struct worker *create_worker(struct worker_pool *pool)
set_user_nice(worker->task, pool->attrs->nice);
kthread_bind_mask(worker->task, pool->attrs->cpumask);

- /* successful, attach the worker to the pool */
- worker_attach_to_pool(worker, pool);
-
/* start the newly created worker */
- raw_spin_lock_irq(&pool->lock);
- worker->pool->nr_workers++;
- worker_enter_idle(worker);
wake_up_process(worker->task);
- raw_spin_unlock_irq(&pool->lock);
+ wait_for_completion(&pool->created);

return worker;

@@ -2383,10 +2380,17 @@ static int worker_thread(void *__worker)
struct worker *worker = __worker;
struct worker_pool *pool = worker->pool;

+ /* attach the worker to the pool */
+ worker_attach_to_pool(worker, pool);
+
/* tell the scheduler that this is a workqueue worker */
set_pf_worker(true);
-woke_up:
+
raw_spin_lock_irq(&pool->lock);
+ worker->pool->nr_workers++;
+ worker_enter_idle(worker);
+ complete(&pool->created);
+woke_up:

/* am I supposed to die? */
if (unlikely(worker->flags & WORKER_DIE)) {
@@ -2458,6 +2462,7 @@ static int worker_thread(void *__worker)
__set_current_state(TASK_IDLE);
raw_spin_unlock_irq(&pool->lock);
schedule();
+ raw_spin_lock_irq(&pool->lock);
goto woke_up;
}

@@ -3461,6 +3466,7 @@ static int init_worker_pool(struct worker_pool *pool)

timer_setup(&pool->mayday_timer, pool_mayday_timeout, 0);

+ init_completion(&pool->created);
INIT_LIST_HEAD(&pool->workers);

ida_init(&pool->worker_ida);
diff --git a/kernel/workqueue_internal.h b/kernel/workqueue_internal.h
index e00b1204a8e9..025861c4d1f6 100644
--- a/kernel/workqueue_internal.h
+++ b/kernel/workqueue_internal.h
@@ -37,8 +37,15 @@ struct worker {
/* 64 bytes boundary on 64bit, 32 on 32bit */

struct task_struct *task; /* I: worker task */
- struct worker_pool *pool; /* A: the associated pool */
- /* L: for rescuers */
+
+ /*
+ * The associated pool, locking rules:
+ * PF_WQ_WORKER: from the current worker
+ * PF_WQ_WORKER && wq_pool_attach_mutex: from remote tasks
+ * None: from the current worker when the worker is coming up
+ */
+ struct worker_pool *pool;
+
struct list_head node; /* A: anchored at pool->workers */
/* A: runs through worker->node */

--
2.19.1.6.gb485710b



2022-08-05 02:49:35

by Lai Jiangshan

Subject: Re: [RFC PATCH 2/8] workqueue: Make create_worker() safe against premature wakeups


On Thu, Aug 4, 2022 at 8:35 PM Hillf Danton <[email protected]> wrote:
>
> On Thu, 4 Aug 2022 16:41:29 +0800 Lai Jiangshan wrote:
> >
> > @@ -1942,6 +1943,7 @@ static struct worker *create_worker(struct worker_pool *pool)
> > goto fail;
> >
> > worker->id = id;
> > + worker->pool = pool;
> >
> > if (pool->cpu >= 0)
> > snprintf(id_buf, sizeof(id_buf), "%d:%d%s", pool->cpu, id,
> > @@ -1949,6 +1951,7 @@ static struct worker *create_worker(struct worker_pool *pool)
> > else
> > snprintf(id_buf, sizeof(id_buf), "u%d:%d", pool->id, id);
> >
> > + reinit_completion(&pool->created);
> > worker->task = kthread_create_on_node(worker_thread, worker, pool->node,
> > "kworker/%s", id_buf);
> > if (IS_ERR(worker->task))
> > @@ -1957,15 +1960,9 @@ static struct worker *create_worker(struct worker_pool *pool)
> > set_user_nice(worker->task, pool->attrs->nice);
> > kthread_bind_mask(worker->task, pool->attrs->cpumask);
> >
> > - /* successful, attach the worker to the pool */
> > - worker_attach_to_pool(worker, pool);
> > -
> > /* start the newly created worker */
> > - raw_spin_lock_irq(&pool->lock);
> > - worker->pool->nr_workers++;
> > - worker_enter_idle(worker);
> > wake_up_process(worker->task);
> > - raw_spin_unlock_irq(&pool->lock);
> > + wait_for_completion(&pool->created);
> >
> > return worker;
>
>       cpu0                    cpu1                    cpu2
>       ===                     ===                     ===
>                               complete
>
>       reinit_completion
>       wait_for_completion

reinit_completion() and wait_for_completion() are both called in
create_worker(). create_worker() itself is mutually exclusive, which
means no two create_worker() calls can run at the same time for the
same pool.

No work item can be added before the first initial create_worker()
returns for a new or first-online per-cpu pool, so there can be no
manager for the pool during that first create_worker().

The manager is the only worker that can call create_worker() for a
pool, except for the first initial create_worker().

And there is only ever one manager at a time after the first initial
create_worker().
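
So within a single, serialized create_worker() the ordering is (sketch):

	reinit_completion(&pool->created);	/* the new worker does not exist yet */
	worker->task = kthread_create_on_node(...);
	wake_up_process(worker->task);		/* complete() possible from now on */
	wait_for_completion(&pool->created);	/* consumes that complete() */

and any complete() from an earlier create_worker() has already been
consumed by that call's own wait_for_completion(), so it cannot leak
into the next one.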

The documentation style in some of the workqueue code is:
"/* locking rule: what it is */"

For example:
struct list_head worklist; /* L: list of pending works */
which means it is protected by pool->lock.

And for
struct completion created; /* create_worker(): worker created */
it means it is protected by the exclusive create_worker().

>
> Any chance for race above?

2022-08-16 22:09:22

by Tejun Heo

Subject: Re: [RFC PATCH 2/8] workqueue: Make create_worker() safe against premature wakeups

On Thu, Aug 04, 2022 at 04:41:29PM +0800, Lai Jiangshan wrote:
...
> Any kthread is supposed to stay in TASK_UNINTERRUPTIBLE sleep
> until it is explicitly woken. But a spurious wakeup might
> break this expectation.

I'd rephrase the above. It's more that we can't assume that a sleeping task
will stay sleeping and should expect spurious wakeups.

> @@ -176,6 +176,7 @@ struct worker_pool {
> /* L: hash of busy workers */
>
> struct worker *manager; /* L: purely informational */
> + struct completion created; /* create_worker(): worker created */

Can we define something like a worker_create_args struct which contains
a completion and pointers to the worker and pool, and use an on-stack
instance to carry the create parameters to the new worker thread? It's
kinda odd to have a persistent copy of the completion in the pool

> @@ -1949,6 +1951,7 @@ static struct worker *create_worker(struct worker_pool *pool)
> else
> snprintf(id_buf, sizeof(id_buf), "u%d:%d", pool->id, id);
>
> + reinit_completion(&pool->created);

which keeps getting reinitialized.
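
Something along these lines, maybe (just a hypothetical sketch, names
made up):

	struct worker_create_args {
		struct completion	done;	/* worker attached and idle */
		struct worker		*worker;
		struct worker_pool	*pool;
	};

	/* create_worker() could then use an on-stack instance: */
	struct worker_create_args args = { .worker = worker, .pool = pool };

	init_completion(&args.done);
	worker->task = kthread_create_on_node(worker_thread, &args, pool->node,
					      "kworker/%s", id_buf);
	...
	wake_up_process(worker->task);
	/* args must stay in scope until the worker signals done */
	wait_for_completion(&args.done);

worker_thread() would copy worker and pool out of the on-stack args
before signaling done, so create_worker() can return and release the
stack frame safely.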

> @@ -2383,10 +2380,17 @@ static int worker_thread(void *__worker)
> struct worker *worker = __worker;
> struct worker_pool *pool = worker->pool;
>
> + /* attach the worker to the pool */
> + worker_attach_to_pool(worker, pool);

It's also odd for the new worker to have pool already set and then we attach
to that pool.

> @@ -37,8 +37,15 @@ struct worker {
> /* 64 bytes boundary on 64bit, 32 on 32bit */
>
> struct task_struct *task; /* I: worker task */
> - struct worker_pool *pool; /* A: the associated pool */
> - /* L: for rescuers */
> +
> + /*
> + * The associated pool, locking rules:
> + * PF_WQ_WORKER: from the current worker
> + * PF_WQ_WORKER && wq_pool_attach_mutex: from remote tasks
> + * None: from the current worker when the worker is coming up
> + */
> + struct worker_pool *pool;

I have a difficult time understanding the above comment. Can you please
follow the same style as others?

I was hoping that this problem would be fixed through kthread changes but
that doesn't seem to have happened yet, and given that we need to keep
modifying cpumasks dynamically anyway (e.g. for unbound pool config
changes), solving it from the wq side is fine too, especially if we can
leverage the same code paths that the dynamic changes are using.

That said, some of the complexities come from CPU hotplug messing with
worker cpumasks and wq trying to restore them, and it seems likely that
all of this will be simpler with the persistent cpumask that Waiman is
working on. Lai, can you please take a look at that patchset?

Thanks.

--
tejun