Date: 2022-12-12 07:30:06
From: Richard Clark

Subject: [PATCH] workqueue: Prevent a new work item from being queued to a workqueue under destruction

Currently the __WQ_DRAINING flag is used to prevent new work items from
being queued to a draining workqueue, but this flag is cleared before the
end of the RCU grace period. Because the workqueue instance is only
actually freed after the RCU grace period, this leaves a window in which
a new work item can be queued to a workqueue that is being destroyed and
subsequently be scheduled. For instance, the code snippet below
demonstrates the problem:

static void work_func(struct work_struct *work)
{
	pr_info("%s scheduled\n", __func__);
}
static DECLARE_WORK(w0, work_func);

struct workqueue_struct *wq0 = alloc_workqueue("wq0", 0, 0);
destroy_workqueue(wq0);
queue_work_on(1, wq0, &w0);

The above work_func() can be scheduled on a workqueue that is already
being destroyed.

This patch closes that window by introducing a new flag, __WQ_DESTROYING,
which stays set until the workqueue is finally freed at the end of the
RCU grace period. With this change, the code above triggers a WARNING
and returns directly from __queue_work(), like this:

WARNING: CPU: 7 PID: 3994 at kernel/workqueue.c:1438 __queue_work+0x3ec/0x580
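
As a reference, a minimal sketch of a teardown ordering that never hits
this window is shown below; it reuses the w0/wq0 names from the snippet
above, and cancel_work_sync() is only one of several ways to make sure
the work item can no longer reach the workqueue:

cancel_work_sync(&w0);     /* w0 is neither pending nor running anymore */
destroy_workqueue(wq0);    /* nothing can requeue w0 at this point */
/*
 * Any later queue_work_on(1, wq0, &w0) is a caller bug; with
 * __WQ_DESTROYING set it is now rejected with the WARNING above
 * instead of racing with the RCU-deferred free of wq0.
 */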

Signed-off-by: Richard Clark <[email protected]>
---
include/linux/workqueue.h |  1 +
kernel/workqueue.c        | 11 ++++++++---
2 files changed, 9 insertions(+), 3 deletions(-)

diff --git a/include/linux/workqueue.h b/include/linux/workqueue.h
index a0143dd24430..ac551b8ee7d9 100644
--- a/include/linux/workqueue.h
+++ b/include/linux/workqueue.h
@@ -335,6 +335,7 @@ enum {
*/
WQ_POWER_EFFICIENT = 1 << 7,

+ __WQ_DESTROYING = 1 << 15, /* internal: workqueue is destroying */
__WQ_DRAINING = 1 << 16, /* internal: workqueue is draining */
__WQ_ORDERED = 1 << 17, /* internal: workqueue is ordered */
__WQ_LEGACY = 1 << 18, /* internal: create*_workqueue() */
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 39060a5d0905..09527c9db9cb 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -1433,9 +1433,9 @@ static void __queue_work(int cpu, struct workqueue_struct *wq,
lockdep_assert_irqs_disabled();


- /* if draining, only works from the same workqueue are allowed */
- if (unlikely(wq->flags & __WQ_DRAINING) &&
- WARN_ON_ONCE(!is_chained_work(wq)))
+ /* if destroying or draining, only works from the same workqueue are allowed */
+ if (unlikely(wq->flags & (__WQ_DESTROYING | __WQ_DRAINING)) &&
+     WARN_ON_ONCE(!is_chained_work(wq)))
return;
rcu_read_lock();
retry:
@@ -4414,6 +4414,11 @@ void destroy_workqueue(struct workqueue_struct *wq)
*/
workqueue_sysfs_unregister(wq);

+ /* mark the workqueue destruction is in progress */
+ mutex_lock(&wq->mutex);
+ wq->flags |= __WQ_DESTROYING;
+ mutex_unlock(&wq->mutex);
+
/* drain it before proceeding with destruction */
drain_workqueue(wq);

--
2.37.2