2012-07-09 18:41:49

by Tejun Heo

Subject: [PATCHSET] workqueue: reimplement high priority using a separate worker pool

Currently, WQ_HIGHPRI workqueues share the same worker pool as the
normal priority ones. The only difference is that work items from a
highpri wq are queued at the head of the worklist instead of the tail.
In pathological cases, this simplistic highpri implementation isn't
sufficient.

For example, block layer request_queue delayed processing uses a high
priority delayed_work to restart request processing after a short
delay. Unfortunately, it doesn't take much to push the latency
between the delay timer expiring and the work item executing into the
few-second range, leading to unintended long idling of the underlying
device. There seem to be real-world cases where this latency shows
up[1].

A simplistic test case is measuring queue-to-execution latencies with
a lot of threads saturating CPU cycles. Measured over a 300sec period
with 3000 0-nice threads performing 1ms sleeps continuously and a
highpri work item being repeatedly queued at 1 jiffy intervals on a
single-CPU machine, the top latency was 1624ms and the average of the
top 20 was 1268ms with a stdev of 927ms.

This patchset reimplements high priority workqueues so that they use
a separate worklist and worker pool. Each global_cwq now contains two
worker_pools - one for normal priority work items and the other for
high priority. Each pool has its own worklist and workers, and the
highpri pool is populated with worker threads running at a -20 nice
value.
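Schematically, the resulting layout and pool selection look like this (a
compilable sketch with abbreviated fields; the flag value and helper name are
illustrative, not the exact kernel code):

```c
#include <assert.h>

#define WQ_HIGHPRI      (1 << 4)	/* illustrative flag value */
#define NR_WORKER_POOLS 2		/* normal and highpri */

struct worker_pool {
	struct global_cwq *gcwq;	/* owning gcwq */
	/* worklist, nr_workers, nr_idle, idle_list, timers, worker_ida, ... */
};

struct global_cwq {
	unsigned int cpu;
	struct worker_pool pools[NR_WORKER_POOLS];
	/* pools[0]: normal pri, pools[1]: highpri (workers at nice -20) */
};

/* At queueing time, WQ_HIGHPRI selects the highpri pool's worklist. */
static inline int worker_pool_idx(unsigned int wq_flags)
{
	return wq_flags & WQ_HIGHPRI ? 1 : 0;
}
```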

This reimplementation brings the top latency down to 16ms, with a
top-20 average of 3.8ms and stdev of 5.6ms. The original block layer
bug hasn't been verified as fixed yet (Josh?).

The addition of separate worker pools doesn't add much complexity but
does add more threads per cpu. The highpri worker pool is expected to
remain small, but the extra threads are noticeable, especially when
the system is otherwise idle.

I'm cc'ing all WQ_HIGHPRI users - block, bio-integrity, crypto, gfs2,
xfs and bluetooth. Now you guys get proper high priority scheduling
for highpri work items; however, with more power comes more
responsibility.

In particular, the ones with both WQ_HIGHPRI and WQ_CPU_INTENSIVE -
bio-integrity and crypto - may end up dominating CPU usage. I think
it should be mostly okay for bio-integrity considering it sits right
in the block request completion path. I don't know enough about
tegra-aes though. aes_workqueue_handler() seems to mostly interact
with the hardware crypto. Is it actually CPU-cycle intensive?

This patchset contains the following six patches.

0001-workqueue-don-t-use-WQ_HIGHPRI-for-unbound-workqueue.patch
0002-workqueue-factor-out-worker_pool-from-global_cwq.patch
0003-workqueue-use-pool-instead-of-gcwq-or-cpu-where-appl.patch
0004-workqueue-separate-out-worker_pool-flags.patch
0005-workqueue-introduce-NR_WORKER_POOLS-and-for_each_wor.patch
0006-workqueue-reimplement-WQ_HIGHPRI-using-a-separate-wo.patch

0001 makes unbound wq not use WQ_HIGHPRI as its meaning is about to
change and will no longer suit the purpose unbound wq uses it for.

0002-0005 gradually pull worker_pool out of global_cwq and update the
code paths to be able to deal with multiple worker_pools per
global_cwq.

0006 replaces the head-queueing WQ_HIGHPRI implementation with one
based on a separate worker_pool, using the multiple-worker_pool
mechanism implemented by the preceding patches.

The patchset is available in the following git branch.

git://git.kernel.org/pub/scm/linux/kernel/git/tj/wq.git review-wq-highpri

diffstat follows.

Documentation/workqueue.txt | 103 ++----
include/trace/events/workqueue.h | 2
kernel/workqueue.c | 624 +++++++++++++++++++++------------------
3 files changed, 385 insertions(+), 344 deletions(-)

Thanks.

--
tejun

[1] https://lkml.org/lkml/2012/3/6/475



2012-07-09 18:41:51

by Tejun Heo

Subject: [PATCH 2/6] workqueue: factor out worker_pool from global_cwq

Move worklist and all worker management fields from global_cwq into
the new struct worker_pool. worker_pool points back to the containing
gcwq. worker and cpu_workqueue_struct are updated to point to
worker_pool instead of gcwq too.

This change is mechanical and doesn't introduce any functional
difference other than rearranging of fields and an added level of
indirection in some places. This is to prepare for multiple pools per
gcwq.

Signed-off-by: Tejun Heo <[email protected]>
---
include/trace/events/workqueue.h | 2 +-
kernel/workqueue.c | 216 ++++++++++++++++++++-----------------
2 files changed, 118 insertions(+), 100 deletions(-)

diff --git a/include/trace/events/workqueue.h b/include/trace/events/workqueue.h
index 4018f50..f28d1b6 100644
--- a/include/trace/events/workqueue.h
+++ b/include/trace/events/workqueue.h
@@ -54,7 +54,7 @@ TRACE_EVENT(workqueue_queue_work,
__entry->function = work->func;
__entry->workqueue = cwq->wq;
__entry->req_cpu = req_cpu;
- __entry->cpu = cwq->gcwq->cpu;
+ __entry->cpu = cwq->pool->gcwq->cpu;
),

TP_printk("work struct=%p function=%pf workqueue=%p req_cpu=%u cpu=%u",
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 27637c2..bc43a0c 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -115,6 +115,7 @@ enum {
*/

struct global_cwq;
+struct worker_pool;

/*
* The poor guys doing the actual heavy lifting. All on-duty workers
@@ -131,7 +132,7 @@ struct worker {
struct cpu_workqueue_struct *current_cwq; /* L: current_work's cwq */
struct list_head scheduled; /* L: scheduled works */
struct task_struct *task; /* I: worker task */
- struct global_cwq *gcwq; /* I: the associated gcwq */
+ struct worker_pool *pool; /* I: the associated pool */
/* 64 bytes boundary on 64bit, 32 on 32bit */
unsigned long last_active; /* L: last active timestamp */
unsigned int flags; /* X: flags */
@@ -139,6 +140,21 @@ struct worker {
struct work_struct rebind_work; /* L: rebind worker to cpu */
};

+struct worker_pool {
+ struct global_cwq *gcwq; /* I: the owning gcwq */
+
+ struct list_head worklist; /* L: list of pending works */
+ int nr_workers; /* L: total number of workers */
+ int nr_idle; /* L: currently idle ones */
+
+ struct list_head idle_list; /* X: list of idle workers */
+ struct timer_list idle_timer; /* L: worker idle timeout */
+ struct timer_list mayday_timer; /* L: SOS timer for dworkers */
+
+ struct ida worker_ida; /* L: for worker IDs */
+ struct worker *first_idle; /* L: first idle worker */
+};
+
/*
* Global per-cpu workqueue. There's one and only one for each cpu
* and all works are queued and processed here regardless of their
@@ -146,27 +162,18 @@ struct worker {
*/
struct global_cwq {
spinlock_t lock; /* the gcwq lock */
- struct list_head worklist; /* L: list of pending works */
unsigned int cpu; /* I: the associated cpu */
unsigned int flags; /* L: GCWQ_* flags */

- int nr_workers; /* L: total number of workers */
- int nr_idle; /* L: currently idle ones */
-
- /* workers are chained either in the idle_list or busy_hash */
- struct list_head idle_list; /* X: list of idle workers */
+ /* workers are chained either in busy_head or pool idle_list */
struct hlist_head busy_hash[BUSY_WORKER_HASH_SIZE];
/* L: hash of busy workers */

- struct timer_list idle_timer; /* L: worker idle timeout */
- struct timer_list mayday_timer; /* L: SOS timer for dworkers */
-
- struct ida worker_ida; /* L: for worker IDs */
+ struct worker_pool pool; /* the worker pools */

struct task_struct *trustee; /* L: for gcwq shutdown */
unsigned int trustee_state; /* L: trustee state */
wait_queue_head_t trustee_wait; /* trustee wait */
- struct worker *first_idle; /* L: first idle worker */
} ____cacheline_aligned_in_smp;

/*
@@ -175,7 +182,7 @@ struct global_cwq {
* aligned at two's power of the number of flag bits.
*/
struct cpu_workqueue_struct {
- struct global_cwq *gcwq; /* I: the associated gcwq */
+ struct worker_pool *pool; /* I: the associated pool */
struct workqueue_struct *wq; /* I: the owning workqueue */
int work_color; /* L: current color */
int flush_color; /* L: flushing color */
@@ -555,7 +562,7 @@ static struct global_cwq *get_work_gcwq(struct work_struct *work)

if (data & WORK_STRUCT_CWQ)
return ((struct cpu_workqueue_struct *)
- (data & WORK_STRUCT_WQ_DATA_MASK))->gcwq;
+ (data & WORK_STRUCT_WQ_DATA_MASK))->pool->gcwq;

cpu = data >> WORK_STRUCT_FLAG_BITS;
if (cpu == WORK_CPU_NONE)
@@ -587,13 +594,13 @@ static bool __need_more_worker(struct global_cwq *gcwq)
*/
static bool need_more_worker(struct global_cwq *gcwq)
{
- return !list_empty(&gcwq->worklist) && __need_more_worker(gcwq);
+ return !list_empty(&gcwq->pool.worklist) && __need_more_worker(gcwq);
}

/* Can I start working? Called from busy but !running workers. */
static bool may_start_working(struct global_cwq *gcwq)
{
- return gcwq->nr_idle;
+ return gcwq->pool.nr_idle;
}

/* Do I need to keep working? Called from currently running workers. */
@@ -601,7 +608,7 @@ static bool keep_working(struct global_cwq *gcwq)
{
atomic_t *nr_running = get_gcwq_nr_running(gcwq->cpu);

- return !list_empty(&gcwq->worklist) &&
+ return !list_empty(&gcwq->pool.worklist) &&
(atomic_read(nr_running) <= 1 ||
gcwq->flags & GCWQ_HIGHPRI_PENDING);
}
@@ -622,8 +629,8 @@ static bool need_to_manage_workers(struct global_cwq *gcwq)
static bool too_many_workers(struct global_cwq *gcwq)
{
bool managing = gcwq->flags & GCWQ_MANAGING_WORKERS;
- int nr_idle = gcwq->nr_idle + managing; /* manager is considered idle */
- int nr_busy = gcwq->nr_workers - nr_idle;
+ int nr_idle = gcwq->pool.nr_idle + managing; /* manager is considered idle */
+ int nr_busy = gcwq->pool.nr_workers - nr_idle;

return nr_idle > 2 && (nr_idle - 2) * MAX_IDLE_WORKERS_RATIO >= nr_busy;
}
@@ -635,10 +642,10 @@ static bool too_many_workers(struct global_cwq *gcwq)
/* Return the first worker. Safe with preemption disabled */
static struct worker *first_worker(struct global_cwq *gcwq)
{
- if (unlikely(list_empty(&gcwq->idle_list)))
+ if (unlikely(list_empty(&gcwq->pool.idle_list)))
return NULL;

- return list_first_entry(&gcwq->idle_list, struct worker, entry);
+ return list_first_entry(&gcwq->pool.idle_list, struct worker, entry);
}

/**
@@ -696,7 +703,8 @@ struct task_struct *wq_worker_sleeping(struct task_struct *task,
unsigned int cpu)
{
struct worker *worker = kthread_data(task), *to_wakeup = NULL;
- struct global_cwq *gcwq = get_gcwq(cpu);
+ struct worker_pool *pool = worker->pool;
+ struct global_cwq *gcwq = pool->gcwq;
atomic_t *nr_running = get_gcwq_nr_running(cpu);

if (worker->flags & WORKER_NOT_RUNNING)
@@ -716,7 +724,7 @@ struct task_struct *wq_worker_sleeping(struct task_struct *task,
* could be manipulating idle_list, so dereferencing idle_list
* without gcwq lock is safe.
*/
- if (atomic_dec_and_test(nr_running) && !list_empty(&gcwq->worklist))
+ if (atomic_dec_and_test(nr_running) && !list_empty(&pool->worklist))
to_wakeup = first_worker(gcwq);
return to_wakeup ? to_wakeup->task : NULL;
}
@@ -737,7 +745,8 @@ struct task_struct *wq_worker_sleeping(struct task_struct *task,
static inline void worker_set_flags(struct worker *worker, unsigned int flags,
bool wakeup)
{
- struct global_cwq *gcwq = worker->gcwq;
+ struct worker_pool *pool = worker->pool;
+ struct global_cwq *gcwq = pool->gcwq;

WARN_ON_ONCE(worker->task != current);

@@ -752,7 +761,7 @@ static inline void worker_set_flags(struct worker *worker, unsigned int flags,

if (wakeup) {
if (atomic_dec_and_test(nr_running) &&
- !list_empty(&gcwq->worklist))
+ !list_empty(&pool->worklist))
wake_up_worker(gcwq);
} else
atomic_dec(nr_running);
@@ -773,7 +782,7 @@ static inline void worker_set_flags(struct worker *worker, unsigned int flags,
*/
static inline void worker_clr_flags(struct worker *worker, unsigned int flags)
{
- struct global_cwq *gcwq = worker->gcwq;
+ struct global_cwq *gcwq = worker->pool->gcwq;
unsigned int oflags = worker->flags;

WARN_ON_ONCE(worker->task != current);
@@ -894,9 +903,9 @@ static inline struct list_head *gcwq_determine_ins_pos(struct global_cwq *gcwq,
struct work_struct *twork;

if (likely(!(cwq->wq->flags & WQ_HIGHPRI)))
- return &gcwq->worklist;
+ return &gcwq->pool.worklist;

- list_for_each_entry(twork, &gcwq->worklist, entry) {
+ list_for_each_entry(twork, &gcwq->pool.worklist, entry) {
struct cpu_workqueue_struct *tcwq = get_work_cwq(twork);

if (!(tcwq->wq->flags & WQ_HIGHPRI))
@@ -924,7 +933,7 @@ static void insert_work(struct cpu_workqueue_struct *cwq,
struct work_struct *work, struct list_head *head,
unsigned int extra_flags)
{
- struct global_cwq *gcwq = cwq->gcwq;
+ struct global_cwq *gcwq = cwq->pool->gcwq;

/* we own @work, set data and link */
set_work_cwq(work, cwq, extra_flags);
@@ -1196,7 +1205,8 @@ EXPORT_SYMBOL_GPL(queue_delayed_work_on);
*/
static void worker_enter_idle(struct worker *worker)
{
- struct global_cwq *gcwq = worker->gcwq;
+ struct worker_pool *pool = worker->pool;
+ struct global_cwq *gcwq = pool->gcwq;

BUG_ON(worker->flags & WORKER_IDLE);
BUG_ON(!list_empty(&worker->entry) &&
@@ -1204,15 +1214,15 @@ static void worker_enter_idle(struct worker *worker)

/* can't use worker_set_flags(), also called from start_worker() */
worker->flags |= WORKER_IDLE;
- gcwq->nr_idle++;
+ pool->nr_idle++;
worker->last_active = jiffies;

/* idle_list is LIFO */
- list_add(&worker->entry, &gcwq->idle_list);
+ list_add(&worker->entry, &pool->idle_list);

if (likely(!(worker->flags & WORKER_ROGUE))) {
- if (too_many_workers(gcwq) && !timer_pending(&gcwq->idle_timer))
- mod_timer(&gcwq->idle_timer,
+ if (too_many_workers(gcwq) && !timer_pending(&pool->idle_timer))
+ mod_timer(&pool->idle_timer,
jiffies + IDLE_WORKER_TIMEOUT);
} else
wake_up_all(&gcwq->trustee_wait);
@@ -1223,7 +1233,7 @@ static void worker_enter_idle(struct worker *worker)
* warning may trigger spuriously. Check iff trustee is idle.
*/
WARN_ON_ONCE(gcwq->trustee_state == TRUSTEE_DONE &&
- gcwq->nr_workers == gcwq->nr_idle &&
+ pool->nr_workers == pool->nr_idle &&
atomic_read(get_gcwq_nr_running(gcwq->cpu)));
}

@@ -1238,11 +1248,11 @@ static void worker_enter_idle(struct worker *worker)
*/
static void worker_leave_idle(struct worker *worker)
{
- struct global_cwq *gcwq = worker->gcwq;
+ struct worker_pool *pool = worker->pool;

BUG_ON(!(worker->flags & WORKER_IDLE));
worker_clr_flags(worker, WORKER_IDLE);
- gcwq->nr_idle--;
+ pool->nr_idle--;
list_del_init(&worker->entry);
}

@@ -1279,7 +1289,7 @@ static void worker_leave_idle(struct worker *worker)
static bool worker_maybe_bind_and_lock(struct worker *worker)
__acquires(&gcwq->lock)
{
- struct global_cwq *gcwq = worker->gcwq;
+ struct global_cwq *gcwq = worker->pool->gcwq;
struct task_struct *task = worker->task;

while (true) {
@@ -1321,7 +1331,7 @@ __acquires(&gcwq->lock)
static void worker_rebind_fn(struct work_struct *work)
{
struct worker *worker = container_of(work, struct worker, rebind_work);
- struct global_cwq *gcwq = worker->gcwq;
+ struct global_cwq *gcwq = worker->pool->gcwq;

if (worker_maybe_bind_and_lock(worker))
worker_clr_flags(worker, WORKER_REBIND);
@@ -1362,13 +1372,14 @@ static struct worker *alloc_worker(void)
static struct worker *create_worker(struct global_cwq *gcwq, bool bind)
{
bool on_unbound_cpu = gcwq->cpu == WORK_CPU_UNBOUND;
+ struct worker_pool *pool = &gcwq->pool;
struct worker *worker = NULL;
int id = -1;

spin_lock_irq(&gcwq->lock);
- while (ida_get_new(&gcwq->worker_ida, &id)) {
+ while (ida_get_new(&pool->worker_ida, &id)) {
spin_unlock_irq(&gcwq->lock);
- if (!ida_pre_get(&gcwq->worker_ida, GFP_KERNEL))
+ if (!ida_pre_get(&pool->worker_ida, GFP_KERNEL))
goto fail;
spin_lock_irq(&gcwq->lock);
}
@@ -1378,7 +1389,7 @@ static struct worker *create_worker(struct global_cwq *gcwq, bool bind)
if (!worker)
goto fail;

- worker->gcwq = gcwq;
+ worker->pool = pool;
worker->id = id;

if (!on_unbound_cpu)
@@ -1409,7 +1420,7 @@ static struct worker *create_worker(struct global_cwq *gcwq, bool bind)
fail:
if (id >= 0) {
spin_lock_irq(&gcwq->lock);
- ida_remove(&gcwq->worker_ida, id);
+ ida_remove(&pool->worker_ida, id);
spin_unlock_irq(&gcwq->lock);
}
kfree(worker);
@@ -1428,7 +1439,7 @@ fail:
static void start_worker(struct worker *worker)
{
worker->flags |= WORKER_STARTED;
- worker->gcwq->nr_workers++;
+ worker->pool->nr_workers++;
worker_enter_idle(worker);
wake_up_process(worker->task);
}
@@ -1444,7 +1455,8 @@ static void start_worker(struct worker *worker)
*/
static void destroy_worker(struct worker *worker)
{
- struct global_cwq *gcwq = worker->gcwq;
+ struct worker_pool *pool = worker->pool;
+ struct global_cwq *gcwq = pool->gcwq;
int id = worker->id;

/* sanity check frenzy */
@@ -1452,9 +1464,9 @@ static void destroy_worker(struct worker *worker)
BUG_ON(!list_empty(&worker->scheduled));

if (worker->flags & WORKER_STARTED)
- gcwq->nr_workers--;
+ pool->nr_workers--;
if (worker->flags & WORKER_IDLE)
- gcwq->nr_idle--;
+ pool->nr_idle--;

list_del_init(&worker->entry);
worker->flags |= WORKER_DIE;
@@ -1465,7 +1477,7 @@ static void destroy_worker(struct worker *worker)
kfree(worker);

spin_lock_irq(&gcwq->lock);
- ida_remove(&gcwq->worker_ida, id);
+ ida_remove(&pool->worker_ida, id);
}

static void idle_worker_timeout(unsigned long __gcwq)
@@ -1479,11 +1491,12 @@ static void idle_worker_timeout(unsigned long __gcwq)
unsigned long expires;

/* idle_list is kept in LIFO order, check the last one */
- worker = list_entry(gcwq->idle_list.prev, struct worker, entry);
+ worker = list_entry(gcwq->pool.idle_list.prev, struct worker,
+ entry);
expires = worker->last_active + IDLE_WORKER_TIMEOUT;

if (time_before(jiffies, expires))
- mod_timer(&gcwq->idle_timer, expires);
+ mod_timer(&gcwq->pool.idle_timer, expires);
else {
/* it's been idle for too long, wake up manager */
gcwq->flags |= GCWQ_MANAGE_WORKERS;
@@ -1504,7 +1517,7 @@ static bool send_mayday(struct work_struct *work)
return false;

/* mayday mayday mayday */
- cpu = cwq->gcwq->cpu;
+ cpu = cwq->pool->gcwq->cpu;
/* WORK_CPU_UNBOUND can't be set in cpumask, use cpu 0 instead */
if (cpu == WORK_CPU_UNBOUND)
cpu = 0;
@@ -1527,13 +1540,13 @@ static void gcwq_mayday_timeout(unsigned long __gcwq)
* allocation deadlock. Send distress signals to
* rescuers.
*/
- list_for_each_entry(work, &gcwq->worklist, entry)
+ list_for_each_entry(work, &gcwq->pool.worklist, entry)
send_mayday(work);
}

spin_unlock_irq(&gcwq->lock);

- mod_timer(&gcwq->mayday_timer, jiffies + MAYDAY_INTERVAL);
+ mod_timer(&gcwq->pool.mayday_timer, jiffies + MAYDAY_INTERVAL);
}

/**
@@ -1568,14 +1581,14 @@ restart:
spin_unlock_irq(&gcwq->lock);

/* if we don't make progress in MAYDAY_INITIAL_TIMEOUT, call for help */
- mod_timer(&gcwq->mayday_timer, jiffies + MAYDAY_INITIAL_TIMEOUT);
+ mod_timer(&gcwq->pool.mayday_timer, jiffies + MAYDAY_INITIAL_TIMEOUT);

while (true) {
struct worker *worker;

worker = create_worker(gcwq, true);
if (worker) {
- del_timer_sync(&gcwq->mayday_timer);
+ del_timer_sync(&gcwq->pool.mayday_timer);
spin_lock_irq(&gcwq->lock);
start_worker(worker);
BUG_ON(need_to_create_worker(gcwq));
@@ -1592,7 +1605,7 @@ restart:
break;
}

- del_timer_sync(&gcwq->mayday_timer);
+ del_timer_sync(&gcwq->pool.mayday_timer);
spin_lock_irq(&gcwq->lock);
if (need_to_create_worker(gcwq))
goto restart;
@@ -1622,11 +1635,12 @@ static bool maybe_destroy_workers(struct global_cwq *gcwq)
struct worker *worker;
unsigned long expires;

- worker = list_entry(gcwq->idle_list.prev, struct worker, entry);
+ worker = list_entry(gcwq->pool.idle_list.prev, struct worker,
+ entry);
expires = worker->last_active + IDLE_WORKER_TIMEOUT;

if (time_before(jiffies, expires)) {
- mod_timer(&gcwq->idle_timer, expires);
+ mod_timer(&gcwq->pool.idle_timer, expires);
break;
}

@@ -1659,7 +1673,7 @@ static bool maybe_destroy_workers(struct global_cwq *gcwq)
*/
static bool manage_workers(struct worker *worker)
{
- struct global_cwq *gcwq = worker->gcwq;
+ struct global_cwq *gcwq = worker->pool->gcwq;
bool ret = false;

if (gcwq->flags & GCWQ_MANAGING_WORKERS)
@@ -1732,7 +1746,7 @@ static void cwq_activate_first_delayed(struct cpu_workqueue_struct *cwq)
{
struct work_struct *work = list_first_entry(&cwq->delayed_works,
struct work_struct, entry);
- struct list_head *pos = gcwq_determine_ins_pos(cwq->gcwq, cwq);
+ struct list_head *pos = gcwq_determine_ins_pos(cwq->pool->gcwq, cwq);

trace_workqueue_activate_work(work);
move_linked_works(work, pos, NULL);
@@ -1808,7 +1822,8 @@ __releases(&gcwq->lock)
__acquires(&gcwq->lock)
{
struct cpu_workqueue_struct *cwq = get_work_cwq(work);
- struct global_cwq *gcwq = cwq->gcwq;
+ struct worker_pool *pool = worker->pool;
+ struct global_cwq *gcwq = pool->gcwq;
struct hlist_head *bwh = busy_worker_head(gcwq, work);
bool cpu_intensive = cwq->wq->flags & WQ_CPU_INTENSIVE;
work_func_t f = work->func;
@@ -1854,10 +1869,10 @@ __acquires(&gcwq->lock)
* wake up another worker; otherwise, clear HIGHPRI_PENDING.
*/
if (unlikely(gcwq->flags & GCWQ_HIGHPRI_PENDING)) {
- struct work_struct *nwork = list_first_entry(&gcwq->worklist,
- struct work_struct, entry);
+ struct work_struct *nwork = list_first_entry(&pool->worklist,
+ struct work_struct, entry);

- if (!list_empty(&gcwq->worklist) &&
+ if (!list_empty(&pool->worklist) &&
get_work_cwq(nwork)->wq->flags & WQ_HIGHPRI)
wake_up_worker(gcwq);
else
@@ -1950,7 +1965,8 @@ static void process_scheduled_works(struct worker *worker)
static int worker_thread(void *__worker)
{
struct worker *worker = __worker;
- struct global_cwq *gcwq = worker->gcwq;
+ struct worker_pool *pool = worker->pool;
+ struct global_cwq *gcwq = pool->gcwq;

/* tell the scheduler that this is a workqueue worker */
worker->task->flags |= PF_WQ_WORKER;
@@ -1990,7 +2006,7 @@ recheck:

do {
struct work_struct *work =
- list_first_entry(&gcwq->worklist,
+ list_first_entry(&pool->worklist,
struct work_struct, entry);

if (likely(!(*work_data_bits(work) & WORK_STRUCT_LINKED))) {
@@ -2064,14 +2080,15 @@ repeat:
for_each_mayday_cpu(cpu, wq->mayday_mask) {
unsigned int tcpu = is_unbound ? WORK_CPU_UNBOUND : cpu;
struct cpu_workqueue_struct *cwq = get_cwq(tcpu, wq);
- struct global_cwq *gcwq = cwq->gcwq;
+ struct worker_pool *pool = cwq->pool;
+ struct global_cwq *gcwq = pool->gcwq;
struct work_struct *work, *n;

__set_current_state(TASK_RUNNING);
mayday_clear_cpu(cpu, wq->mayday_mask);

/* migrate to the target cpu if possible */
- rescuer->gcwq = gcwq;
+ rescuer->pool = pool;
worker_maybe_bind_and_lock(rescuer);

/*
@@ -2079,7 +2096,7 @@ repeat:
* process'em.
*/
BUG_ON(!list_empty(&rescuer->scheduled));
- list_for_each_entry_safe(work, n, &gcwq->worklist, entry)
+ list_for_each_entry_safe(work, n, &pool->worklist, entry)
if (get_work_cwq(work) == cwq)
move_linked_works(work, scheduled, &n);

@@ -2216,7 +2233,7 @@ static bool flush_workqueue_prep_cwqs(struct workqueue_struct *wq,

for_each_cwq_cpu(cpu, wq) {
struct cpu_workqueue_struct *cwq = get_cwq(cpu, wq);
- struct global_cwq *gcwq = cwq->gcwq;
+ struct global_cwq *gcwq = cwq->pool->gcwq;

spin_lock_irq(&gcwq->lock);

@@ -2432,9 +2449,9 @@ reflush:
struct cpu_workqueue_struct *cwq = get_cwq(cpu, wq);
bool drained;

- spin_lock_irq(&cwq->gcwq->lock);
+ spin_lock_irq(&cwq->pool->gcwq->lock);
drained = !cwq->nr_active && list_empty(&cwq->delayed_works);
- spin_unlock_irq(&cwq->gcwq->lock);
+ spin_unlock_irq(&cwq->pool->gcwq->lock);

if (drained)
continue;
@@ -2474,7 +2491,7 @@ static bool start_flush_work(struct work_struct *work, struct wq_barrier *barr,
*/
smp_rmb();
cwq = get_work_cwq(work);
- if (unlikely(!cwq || gcwq != cwq->gcwq))
+ if (unlikely(!cwq || gcwq != cwq->pool->gcwq))
goto already_gone;
} else if (wait_executing) {
worker = find_worker_executing_work(gcwq, work);
@@ -3017,7 +3034,7 @@ struct workqueue_struct *__alloc_workqueue_key(const char *fmt,
struct global_cwq *gcwq = get_gcwq(cpu);

BUG_ON((unsigned long)cwq & WORK_STRUCT_FLAG_MASK);
- cwq->gcwq = gcwq;
+ cwq->pool = &gcwq->pool;
cwq->wq = wq;
cwq->flush_color = -1;
cwq->max_active = max_active;
@@ -3344,7 +3361,7 @@ static int __cpuinit trustee_thread(void *__gcwq)

gcwq->flags |= GCWQ_MANAGING_WORKERS;

- list_for_each_entry(worker, &gcwq->idle_list, entry)
+ list_for_each_entry(worker, &gcwq->pool.idle_list, entry)
worker->flags |= WORKER_ROGUE;

for_each_busy_worker(worker, i, pos, gcwq)
@@ -3369,7 +3386,7 @@ static int __cpuinit trustee_thread(void *__gcwq)
atomic_set(get_gcwq_nr_running(gcwq->cpu), 0);

spin_unlock_irq(&gcwq->lock);
- del_timer_sync(&gcwq->idle_timer);
+ del_timer_sync(&gcwq->pool.idle_timer);
spin_lock_irq(&gcwq->lock);

/*
@@ -3391,17 +3408,17 @@ static int __cpuinit trustee_thread(void *__gcwq)
* may be frozen works in freezable cwqs. Don't declare
* completion while frozen.
*/
- while (gcwq->nr_workers != gcwq->nr_idle ||
+ while (gcwq->pool.nr_workers != gcwq->pool.nr_idle ||
gcwq->flags & GCWQ_FREEZING ||
gcwq->trustee_state == TRUSTEE_IN_CHARGE) {
int nr_works = 0;

- list_for_each_entry(work, &gcwq->worklist, entry) {
+ list_for_each_entry(work, &gcwq->pool.worklist, entry) {
send_mayday(work);
nr_works++;
}

- list_for_each_entry(worker, &gcwq->idle_list, entry) {
+ list_for_each_entry(worker, &gcwq->pool.idle_list, entry) {
if (!nr_works--)
break;
wake_up_process(worker->task);
@@ -3428,11 +3445,11 @@ static int __cpuinit trustee_thread(void *__gcwq)
* all workers till we're canceled.
*/
do {
- rc = trustee_wait_event(!list_empty(&gcwq->idle_list));
- while (!list_empty(&gcwq->idle_list))
- destroy_worker(list_first_entry(&gcwq->idle_list,
+ rc = trustee_wait_event(!list_empty(&gcwq->pool.idle_list));
+ while (!list_empty(&gcwq->pool.idle_list))
+ destroy_worker(list_first_entry(&gcwq->pool.idle_list,
struct worker, entry));
- } while (gcwq->nr_workers && rc >= 0);
+ } while (gcwq->pool.nr_workers && rc >= 0);

/*
* At this point, either draining has completed and no worker
@@ -3441,7 +3458,7 @@ static int __cpuinit trustee_thread(void *__gcwq)
* Tell the remaining busy ones to rebind once it finishes the
* currently scheduled works by scheduling the rebind_work.
*/
- WARN_ON(!list_empty(&gcwq->idle_list));
+ WARN_ON(!list_empty(&gcwq->pool.idle_list));

for_each_busy_worker(worker, i, pos, gcwq) {
struct work_struct *rebind_work = &worker->rebind_work;
@@ -3522,7 +3539,7 @@ static int __devinit workqueue_cpu_callback(struct notifier_block *nfb,
kthread_bind(new_trustee, cpu);
/* fall through */
case CPU_UP_PREPARE:
- BUG_ON(gcwq->first_idle);
+ BUG_ON(gcwq->pool.first_idle);
new_worker = create_worker(gcwq, false);
if (!new_worker) {
if (new_trustee)
@@ -3544,8 +3561,8 @@ static int __devinit workqueue_cpu_callback(struct notifier_block *nfb,
wait_trustee_state(gcwq, TRUSTEE_IN_CHARGE);
/* fall through */
case CPU_UP_PREPARE:
- BUG_ON(gcwq->first_idle);
- gcwq->first_idle = new_worker;
+ BUG_ON(gcwq->pool.first_idle);
+ gcwq->pool.first_idle = new_worker;
break;

case CPU_DYING:
@@ -3562,8 +3579,8 @@ static int __devinit workqueue_cpu_callback(struct notifier_block *nfb,
gcwq->trustee_state = TRUSTEE_BUTCHER;
/* fall through */
case CPU_UP_CANCELED:
- destroy_worker(gcwq->first_idle);
- gcwq->first_idle = NULL;
+ destroy_worker(gcwq->pool.first_idle);
+ gcwq->pool.first_idle = NULL;
break;

case CPU_DOWN_FAILED:
@@ -3581,11 +3598,11 @@ static int __devinit workqueue_cpu_callback(struct notifier_block *nfb,
* take a look.
*/
spin_unlock_irq(&gcwq->lock);
- kthread_bind(gcwq->first_idle->task, cpu);
+ kthread_bind(gcwq->pool.first_idle->task, cpu);
spin_lock_irq(&gcwq->lock);
gcwq->flags |= GCWQ_MANAGE_WORKERS;
- start_worker(gcwq->first_idle);
- gcwq->first_idle = NULL;
+ start_worker(gcwq->pool.first_idle);
+ gcwq->pool.first_idle = NULL;
break;
}

@@ -3794,22 +3811,23 @@ static int __init init_workqueues(void)
struct global_cwq *gcwq = get_gcwq(cpu);

spin_lock_init(&gcwq->lock);
- INIT_LIST_HEAD(&gcwq->worklist);
+ gcwq->pool.gcwq = gcwq;
+ INIT_LIST_HEAD(&gcwq->pool.worklist);
gcwq->cpu = cpu;
gcwq->flags |= GCWQ_DISASSOCIATED;

- INIT_LIST_HEAD(&gcwq->idle_list);
+ INIT_LIST_HEAD(&gcwq->pool.idle_list);
for (i = 0; i < BUSY_WORKER_HASH_SIZE; i++)
INIT_HLIST_HEAD(&gcwq->busy_hash[i]);

- init_timer_deferrable(&gcwq->idle_timer);
- gcwq->idle_timer.function = idle_worker_timeout;
- gcwq->idle_timer.data = (unsigned long)gcwq;
+ init_timer_deferrable(&gcwq->pool.idle_timer);
+ gcwq->pool.idle_timer.function = idle_worker_timeout;
+ gcwq->pool.idle_timer.data = (unsigned long)gcwq;

- setup_timer(&gcwq->mayday_timer, gcwq_mayday_timeout,
+ setup_timer(&gcwq->pool.mayday_timer, gcwq_mayday_timeout,
(unsigned long)gcwq);

- ida_init(&gcwq->worker_ida);
+ ida_init(&gcwq->pool.worker_ida);

gcwq->trustee_state = TRUSTEE_DONE;
init_waitqueue_head(&gcwq->trustee_wait);
--
1.7.7.3

2012-07-09 18:41:55

by Tejun Heo

Subject: [PATCH 6/6] workqueue: reimplement WQ_HIGHPRI using a separate worker_pool

WQ_HIGHPRI was implemented by queueing highpri work items at the head
of the global worklist. Other than queueing at the head, they weren't
handled differently; unfortunately, this could lead to execution
latency of a few seconds on heavily loaded systems.

Now that workqueue code has been updated to deal with multiple
worker_pools per global_cwq, this patch reimplements WQ_HIGHPRI using
a separate worker_pool. NR_WORKER_POOLS is bumped to two and
gcwq->pools[0] is used for normal pri work items and ->pools[1] for
highpri. Highpri workers get a -20 nice level and have an 'H' suffix
in their names. Note that this change increases the number of
kworkers per cpu.
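The naming convention can be sketched as follows (hypothetical helper; in the
kernel the nice level itself is applied separately via set_user_nice()):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Highpri pool workers are named like normal ones plus an "H" suffix,
 * e.g. "kworker/0:1H" instead of "kworker/0:1". */
static void kworker_name(char *buf, size_t len, unsigned int cpu,
			 int id, int highpri)
{
	snprintf(buf, len, "kworker/%u:%d%s", cpu, id, highpri ? "H" : "");
}
```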

POOL_HIGHPRI_PENDING, pool_determine_ins_pos() and the highpri chain
wakeup code in process_one_work() are no longer needed and are
removed.

This allows proper prioritization of highpri work items and removes
high execution latency of highpri work items.

Signed-off-by: Tejun Heo <[email protected]>
Reported-by: Josh Hunt <[email protected]>
LKML-Reference: <[email protected]>
---
Documentation/workqueue.txt | 103 ++++++++++++++++---------------------------
kernel/workqueue.c | 100 +++++++++++------------------------------
2 files changed, 65 insertions(+), 138 deletions(-)

diff --git a/Documentation/workqueue.txt b/Documentation/workqueue.txt
index a0b577d..a6ab4b6 100644
--- a/Documentation/workqueue.txt
+++ b/Documentation/workqueue.txt
@@ -89,25 +89,28 @@ called thread-pools.

The cmwq design differentiates between the user-facing workqueues that
subsystems and drivers queue work items on and the backend mechanism
-which manages thread-pool and processes the queued work items.
+which manages thread-pools and processes the queued work items.

The backend is called gcwq. There is one gcwq for each possible CPU
-and one gcwq to serve work items queued on unbound workqueues.
+and one gcwq to serve work items queued on unbound workqueues. Each
+gcwq has two thread-pools - one for normal work items and the other
+for high priority ones.

Subsystems and drivers can create and queue work items through special
workqueue API functions as they see fit. They can influence some
aspects of the way the work items are executed by setting flags on the
workqueue they are putting the work item on. These flags include
-things like CPU locality, reentrancy, concurrency limits and more. To
-get a detailed overview refer to the API description of
+things like CPU locality, reentrancy, concurrency limits, priority and
+more. To get a detailed overview refer to the API description of
alloc_workqueue() below.

-When a work item is queued to a workqueue, the target gcwq is
-determined according to the queue parameters and workqueue attributes
-and appended on the shared worklist of the gcwq. For example, unless
-specifically overridden, a work item of a bound workqueue will be
-queued on the worklist of exactly that gcwq that is associated to the
-CPU the issuer is running on.
+When a work item is queued to a workqueue, the target gcwq and
+thread-pool is determined according to the queue parameters and
+workqueue attributes and appended on the shared worklist of the
+thread-pool. For example, unless specifically overridden, a work item
+of a bound workqueue will be queued on the worklist of either normal
+or highpri thread-pool of the gcwq that is associated to the CPU the
+issuer is running on.

For any worker pool implementation, managing the concurrency level
(how many execution contexts are active) is an important issue. cmwq
@@ -115,26 +118,26 @@ tries to keep the concurrency at a minimal but sufficient level.
Minimal to save resources and sufficient in that the system is used at
its full capacity.

-Each gcwq bound to an actual CPU implements concurrency management by
-hooking into the scheduler. The gcwq is notified whenever an active
-worker wakes up or sleeps and keeps track of the number of the
-currently runnable workers. Generally, work items are not expected to
-hog a CPU and consume many cycles. That means maintaining just enough
-concurrency to prevent work processing from stalling should be
-optimal. As long as there are one or more runnable workers on the
-CPU, the gcwq doesn't start execution of a new work, but, when the
-last running worker goes to sleep, it immediately schedules a new
-worker so that the CPU doesn't sit idle while there are pending work
-items. This allows using a minimal number of workers without losing
-execution bandwidth.
+Each thread-pool bound to an actual CPU implements concurrency
+management by hooking into the scheduler. The thread-pool is notified
+whenever an active worker wakes up or sleeps and keeps track of the
+number of the currently runnable workers. Generally, work items are
+not expected to hog a CPU and consume many cycles. That means
+maintaining just enough concurrency to prevent work processing from
+stalling should be optimal. As long as there are one or more runnable
+workers on the CPU, the thread-pool doesn't start execution of a new
+work, but, when the last running worker goes to sleep, it immediately
+schedules a new worker so that the CPU doesn't sit idle while there
+are pending work items. This allows using a minimal number of workers
+without losing execution bandwidth.

Keeping idle workers around doesn't cost other than the memory space
for kthreads, so cmwq holds onto idle ones for a while before killing
them.

For an unbound wq, the above concurrency management doesn't apply and
-the gcwq for the pseudo unbound CPU tries to start executing all work
-items as soon as possible. The responsibility of regulating
+the thread-pools for the pseudo unbound CPU try to start executing all
+work items as soon as possible. The responsibility of regulating
concurrency level is on the users. There is also a flag to mark a
bound wq to ignore the concurrency management. Please refer to the
API section for details.
@@ -205,31 +208,22 @@ resources, scheduled and executed.

WQ_HIGHPRI

- Work items of a highpri wq are queued at the head of the
- worklist of the target gcwq and start execution regardless of
- the current concurrency level. In other words, highpri work
- items will always start execution as soon as execution
- resource is available.
+ Work items of a highpri wq are queued to the highpri
+ thread-pool of the target gcwq. Highpri thread-pools are
+ served by worker threads with elevated nice level.

- Ordering among highpri work items is preserved - a highpri
- work item queued after another highpri work item will start
- execution after the earlier highpri work item starts.
-
- Although highpri work items are not held back by other
- runnable work items, they still contribute to the concurrency
- level. Highpri work items in runnable state will prevent
- non-highpri work items from starting execution.
-
- This flag is meaningless for unbound wq.
+ Note that normal and highpri thread-pools don't interact with
+ each other. Each maintains its separate pool of workers and
+ implements concurrency management among its workers.

WQ_CPU_INTENSIVE

Work items of a CPU intensive wq do not contribute to the
concurrency level. In other words, runnable CPU intensive
- work items will not prevent other work items from starting
- execution. This is useful for bound work items which are
- expected to hog CPU cycles so that their execution is
- regulated by the system scheduler.
+ work items will not prevent other work items in the same
+ thread-pool from starting execution. This is useful for bound
+ work items which are expected to hog CPU cycles so that their
+ execution is regulated by the system scheduler.

Although CPU intensive work items don't contribute to the
concurrency level, start of their executions is still
@@ -239,14 +233,6 @@ resources, scheduled and executed.

This flag is meaningless for unbound wq.

- WQ_HIGHPRI | WQ_CPU_INTENSIVE
-
- This combination makes the wq avoid interaction with
- concurrency management completely and behave as a simple
- per-CPU execution context provider. Work items queued on a
- highpri CPU-intensive wq start execution as soon as resources
- are available and don't affect execution of other work items.
-
@max_active:

@max_active determines the maximum number of execution contexts per
@@ -328,20 +314,7 @@ If @max_active == 2,
35 w2 wakes up and finishes

Now, let's assume w1 and w2 are queued to a different wq q1 which has
-WQ_HIGHPRI set,
-
- TIME IN MSECS EVENT
- 0 w1 and w2 start and burn CPU
- 5 w1 sleeps
- 10 w2 sleeps
- 10 w0 starts and burns CPU
- 15 w0 sleeps
- 15 w1 wakes up and finishes
- 20 w2 wakes up and finishes
- 25 w0 wakes up and burns CPU
- 30 w0 finishes
-
-If q1 has WQ_CPU_INTENSIVE set,
+WQ_CPU_INTENSIVE set,

TIME IN MSECS EVENT
0 w0 starts and burns CPU
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 9cbf3bc..e7f26cb 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -52,7 +52,6 @@ enum {
/* pool flags */
POOL_MANAGE_WORKERS = 1 << 0, /* need to manage workers */
POOL_MANAGING_WORKERS = 1 << 1, /* managing workers */
- POOL_HIGHPRI_PENDING = 1 << 2, /* highpri works on queue */

/* worker flags */
WORKER_STARTED = 1 << 0, /* started */
@@ -74,7 +73,7 @@ enum {
TRUSTEE_RELEASE = 3, /* release workers */
TRUSTEE_DONE = 4, /* trustee is done */

- NR_WORKER_POOLS = 1, /* # worker pools per gcwq */
+ NR_WORKER_POOLS = 2, /* # worker pools per gcwq */

BUSY_WORKER_HASH_ORDER = 6, /* 64 pointers */
BUSY_WORKER_HASH_SIZE = 1 << BUSY_WORKER_HASH_ORDER,
@@ -95,6 +94,7 @@ enum {
* all cpus. Give -20.
*/
RESCUER_NICE_LEVEL = -20,
+ HIGHPRI_NICE_LEVEL = -20,
};

/*
@@ -174,7 +174,7 @@ struct global_cwq {
struct hlist_head busy_hash[BUSY_WORKER_HASH_SIZE];
/* L: hash of busy workers */

- struct worker_pool pool; /* the worker pools */
+ struct worker_pool pools[2]; /* normal and highpri pools */

struct task_struct *trustee; /* L: for gcwq shutdown */
unsigned int trustee_state; /* L: trustee state */
@@ -277,7 +277,8 @@ EXPORT_SYMBOL_GPL(system_nrt_freezable_wq);
#include <trace/events/workqueue.h>

#define for_each_worker_pool(pool, gcwq) \
- for ((pool) = &(gcwq)->pool; (pool); (pool) = NULL)
+ for ((pool) = &(gcwq)->pools[0]; \
+ (pool) < &(gcwq)->pools[NR_WORKER_POOLS]; (pool)++)

#define for_each_busy_worker(worker, i, pos, gcwq) \
for (i = 0; i < BUSY_WORKER_HASH_SIZE; i++) \
@@ -473,6 +474,11 @@ static atomic_t unbound_pool_nr_running[NR_WORKER_POOLS] = {

static int worker_thread(void *__worker);

+static int worker_pool_pri(struct worker_pool *pool)
+{
+ return pool - pool->gcwq->pools;
+}
+
static struct global_cwq *get_gcwq(unsigned int cpu)
{
if (cpu != WORK_CPU_UNBOUND)
@@ -491,7 +497,7 @@ static atomic_t *get_pool_nr_running(struct worker_pool *pool)
else
nr_running = &unbound_pool_nr_running;

- return nr_running[0];
+ return nr_running[worker_pool_pri(pool)];
}

static struct cpu_workqueue_struct *get_cwq(unsigned int cpu,
@@ -588,15 +594,14 @@ static struct global_cwq *get_work_gcwq(struct work_struct *work)
}

/*
- * Policy functions. These define the policies on how the global
- * worker pool is managed. Unless noted otherwise, these functions
- * assume that they're being called with gcwq->lock held.
+ * Policy functions. These define the policies on how the global worker
+ * pools are managed. Unless noted otherwise, these functions assume that
+ * they're being called with gcwq->lock held.
*/

static bool __need_more_worker(struct worker_pool *pool)
{
- return !atomic_read(get_pool_nr_running(pool)) ||
- (pool->flags & POOL_HIGHPRI_PENDING);
+ return !atomic_read(get_pool_nr_running(pool));
}

/*
@@ -623,9 +628,7 @@ static bool keep_working(struct worker_pool *pool)
{
atomic_t *nr_running = get_pool_nr_running(pool);

- return !list_empty(&pool->worklist) &&
- (atomic_read(nr_running) <= 1 ||
- (pool->flags & POOL_HIGHPRI_PENDING));
+ return !list_empty(&pool->worklist) && atomic_read(nr_running) <= 1;
}

/* Do we need a new worker? Called from manager. */
@@ -894,43 +897,6 @@ static struct worker *find_worker_executing_work(struct global_cwq *gcwq,
}

/**
- * pool_determine_ins_pos - find insertion position
- * @pool: pool of interest
- * @cwq: cwq a work is being queued for
- *
- * A work for @cwq is about to be queued on @pool, determine insertion
- * position for the work. If @cwq is for HIGHPRI wq, the work is
- * queued at the head of the queue but in FIFO order with respect to
- * other HIGHPRI works; otherwise, at the end of the queue. This
- * function also sets POOL_HIGHPRI_PENDING flag to hint @pool that
- * there are HIGHPRI works pending.
- *
- * CONTEXT:
- * spin_lock_irq(gcwq->lock).
- *
- * RETURNS:
- * Pointer to inserstion position.
- */
-static inline struct list_head *pool_determine_ins_pos(struct worker_pool *pool,
- struct cpu_workqueue_struct *cwq)
-{
- struct work_struct *twork;
-
- if (likely(!(cwq->wq->flags & WQ_HIGHPRI)))
- return &pool->worklist;
-
- list_for_each_entry(twork, &pool->worklist, entry) {
- struct cpu_workqueue_struct *tcwq = get_work_cwq(twork);
-
- if (!(tcwq->wq->flags & WQ_HIGHPRI))
- break;
- }
-
- pool->flags |= POOL_HIGHPRI_PENDING;
- return &twork->entry;
-}
-
-/**
* insert_work - insert a work into gcwq
* @cwq: cwq @work belongs to
* @work: work to insert
@@ -1070,7 +1036,7 @@ static void __queue_work(unsigned int cpu, struct workqueue_struct *wq,
if (likely(cwq->nr_active < cwq->max_active)) {
trace_workqueue_activate_work(work);
cwq->nr_active++;
- worklist = pool_determine_ins_pos(cwq->pool, cwq);
+ worklist = &cwq->pool->worklist;
} else {
work_flags |= WORK_STRUCT_DELAYED;
worklist = &cwq->delayed_works;
@@ -1387,6 +1353,7 @@ static struct worker *create_worker(struct worker_pool *pool, bool bind)
{
struct global_cwq *gcwq = pool->gcwq;
bool on_unbound_cpu = gcwq->cpu == WORK_CPU_UNBOUND;
+ const char *pri = worker_pool_pri(pool) ? "H" : "";
struct worker *worker = NULL;
int id = -1;

@@ -1408,15 +1375,17 @@ static struct worker *create_worker(struct worker_pool *pool, bool bind)

if (!on_unbound_cpu)
worker->task = kthread_create_on_node(worker_thread,
- worker,
- cpu_to_node(gcwq->cpu),
- "kworker/%u:%d", gcwq->cpu, id);
+ worker, cpu_to_node(gcwq->cpu),
+ "kworker/%u:%d%s", gcwq->cpu, id, pri);
else
worker->task = kthread_create(worker_thread, worker,
- "kworker/u:%d", id);
+ "kworker/u:%d%s", id, pri);
if (IS_ERR(worker->task))
goto fail;

+ if (worker_pool_pri(pool))
+ set_user_nice(worker->task, HIGHPRI_NICE_LEVEL);
+
/*
* A rogue worker will become a regular one if CPU comes
* online later on. Make sure every worker has
@@ -1763,10 +1732,9 @@ static void cwq_activate_first_delayed(struct cpu_workqueue_struct *cwq)
{
struct work_struct *work = list_first_entry(&cwq->delayed_works,
struct work_struct, entry);
- struct list_head *pos = pool_determine_ins_pos(cwq->pool, cwq);

trace_workqueue_activate_work(work);
- move_linked_works(work, pos, NULL);
+ move_linked_works(work, &cwq->pool->worklist, NULL);
__clear_bit(WORK_STRUCT_DELAYED_BIT, work_data_bits(work));
cwq->nr_active++;
}
@@ -1882,21 +1850,6 @@ __acquires(&gcwq->lock)
list_del_init(&work->entry);

/*
- * If HIGHPRI_PENDING, check the next work, and, if HIGHPRI,
- * wake up another worker; otherwise, clear HIGHPRI_PENDING.
- */
- if (unlikely(pool->flags & POOL_HIGHPRI_PENDING)) {
- struct work_struct *nwork = list_first_entry(&pool->worklist,
- struct work_struct, entry);
-
- if (!list_empty(&pool->worklist) &&
- get_work_cwq(nwork)->wq->flags & WQ_HIGHPRI)
- wake_up_worker(pool);
- else
- pool->flags &= ~POOL_HIGHPRI_PENDING;
- }
-
- /*
* CPU intensive works don't participate in concurrency
* management. They're the scheduler's responsibility.
*/
@@ -3049,9 +3002,10 @@ struct workqueue_struct *__alloc_workqueue_key(const char *fmt,
for_each_cwq_cpu(cpu, wq) {
struct cpu_workqueue_struct *cwq = get_cwq(cpu, wq);
struct global_cwq *gcwq = get_gcwq(cpu);
+ int pool_idx = (bool)(flags & WQ_HIGHPRI);

BUG_ON((unsigned long)cwq & WORK_STRUCT_FLAG_MASK);
- cwq->pool = &gcwq->pool;
+ cwq->pool = &gcwq->pools[pool_idx];
cwq->wq = wq;
cwq->flush_color = -1;
cwq->max_active = max_active;
--
1.7.7.3

2012-07-09 18:41:54

by Tejun Heo

[permalink] [raw]
Subject: [PATCH 5/6] workqueue: introduce NR_WORKER_POOLS and for_each_worker_pool()

Introduce NR_WORKER_POOLS and for_each_worker_pool() and convert code
paths which need to manipulate all pools in a gcwq to use them.
NR_WORKER_POOLS is currently one and for_each_worker_pool() iterates
over only @gcwq->pool.

Note that nr_running is a per-pool property; it is converted to an
array with NR_WORKER_POOLS elements and renamed to pool_nr_running.

The changes in this patch are mechanical and don't cause any
functional difference. This is to prepare for multiple pools per
gcwq.

Signed-off-by: Tejun Heo <[email protected]>
---
kernel/workqueue.c | 225 ++++++++++++++++++++++++++++++++++++----------------
1 files changed, 155 insertions(+), 70 deletions(-)

diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index e700dcc..9cbf3bc 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -74,6 +74,8 @@ enum {
TRUSTEE_RELEASE = 3, /* release workers */
TRUSTEE_DONE = 4, /* trustee is done */

+ NR_WORKER_POOLS = 1, /* # worker pools per gcwq */
+
BUSY_WORKER_HASH_ORDER = 6, /* 64 pointers */
BUSY_WORKER_HASH_SIZE = 1 << BUSY_WORKER_HASH_ORDER,
BUSY_WORKER_HASH_MASK = BUSY_WORKER_HASH_SIZE - 1,
@@ -274,6 +276,9 @@ EXPORT_SYMBOL_GPL(system_nrt_freezable_wq);
#define CREATE_TRACE_POINTS
#include <trace/events/workqueue.h>

+#define for_each_worker_pool(pool, gcwq) \
+ for ((pool) = &(gcwq)->pool; (pool); (pool) = NULL)
+
#define for_each_busy_worker(worker, i, pos, gcwq) \
for (i = 0; i < BUSY_WORKER_HASH_SIZE; i++) \
hlist_for_each_entry(worker, pos, &gcwq->busy_hash[i], hentry)
@@ -454,7 +459,7 @@ static bool workqueue_freezing; /* W: have wqs started freezing? */
* try_to_wake_up(). Put it in a separate cacheline.
*/
static DEFINE_PER_CPU(struct global_cwq, global_cwq);
-static DEFINE_PER_CPU_SHARED_ALIGNED(atomic_t, gcwq_nr_running);
+static DEFINE_PER_CPU_SHARED_ALIGNED(atomic_t, pool_nr_running[NR_WORKER_POOLS]);

/*
* Global cpu workqueue and nr_running counter for unbound gcwq. The
@@ -462,7 +467,9 @@ static DEFINE_PER_CPU_SHARED_ALIGNED(atomic_t, gcwq_nr_running);
* workers have WORKER_UNBOUND set.
*/
static struct global_cwq unbound_global_cwq;
-static atomic_t unbound_gcwq_nr_running = ATOMIC_INIT(0); /* always 0 */
+static atomic_t unbound_pool_nr_running[NR_WORKER_POOLS] = {
+ [0 ... NR_WORKER_POOLS - 1] = ATOMIC_INIT(0), /* always 0 */
+};

static int worker_thread(void *__worker);

@@ -477,11 +484,14 @@ static struct global_cwq *get_gcwq(unsigned int cpu)
static atomic_t *get_pool_nr_running(struct worker_pool *pool)
{
int cpu = pool->gcwq->cpu;
+ atomic_t (*nr_running)[NR_WORKER_POOLS];

if (cpu != WORK_CPU_UNBOUND)
- return &per_cpu(gcwq_nr_running, cpu);
+ nr_running = &per_cpu(pool_nr_running, cpu);
else
- return &unbound_gcwq_nr_running;
+ nr_running = &unbound_pool_nr_running;
+
+ return nr_running[0];
}

static struct cpu_workqueue_struct *get_cwq(unsigned int cpu,
@@ -3345,9 +3355,30 @@ EXPORT_SYMBOL_GPL(work_busy);
__ret1 < 0 ? -1 : 0; \
})

+static bool gcwq_is_managing_workers(struct global_cwq *gcwq)
+{
+ struct worker_pool *pool;
+
+ for_each_worker_pool(pool, gcwq)
+ if (pool->flags & POOL_MANAGING_WORKERS)
+ return true;
+ return false;
+}
+
+static bool gcwq_has_idle_workers(struct global_cwq *gcwq)
+{
+ struct worker_pool *pool;
+
+ for_each_worker_pool(pool, gcwq)
+ if (!list_empty(&pool->idle_list))
+ return true;
+ return false;
+}
+
static int __cpuinit trustee_thread(void *__gcwq)
{
struct global_cwq *gcwq = __gcwq;
+ struct worker_pool *pool;
struct worker *worker;
struct work_struct *work;
struct hlist_node *pos;
@@ -3363,13 +3394,15 @@ static int __cpuinit trustee_thread(void *__gcwq)
* cancelled.
*/
BUG_ON(gcwq->cpu != smp_processor_id());
- rc = trustee_wait_event(!(gcwq->pool.flags & POOL_MANAGING_WORKERS));
+ rc = trustee_wait_event(!gcwq_is_managing_workers(gcwq));
BUG_ON(rc < 0);

- gcwq->pool.flags |= POOL_MANAGING_WORKERS;
+ for_each_worker_pool(pool, gcwq) {
+ pool->flags |= POOL_MANAGING_WORKERS;

- list_for_each_entry(worker, &gcwq->pool.idle_list, entry)
- worker->flags |= WORKER_ROGUE;
+ list_for_each_entry(worker, &pool->idle_list, entry)
+ worker->flags |= WORKER_ROGUE;
+ }

for_each_busy_worker(worker, i, pos, gcwq)
worker->flags |= WORKER_ROGUE;
@@ -3390,10 +3423,12 @@ static int __cpuinit trustee_thread(void *__gcwq)
* keep_working() are always true as long as the worklist is
* not empty.
*/
- atomic_set(get_pool_nr_running(&gcwq->pool), 0);
+ for_each_worker_pool(pool, gcwq)
+ atomic_set(get_pool_nr_running(pool), 0);

spin_unlock_irq(&gcwq->lock);
- del_timer_sync(&gcwq->pool.idle_timer);
+ for_each_worker_pool(pool, gcwq)
+ del_timer_sync(&pool->idle_timer);
spin_lock_irq(&gcwq->lock);

/*
@@ -3415,29 +3450,38 @@ static int __cpuinit trustee_thread(void *__gcwq)
* may be frozen works in freezable cwqs. Don't declare
* completion while frozen.
*/
- while (gcwq->pool.nr_workers != gcwq->pool.nr_idle ||
- gcwq->flags & GCWQ_FREEZING ||
- gcwq->trustee_state == TRUSTEE_IN_CHARGE) {
- int nr_works = 0;
+ while (true) {
+ bool busy = false;

- list_for_each_entry(work, &gcwq->pool.worklist, entry) {
- send_mayday(work);
- nr_works++;
- }
+ for_each_worker_pool(pool, gcwq)
+ busy |= pool->nr_workers != pool->nr_idle;

- list_for_each_entry(worker, &gcwq->pool.idle_list, entry) {
- if (!nr_works--)
- break;
- wake_up_process(worker->task);
- }
+ if (!busy && !(gcwq->flags & GCWQ_FREEZING) &&
+ gcwq->trustee_state != TRUSTEE_IN_CHARGE)
+ break;

- if (need_to_create_worker(&gcwq->pool)) {
- spin_unlock_irq(&gcwq->lock);
- worker = create_worker(&gcwq->pool, false);
- spin_lock_irq(&gcwq->lock);
- if (worker) {
- worker->flags |= WORKER_ROGUE;
- start_worker(worker);
+ for_each_worker_pool(pool, gcwq) {
+ int nr_works = 0;
+
+ list_for_each_entry(work, &pool->worklist, entry) {
+ send_mayday(work);
+ nr_works++;
+ }
+
+ list_for_each_entry(worker, &pool->idle_list, entry) {
+ if (!nr_works--)
+ break;
+ wake_up_process(worker->task);
+ }
+
+ if (need_to_create_worker(pool)) {
+ spin_unlock_irq(&gcwq->lock);
+ worker = create_worker(pool, false);
+ spin_lock_irq(&gcwq->lock);
+ if (worker) {
+ worker->flags |= WORKER_ROGUE;
+ start_worker(worker);
+ }
}
}

@@ -3452,11 +3496,18 @@ static int __cpuinit trustee_thread(void *__gcwq)
* all workers till we're canceled.
*/
do {
- rc = trustee_wait_event(!list_empty(&gcwq->pool.idle_list));
- while (!list_empty(&gcwq->pool.idle_list))
- destroy_worker(list_first_entry(&gcwq->pool.idle_list,
- struct worker, entry));
- } while (gcwq->pool.nr_workers && rc >= 0);
+ rc = trustee_wait_event(gcwq_has_idle_workers(gcwq));
+
+ i = 0;
+ for_each_worker_pool(pool, gcwq) {
+ while (!list_empty(&pool->idle_list)) {
+ worker = list_first_entry(&pool->idle_list,
+ struct worker, entry);
+ destroy_worker(worker);
+ }
+ i |= pool->nr_workers;
+ }
+ } while (i && rc >= 0);

/*
* At this point, either draining has completed and no worker
@@ -3465,7 +3516,8 @@ static int __cpuinit trustee_thread(void *__gcwq)
* Tell the remaining busy ones to rebind once it finishes the
* currently scheduled works by scheduling the rebind_work.
*/
- WARN_ON(!list_empty(&gcwq->pool.idle_list));
+ for_each_worker_pool(pool, gcwq)
+ WARN_ON(!list_empty(&pool->idle_list));

for_each_busy_worker(worker, i, pos, gcwq) {
struct work_struct *rebind_work = &worker->rebind_work;
@@ -3490,7 +3542,8 @@ static int __cpuinit trustee_thread(void *__gcwq)
}

/* relinquish manager role */
- gcwq->pool.flags &= ~POOL_MANAGING_WORKERS;
+ for_each_worker_pool(pool, gcwq)
+ pool->flags &= ~POOL_MANAGING_WORKERS;

/* notify completion */
gcwq->trustee = NULL;
@@ -3532,8 +3585,10 @@ static int __devinit workqueue_cpu_callback(struct notifier_block *nfb,
unsigned int cpu = (unsigned long)hcpu;
struct global_cwq *gcwq = get_gcwq(cpu);
struct task_struct *new_trustee = NULL;
- struct worker *uninitialized_var(new_worker);
+ struct worker *new_workers[NR_WORKER_POOLS] = { };
+ struct worker_pool *pool;
unsigned long flags;
+ int i;

action &= ~CPU_TASKS_FROZEN;

@@ -3546,12 +3601,12 @@ static int __devinit workqueue_cpu_callback(struct notifier_block *nfb,
kthread_bind(new_trustee, cpu);
/* fall through */
case CPU_UP_PREPARE:
- BUG_ON(gcwq->pool.first_idle);
- new_worker = create_worker(&gcwq->pool, false);
- if (!new_worker) {
- if (new_trustee)
- kthread_stop(new_trustee);
- return NOTIFY_BAD;
+ i = 0;
+ for_each_worker_pool(pool, gcwq) {
+ BUG_ON(pool->first_idle);
+ new_workers[i] = create_worker(pool, false);
+ if (!new_workers[i++])
+ goto err_destroy;
}
}

@@ -3568,8 +3623,11 @@ static int __devinit workqueue_cpu_callback(struct notifier_block *nfb,
wait_trustee_state(gcwq, TRUSTEE_IN_CHARGE);
/* fall through */
case CPU_UP_PREPARE:
- BUG_ON(gcwq->pool.first_idle);
- gcwq->pool.first_idle = new_worker;
+ i = 0;
+ for_each_worker_pool(pool, gcwq) {
+ BUG_ON(pool->first_idle);
+ pool->first_idle = new_workers[i++];
+ }
break;

case CPU_DYING:
@@ -3586,8 +3644,10 @@ static int __devinit workqueue_cpu_callback(struct notifier_block *nfb,
gcwq->trustee_state = TRUSTEE_BUTCHER;
/* fall through */
case CPU_UP_CANCELED:
- destroy_worker(gcwq->pool.first_idle);
- gcwq->pool.first_idle = NULL;
+ for_each_worker_pool(pool, gcwq) {
+ destroy_worker(pool->first_idle);
+ pool->first_idle = NULL;
+ }
break;

case CPU_DOWN_FAILED:
@@ -3604,18 +3664,32 @@ static int __devinit workqueue_cpu_callback(struct notifier_block *nfb,
* Put the first_idle in and request a real manager to
* take a look.
*/
- spin_unlock_irq(&gcwq->lock);
- kthread_bind(gcwq->pool.first_idle->task, cpu);
- spin_lock_irq(&gcwq->lock);
- gcwq->pool.flags |= POOL_MANAGE_WORKERS;
- start_worker(gcwq->pool.first_idle);
- gcwq->pool.first_idle = NULL;
+ for_each_worker_pool(pool, gcwq) {
+ spin_unlock_irq(&gcwq->lock);
+ kthread_bind(pool->first_idle->task, cpu);
+ spin_lock_irq(&gcwq->lock);
+ pool->flags |= POOL_MANAGE_WORKERS;
+ start_worker(pool->first_idle);
+ pool->first_idle = NULL;
+ }
break;
}

spin_unlock_irqrestore(&gcwq->lock, flags);

return notifier_from_errno(0);
+
+err_destroy:
+ if (new_trustee)
+ kthread_stop(new_trustee);
+
+ spin_lock_irqsave(&gcwq->lock, flags);
+ for (i = 0; i < NR_WORKER_POOLS; i++)
+ if (new_workers[i])
+ destroy_worker(new_workers[i]);
+ spin_unlock_irqrestore(&gcwq->lock, flags);
+
+ return NOTIFY_BAD;
}

#ifdef CONFIG_SMP
@@ -3774,6 +3848,7 @@ void thaw_workqueues(void)

for_each_gcwq_cpu(cpu) {
struct global_cwq *gcwq = get_gcwq(cpu);
+ struct worker_pool *pool;
struct workqueue_struct *wq;

spin_lock_irq(&gcwq->lock);
@@ -3795,7 +3870,8 @@ void thaw_workqueues(void)
cwq_activate_first_delayed(cwq);
}

- wake_up_worker(&gcwq->pool);
+ for_each_worker_pool(pool, gcwq)
+ wake_up_worker(pool);

spin_unlock_irq(&gcwq->lock);
}
@@ -3816,25 +3892,29 @@ static int __init init_workqueues(void)
/* initialize gcwqs */
for_each_gcwq_cpu(cpu) {
struct global_cwq *gcwq = get_gcwq(cpu);
+ struct worker_pool *pool;

spin_lock_init(&gcwq->lock);
- gcwq->pool.gcwq = gcwq;
- INIT_LIST_HEAD(&gcwq->pool.worklist);
gcwq->cpu = cpu;
gcwq->flags |= GCWQ_DISASSOCIATED;

- INIT_LIST_HEAD(&gcwq->pool.idle_list);
for (i = 0; i < BUSY_WORKER_HASH_SIZE; i++)
INIT_HLIST_HEAD(&gcwq->busy_hash[i]);

- init_timer_deferrable(&gcwq->pool.idle_timer);
- gcwq->pool.idle_timer.function = idle_worker_timeout;
- gcwq->pool.idle_timer.data = (unsigned long)&gcwq->pool;
+ for_each_worker_pool(pool, gcwq) {
+ pool->gcwq = gcwq;
+ INIT_LIST_HEAD(&pool->worklist);
+ INIT_LIST_HEAD(&pool->idle_list);
+
+ init_timer_deferrable(&pool->idle_timer);
+ pool->idle_timer.function = idle_worker_timeout;
+ pool->idle_timer.data = (unsigned long)pool;

- setup_timer(&gcwq->pool.mayday_timer, gcwq_mayday_timeout,
- (unsigned long)&gcwq->pool);
+ setup_timer(&pool->mayday_timer, gcwq_mayday_timeout,
+ (unsigned long)pool);

- ida_init(&gcwq->pool.worker_ida);
+ ida_init(&pool->worker_ida);
+ }

gcwq->trustee_state = TRUSTEE_DONE;
init_waitqueue_head(&gcwq->trustee_wait);
@@ -3843,15 +3923,20 @@ static int __init init_workqueues(void)
/* create the initial worker */
for_each_online_gcwq_cpu(cpu) {
struct global_cwq *gcwq = get_gcwq(cpu);
- struct worker *worker;
+ struct worker_pool *pool;

if (cpu != WORK_CPU_UNBOUND)
gcwq->flags &= ~GCWQ_DISASSOCIATED;
- worker = create_worker(&gcwq->pool, true);
- BUG_ON(!worker);
- spin_lock_irq(&gcwq->lock);
- start_worker(worker);
- spin_unlock_irq(&gcwq->lock);
+
+ for_each_worker_pool(pool, gcwq) {
+ struct worker *worker;
+
+ worker = create_worker(pool, true);
+ BUG_ON(!worker);
+ spin_lock_irq(&gcwq->lock);
+ start_worker(worker);
+ spin_unlock_irq(&gcwq->lock);
+ }
}

system_wq = alloc_workqueue("events", 0, 0);
--
1.7.7.3

2012-07-09 18:41:52

by Tejun Heo

[permalink] [raw]
Subject: [PATCH 3/6] workqueue: use @pool instead of @gcwq or @cpu where applicable

Modify all functions which deal with per-pool properties to pass
around @pool instead of @gcwq or @cpu.

The changes in this patch are mechanical and don't cause any
functional difference. This is to prepare for multiple pools per
gcwq.

Signed-off-by: Tejun Heo <[email protected]>
---
kernel/workqueue.c | 218 ++++++++++++++++++++++++++-------------------------
1 files changed, 111 insertions(+), 107 deletions(-)

diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index bc43a0c..9f82c25 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -471,8 +471,10 @@ static struct global_cwq *get_gcwq(unsigned int cpu)
return &unbound_global_cwq;
}

-static atomic_t *get_gcwq_nr_running(unsigned int cpu)
+static atomic_t *get_pool_nr_running(struct worker_pool *pool)
{
+ int cpu = pool->gcwq->cpu;
+
if (cpu != WORK_CPU_UNBOUND)
return &per_cpu(gcwq_nr_running, cpu);
else
@@ -578,10 +580,10 @@ static struct global_cwq *get_work_gcwq(struct work_struct *work)
* assume that they're being called with gcwq->lock held.
*/

-static bool __need_more_worker(struct global_cwq *gcwq)
+static bool __need_more_worker(struct worker_pool *pool)
{
- return !atomic_read(get_gcwq_nr_running(gcwq->cpu)) ||
- gcwq->flags & GCWQ_HIGHPRI_PENDING;
+ return !atomic_read(get_pool_nr_running(pool)) ||
+ pool->gcwq->flags & GCWQ_HIGHPRI_PENDING;
}

/*
@@ -592,45 +594,46 @@ static bool __need_more_worker(struct global_cwq *gcwq)
* function will always return %true for unbound gcwq as long as the
* worklist isn't empty.
*/
-static bool need_more_worker(struct global_cwq *gcwq)
+static bool need_more_worker(struct worker_pool *pool)
{
- return !list_empty(&gcwq->pool.worklist) && __need_more_worker(gcwq);
+ return !list_empty(&pool->worklist) && __need_more_worker(pool);
}

/* Can I start working? Called from busy but !running workers. */
-static bool may_start_working(struct global_cwq *gcwq)
+static bool may_start_working(struct worker_pool *pool)
{
- return gcwq->pool.nr_idle;
+ return pool->nr_idle;
}

/* Do I need to keep working? Called from currently running workers. */
-static bool keep_working(struct global_cwq *gcwq)
+static bool keep_working(struct worker_pool *pool)
{
- atomic_t *nr_running = get_gcwq_nr_running(gcwq->cpu);
+ atomic_t *nr_running = get_pool_nr_running(pool);

- return !list_empty(&gcwq->pool.worklist) &&
+ return !list_empty(&pool->worklist) &&
(atomic_read(nr_running) <= 1 ||
- gcwq->flags & GCWQ_HIGHPRI_PENDING);
+ pool->gcwq->flags & GCWQ_HIGHPRI_PENDING);
}

/* Do we need a new worker? Called from manager. */
-static bool need_to_create_worker(struct global_cwq *gcwq)
+static bool need_to_create_worker(struct worker_pool *pool)
{
- return need_more_worker(gcwq) && !may_start_working(gcwq);
+ return need_more_worker(pool) && !may_start_working(pool);
}

/* Do I need to be the manager? */
-static bool need_to_manage_workers(struct global_cwq *gcwq)
+static bool need_to_manage_workers(struct worker_pool *pool)
{
- return need_to_create_worker(gcwq) || gcwq->flags & GCWQ_MANAGE_WORKERS;
+ return need_to_create_worker(pool) ||
+ pool->gcwq->flags & GCWQ_MANAGE_WORKERS;
}

/* Do we have too many workers and should some go away? */
-static bool too_many_workers(struct global_cwq *gcwq)
+static bool too_many_workers(struct worker_pool *pool)
{
- bool managing = gcwq->flags & GCWQ_MANAGING_WORKERS;
- int nr_idle = gcwq->pool.nr_idle + managing; /* manager is considered idle */
- int nr_busy = gcwq->pool.nr_workers - nr_idle;
+ bool managing = pool->gcwq->flags & GCWQ_MANAGING_WORKERS;
+ int nr_idle = pool->nr_idle + managing; /* manager is considered idle */
+ int nr_busy = pool->nr_workers - nr_idle;

return nr_idle > 2 && (nr_idle - 2) * MAX_IDLE_WORKERS_RATIO >= nr_busy;
}
@@ -640,26 +643,26 @@ static bool too_many_workers(struct global_cwq *gcwq)
*/

/* Return the first worker. Safe with preemption disabled */
-static struct worker *first_worker(struct global_cwq *gcwq)
+static struct worker *first_worker(struct worker_pool *pool)
{
- if (unlikely(list_empty(&gcwq->pool.idle_list)))
+ if (unlikely(list_empty(&pool->idle_list)))
return NULL;

- return list_first_entry(&gcwq->pool.idle_list, struct worker, entry);
+ return list_first_entry(&pool->idle_list, struct worker, entry);
}

/**
* wake_up_worker - wake up an idle worker
- * @gcwq: gcwq to wake worker for
+ * @pool: worker pool to wake worker from
*
- * Wake up the first idle worker of @gcwq.
+ * Wake up the first idle worker of @pool.
*
* CONTEXT:
* spin_lock_irq(gcwq->lock).
*/
-static void wake_up_worker(struct global_cwq *gcwq)
+static void wake_up_worker(struct worker_pool *pool)
{
- struct worker *worker = first_worker(gcwq);
+ struct worker *worker = first_worker(pool);

if (likely(worker))
wake_up_process(worker->task);
@@ -681,7 +684,7 @@ void wq_worker_waking_up(struct task_struct *task, unsigned int cpu)
struct worker *worker = kthread_data(task);

if (!(worker->flags & WORKER_NOT_RUNNING))
- atomic_inc(get_gcwq_nr_running(cpu));
+ atomic_inc(get_pool_nr_running(worker->pool));
}

/**
@@ -704,8 +707,7 @@ struct task_struct *wq_worker_sleeping(struct task_struct *task,
{
struct worker *worker = kthread_data(task), *to_wakeup = NULL;
struct worker_pool *pool = worker->pool;
- struct global_cwq *gcwq = pool->gcwq;
- atomic_t *nr_running = get_gcwq_nr_running(cpu);
+ atomic_t *nr_running = get_pool_nr_running(pool);

if (worker->flags & WORKER_NOT_RUNNING)
return NULL;
@@ -725,7 +727,7 @@ struct task_struct *wq_worker_sleeping(struct task_struct *task,
* without gcwq lock is safe.
*/
if (atomic_dec_and_test(nr_running) && !list_empty(&pool->worklist))
- to_wakeup = first_worker(gcwq);
+ to_wakeup = first_worker(pool);
return to_wakeup ? to_wakeup->task : NULL;
}

@@ -746,7 +748,6 @@ static inline void worker_set_flags(struct worker *worker, unsigned int flags,
bool wakeup)
{
struct worker_pool *pool = worker->pool;
- struct global_cwq *gcwq = pool->gcwq;

WARN_ON_ONCE(worker->task != current);

@@ -757,12 +758,12 @@ static inline void worker_set_flags(struct worker *worker, unsigned int flags,
*/
if ((flags & WORKER_NOT_RUNNING) &&
!(worker->flags & WORKER_NOT_RUNNING)) {
- atomic_t *nr_running = get_gcwq_nr_running(gcwq->cpu);
+ atomic_t *nr_running = get_pool_nr_running(pool);

if (wakeup) {
if (atomic_dec_and_test(nr_running) &&
!list_empty(&pool->worklist))
- wake_up_worker(gcwq);
+ wake_up_worker(pool);
} else
atomic_dec(nr_running);
}
@@ -782,7 +783,7 @@ static inline void worker_set_flags(struct worker *worker, unsigned int flags,
*/
static inline void worker_clr_flags(struct worker *worker, unsigned int flags)
{
- struct global_cwq *gcwq = worker->pool->gcwq;
+ struct worker_pool *pool = worker->pool;
unsigned int oflags = worker->flags;

WARN_ON_ONCE(worker->task != current);
@@ -796,7 +797,7 @@ static inline void worker_clr_flags(struct worker *worker, unsigned int flags)
*/
if ((flags & WORKER_NOT_RUNNING) && (oflags & WORKER_NOT_RUNNING))
if (!(worker->flags & WORKER_NOT_RUNNING))
- atomic_inc(get_gcwq_nr_running(gcwq->cpu));
+ atomic_inc(get_pool_nr_running(pool));
}

/**
@@ -880,15 +881,15 @@ static struct worker *find_worker_executing_work(struct global_cwq *gcwq,
}

/**
- * gcwq_determine_ins_pos - find insertion position
- * @gcwq: gcwq of interest
+ * pool_determine_ins_pos - find insertion position
+ * @pool: pool of interest
* @cwq: cwq a work is being queued for
*
- * A work for @cwq is about to be queued on @gcwq, determine insertion
+ * A work for @cwq is about to be queued on @pool, determine insertion
* position for the work. If @cwq is for HIGHPRI wq, the work is
* queued at the head of the queue but in FIFO order with respect to
* other HIGHPRI works; otherwise, at the end of the queue. This
- * function also sets GCWQ_HIGHPRI_PENDING flag to hint @gcwq that
+ * function also sets GCWQ_HIGHPRI_PENDING flag to hint @pool that
* there are HIGHPRI works pending.
*
* CONTEXT:
@@ -897,22 +898,22 @@ static struct worker *find_worker_executing_work(struct global_cwq *gcwq,
* RETURNS:
* Pointer to insertion position.
*/
-static inline struct list_head *gcwq_determine_ins_pos(struct global_cwq *gcwq,
+static inline struct list_head *pool_determine_ins_pos(struct worker_pool *pool,
struct cpu_workqueue_struct *cwq)
{
struct work_struct *twork;

if (likely(!(cwq->wq->flags & WQ_HIGHPRI)))
- return &gcwq->pool.worklist;
+ return &pool->worklist;

- list_for_each_entry(twork, &gcwq->pool.worklist, entry) {
+ list_for_each_entry(twork, &pool->worklist, entry) {
struct cpu_workqueue_struct *tcwq = get_work_cwq(twork);

if (!(tcwq->wq->flags & WQ_HIGHPRI))
break;
}

- gcwq->flags |= GCWQ_HIGHPRI_PENDING;
+ pool->gcwq->flags |= GCWQ_HIGHPRI_PENDING;
return &twork->entry;
}

@@ -933,7 +934,7 @@ static void insert_work(struct cpu_workqueue_struct *cwq,
struct work_struct *work, struct list_head *head,
unsigned int extra_flags)
{
- struct global_cwq *gcwq = cwq->pool->gcwq;
+ struct worker_pool *pool = cwq->pool;

/* we own @work, set data and link */
set_work_cwq(work, cwq, extra_flags);
@@ -953,8 +954,8 @@ static void insert_work(struct cpu_workqueue_struct *cwq,
*/
smp_mb();

- if (__need_more_worker(gcwq))
- wake_up_worker(gcwq);
+ if (__need_more_worker(pool))
+ wake_up_worker(pool);
}

/*
@@ -1056,7 +1057,7 @@ static void __queue_work(unsigned int cpu, struct workqueue_struct *wq,
if (likely(cwq->nr_active < cwq->max_active)) {
trace_workqueue_activate_work(work);
cwq->nr_active++;
- worklist = gcwq_determine_ins_pos(gcwq, cwq);
+ worklist = pool_determine_ins_pos(cwq->pool, cwq);
} else {
work_flags |= WORK_STRUCT_DELAYED;
worklist = &cwq->delayed_works;
@@ -1221,7 +1222,7 @@ static void worker_enter_idle(struct worker *worker)
list_add(&worker->entry, &pool->idle_list);

if (likely(!(worker->flags & WORKER_ROGUE))) {
- if (too_many_workers(gcwq) && !timer_pending(&pool->idle_timer))
+ if (too_many_workers(pool) && !timer_pending(&pool->idle_timer))
mod_timer(&pool->idle_timer,
jiffies + IDLE_WORKER_TIMEOUT);
} else
@@ -1234,7 +1235,7 @@ static void worker_enter_idle(struct worker *worker)
*/
WARN_ON_ONCE(gcwq->trustee_state == TRUSTEE_DONE &&
pool->nr_workers == pool->nr_idle &&
- atomic_read(get_gcwq_nr_running(gcwq->cpu)));
+ atomic_read(get_pool_nr_running(pool)));
}

/**
@@ -1356,10 +1357,10 @@ static struct worker *alloc_worker(void)

/**
* create_worker - create a new workqueue worker
- * @gcwq: gcwq the new worker will belong to
+ * @pool: pool the new worker will belong to
* @bind: whether to set affinity to @cpu or not
*
- * Create a new worker which is bound to @gcwq. The returned worker
+ * Create a new worker which is bound to @pool. The returned worker
* can be started by calling start_worker() or destroyed using
* destroy_worker().
*
@@ -1369,10 +1370,10 @@ static struct worker *alloc_worker(void)
* RETURNS:
* Pointer to the newly created worker.
*/
-static struct worker *create_worker(struct global_cwq *gcwq, bool bind)
+static struct worker *create_worker(struct worker_pool *pool, bool bind)
{
+ struct global_cwq *gcwq = pool->gcwq;
bool on_unbound_cpu = gcwq->cpu == WORK_CPU_UNBOUND;
- struct worker_pool *pool = &gcwq->pool;
struct worker *worker = NULL;
int id = -1;

@@ -1480,27 +1481,27 @@ static void destroy_worker(struct worker *worker)
ida_remove(&pool->worker_ida, id);
}

-static void idle_worker_timeout(unsigned long __gcwq)
+static void idle_worker_timeout(unsigned long __pool)
{
- struct global_cwq *gcwq = (void *)__gcwq;
+ struct worker_pool *pool = (void *)__pool;
+ struct global_cwq *gcwq = pool->gcwq;

spin_lock_irq(&gcwq->lock);

- if (too_many_workers(gcwq)) {
+ if (too_many_workers(pool)) {
struct worker *worker;
unsigned long expires;

/* idle_list is kept in LIFO order, check the last one */
- worker = list_entry(gcwq->pool.idle_list.prev, struct worker,
- entry);
+ worker = list_entry(pool->idle_list.prev, struct worker, entry);
expires = worker->last_active + IDLE_WORKER_TIMEOUT;

if (time_before(jiffies, expires))
- mod_timer(&gcwq->pool.idle_timer, expires);
+ mod_timer(&pool->idle_timer, expires);
else {
/* it's been idle for too long, wake up manager */
gcwq->flags |= GCWQ_MANAGE_WORKERS;
- wake_up_worker(gcwq);
+ wake_up_worker(pool);
}
}

@@ -1526,37 +1527,38 @@ static bool send_mayday(struct work_struct *work)
return true;
}

-static void gcwq_mayday_timeout(unsigned long __gcwq)
+static void gcwq_mayday_timeout(unsigned long __pool)
{
- struct global_cwq *gcwq = (void *)__gcwq;
+ struct worker_pool *pool = (void *)__pool;
+ struct global_cwq *gcwq = pool->gcwq;
struct work_struct *work;

spin_lock_irq(&gcwq->lock);

- if (need_to_create_worker(gcwq)) {
+ if (need_to_create_worker(pool)) {
/*
* We've been trying to create a new worker but
* haven't been successful. We might be hitting an
* allocation deadlock. Send distress signals to
* rescuers.
*/
- list_for_each_entry(work, &gcwq->pool.worklist, entry)
+ list_for_each_entry(work, &pool->worklist, entry)
send_mayday(work);
}

spin_unlock_irq(&gcwq->lock);

- mod_timer(&gcwq->pool.mayday_timer, jiffies + MAYDAY_INTERVAL);
+ mod_timer(&pool->mayday_timer, jiffies + MAYDAY_INTERVAL);
}

/**
* maybe_create_worker - create a new worker if necessary
- * @gcwq: gcwq to create a new worker for
+ * @pool: pool to create a new worker for
*
- * Create a new worker for @gcwq if necessary. @gcwq is guaranteed to
+ * Create a new worker for @pool if necessary. @pool is guaranteed to
* have at least one idle worker on return from this function. If
* creating a new worker takes longer than MAYDAY_INTERVAL, mayday is
- * sent to all rescuers with works scheduled on @gcwq to resolve
+ * sent to all rescuers with works scheduled on @pool to resolve
* possible allocation deadlock.
*
* On return, need_to_create_worker() is guaranteed to be false and
@@ -1571,52 +1573,54 @@ static void gcwq_mayday_timeout(unsigned long __gcwq)
* false if no action was taken and gcwq->lock stayed locked, true
* otherwise.
*/
-static bool maybe_create_worker(struct global_cwq *gcwq)
+static bool maybe_create_worker(struct worker_pool *pool)
__releases(&gcwq->lock)
__acquires(&gcwq->lock)
{
- if (!need_to_create_worker(gcwq))
+ struct global_cwq *gcwq = pool->gcwq;
+
+ if (!need_to_create_worker(pool))
return false;
restart:
spin_unlock_irq(&gcwq->lock);

/* if we don't make progress in MAYDAY_INITIAL_TIMEOUT, call for help */
- mod_timer(&gcwq->pool.mayday_timer, jiffies + MAYDAY_INITIAL_TIMEOUT);
+ mod_timer(&pool->mayday_timer, jiffies + MAYDAY_INITIAL_TIMEOUT);

while (true) {
struct worker *worker;

- worker = create_worker(gcwq, true);
+ worker = create_worker(pool, true);
if (worker) {
- del_timer_sync(&gcwq->pool.mayday_timer);
+ del_timer_sync(&pool->mayday_timer);
spin_lock_irq(&gcwq->lock);
start_worker(worker);
- BUG_ON(need_to_create_worker(gcwq));
+ BUG_ON(need_to_create_worker(pool));
return true;
}

- if (!need_to_create_worker(gcwq))
+ if (!need_to_create_worker(pool))
break;

__set_current_state(TASK_INTERRUPTIBLE);
schedule_timeout(CREATE_COOLDOWN);

- if (!need_to_create_worker(gcwq))
+ if (!need_to_create_worker(pool))
break;
}

- del_timer_sync(&gcwq->pool.mayday_timer);
+ del_timer_sync(&pool->mayday_timer);
spin_lock_irq(&gcwq->lock);
- if (need_to_create_worker(gcwq))
+ if (need_to_create_worker(pool))
goto restart;
return true;
}

/**
* maybe_destroy_worker - destroy workers which have been idle for a while
- * @gcwq: gcwq to destroy workers for
+ * @pool: pool to destroy workers for
*
- * Destroy @gcwq workers which have been idle for longer than
+ * Destroy @pool workers which have been idle for longer than
* IDLE_WORKER_TIMEOUT.
*
* LOCKING:
@@ -1627,20 +1631,19 @@ restart:
* false if no action was taken and gcwq->lock stayed locked, true
* otherwise.
*/
-static bool maybe_destroy_workers(struct global_cwq *gcwq)
+static bool maybe_destroy_workers(struct worker_pool *pool)
{
bool ret = false;

- while (too_many_workers(gcwq)) {
+ while (too_many_workers(pool)) {
struct worker *worker;
unsigned long expires;

- worker = list_entry(gcwq->pool.idle_list.prev, struct worker,
- entry);
+ worker = list_entry(pool->idle_list.prev, struct worker, entry);
expires = worker->last_active + IDLE_WORKER_TIMEOUT;

if (time_before(jiffies, expires)) {
- mod_timer(&gcwq->pool.idle_timer, expires);
+ mod_timer(&pool->idle_timer, expires);
break;
}

@@ -1673,7 +1676,8 @@ static bool maybe_destroy_workers(struct global_cwq *gcwq)
*/
static bool manage_workers(struct worker *worker)
{
- struct global_cwq *gcwq = worker->pool->gcwq;
+ struct worker_pool *pool = worker->pool;
+ struct global_cwq *gcwq = pool->gcwq;
bool ret = false;

if (gcwq->flags & GCWQ_MANAGING_WORKERS)
@@ -1686,8 +1690,8 @@ static bool manage_workers(struct worker *worker)
* Destroy and then create so that may_start_working() is true
* on return.
*/
- ret |= maybe_destroy_workers(gcwq);
- ret |= maybe_create_worker(gcwq);
+ ret |= maybe_destroy_workers(pool);
+ ret |= maybe_create_worker(pool);

gcwq->flags &= ~GCWQ_MANAGING_WORKERS;

@@ -1746,7 +1750,7 @@ static void cwq_activate_first_delayed(struct cpu_workqueue_struct *cwq)
{
struct work_struct *work = list_first_entry(&cwq->delayed_works,
struct work_struct, entry);
- struct list_head *pos = gcwq_determine_ins_pos(cwq->pool->gcwq, cwq);
+ struct list_head *pos = pool_determine_ins_pos(cwq->pool, cwq);

trace_workqueue_activate_work(work);
move_linked_works(work, pos, NULL);
@@ -1874,7 +1878,7 @@ __acquires(&gcwq->lock)

if (!list_empty(&pool->worklist) &&
get_work_cwq(nwork)->wq->flags & WQ_HIGHPRI)
- wake_up_worker(gcwq);
+ wake_up_worker(pool);
else
gcwq->flags &= ~GCWQ_HIGHPRI_PENDING;
}
@@ -1890,8 +1894,8 @@ __acquires(&gcwq->lock)
* Unbound gcwq isn't concurrency managed and work items should be
* executed ASAP. Wake up another worker if necessary.
*/
- if ((worker->flags & WORKER_UNBOUND) && need_more_worker(gcwq))
- wake_up_worker(gcwq);
+ if ((worker->flags & WORKER_UNBOUND) && need_more_worker(pool))
+ wake_up_worker(pool);

spin_unlock_irq(&gcwq->lock);

@@ -1983,11 +1987,11 @@ woke_up:
worker_leave_idle(worker);
recheck:
/* no more worker necessary? */
- if (!need_more_worker(gcwq))
+ if (!need_more_worker(pool))
goto sleep;

/* do we need to manage? */
- if (unlikely(!may_start_working(gcwq)) && manage_workers(worker))
+ if (unlikely(!may_start_working(pool)) && manage_workers(worker))
goto recheck;

/*
@@ -2018,11 +2022,11 @@ recheck:
move_linked_works(work, &worker->scheduled, NULL);
process_scheduled_works(worker);
}
- } while (keep_working(gcwq));
+ } while (keep_working(pool));

worker_set_flags(worker, WORKER_PREP, false);
sleep:
- if (unlikely(need_to_manage_workers(gcwq)) && manage_workers(worker))
+ if (unlikely(need_to_manage_workers(pool)) && manage_workers(worker))
goto recheck;

/*
@@ -2107,8 +2111,8 @@ repeat:
* regular worker; otherwise, we end up with 0 concurrency
* and stalling the execution.
*/
- if (keep_working(gcwq))
- wake_up_worker(gcwq);
+ if (keep_working(pool))
+ wake_up_worker(pool);

spin_unlock_irq(&gcwq->lock);
}
@@ -3383,7 +3387,7 @@ static int __cpuinit trustee_thread(void *__gcwq)
* keep_working() are always true as long as the worklist is
* not empty.
*/
- atomic_set(get_gcwq_nr_running(gcwq->cpu), 0);
+ atomic_set(get_pool_nr_running(&gcwq->pool), 0);

spin_unlock_irq(&gcwq->lock);
del_timer_sync(&gcwq->pool.idle_timer);
@@ -3424,9 +3428,9 @@ static int __cpuinit trustee_thread(void *__gcwq)
wake_up_process(worker->task);
}

- if (need_to_create_worker(gcwq)) {
+ if (need_to_create_worker(&gcwq->pool)) {
spin_unlock_irq(&gcwq->lock);
- worker = create_worker(gcwq, false);
+ worker = create_worker(&gcwq->pool, false);
spin_lock_irq(&gcwq->lock);
if (worker) {
worker->flags |= WORKER_ROGUE;
@@ -3540,7 +3544,7 @@ static int __devinit workqueue_cpu_callback(struct notifier_block *nfb,
/* fall through */
case CPU_UP_PREPARE:
BUG_ON(gcwq->pool.first_idle);
- new_worker = create_worker(gcwq, false);
+ new_worker = create_worker(&gcwq->pool, false);
if (!new_worker) {
if (new_trustee)
kthread_stop(new_trustee);
@@ -3788,7 +3792,7 @@ void thaw_workqueues(void)
cwq_activate_first_delayed(cwq);
}

- wake_up_worker(gcwq);
+ wake_up_worker(&gcwq->pool);

spin_unlock_irq(&gcwq->lock);
}
@@ -3822,10 +3826,10 @@ static int __init init_workqueues(void)

init_timer_deferrable(&gcwq->pool.idle_timer);
gcwq->pool.idle_timer.function = idle_worker_timeout;
- gcwq->pool.idle_timer.data = (unsigned long)gcwq;
+ gcwq->pool.idle_timer.data = (unsigned long)&gcwq->pool;

setup_timer(&gcwq->pool.mayday_timer, gcwq_mayday_timeout,
- (unsigned long)gcwq);
+ (unsigned long)&gcwq->pool);

ida_init(&gcwq->pool.worker_ida);

@@ -3840,7 +3844,7 @@ static int __init init_workqueues(void)

if (cpu != WORK_CPU_UNBOUND)
gcwq->flags &= ~GCWQ_DISASSOCIATED;
- worker = create_worker(gcwq, true);
+ worker = create_worker(&gcwq->pool, true);
BUG_ON(!worker);
spin_lock_irq(&gcwq->lock);
start_worker(worker);
--
1.7.7.3

2012-07-09 18:41:53

by Tejun Heo

[permalink] [raw]
Subject: [PATCH 4/6] workqueue: separate out worker_pool flags

GCWQ_MANAGE_WORKERS, GCWQ_MANAGING_WORKERS and GCWQ_HIGHPRI_PENDING
are per-pool properties. Add worker_pool->flags and make the above
three flags per-pool flags.

The changes in this patch are mechanical and don't cause any
functional difference. This is to prepare for multiple pools per
gcwq.

Signed-off-by: Tejun Heo <[email protected]>
---
kernel/workqueue.c | 47 +++++++++++++++++++++++++----------------------
1 files changed, 25 insertions(+), 22 deletions(-)

diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 9f82c25..e700dcc 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -46,11 +46,13 @@

enum {
/* global_cwq flags */
- GCWQ_MANAGE_WORKERS = 1 << 0, /* need to manage workers */
- GCWQ_MANAGING_WORKERS = 1 << 1, /* managing workers */
- GCWQ_DISASSOCIATED = 1 << 2, /* cpu can't serve workers */
- GCWQ_FREEZING = 1 << 3, /* freeze in progress */
- GCWQ_HIGHPRI_PENDING = 1 << 4, /* highpri works on queue */
+ GCWQ_DISASSOCIATED = 1 << 0, /* cpu can't serve workers */
+ GCWQ_FREEZING = 1 << 1, /* freeze in progress */
+
+ /* pool flags */
+ POOL_MANAGE_WORKERS = 1 << 0, /* need to manage workers */
+ POOL_MANAGING_WORKERS = 1 << 1, /* managing workers */
+ POOL_HIGHPRI_PENDING = 1 << 2, /* highpri works on queue */

/* worker flags */
WORKER_STARTED = 1 << 0, /* started */
@@ -142,6 +144,7 @@ struct worker {

struct worker_pool {
struct global_cwq *gcwq; /* I: the owning gcwq */
+ unsigned int flags; /* X: flags */

struct list_head worklist; /* L: list of pending works */
int nr_workers; /* L: total number of workers */
@@ -583,7 +586,7 @@ static struct global_cwq *get_work_gcwq(struct work_struct *work)
static bool __need_more_worker(struct worker_pool *pool)
{
return !atomic_read(get_pool_nr_running(pool)) ||
- pool->gcwq->flags & GCWQ_HIGHPRI_PENDING;
+ (pool->flags & POOL_HIGHPRI_PENDING);
}

/*
@@ -612,7 +615,7 @@ static bool keep_working(struct worker_pool *pool)

return !list_empty(&pool->worklist) &&
(atomic_read(nr_running) <= 1 ||
- pool->gcwq->flags & GCWQ_HIGHPRI_PENDING);
+ (pool->flags & POOL_HIGHPRI_PENDING));
}

/* Do we need a new worker? Called from manager. */
@@ -625,13 +628,13 @@ static bool need_to_create_worker(struct worker_pool *pool)
static bool need_to_manage_workers(struct worker_pool *pool)
{
return need_to_create_worker(pool) ||
- pool->gcwq->flags & GCWQ_MANAGE_WORKERS;
+ (pool->flags & POOL_MANAGE_WORKERS);
}

/* Do we have too many workers and should some go away? */
static bool too_many_workers(struct worker_pool *pool)
{
- bool managing = pool->gcwq->flags & GCWQ_MANAGING_WORKERS;
+ bool managing = pool->flags & POOL_MANAGING_WORKERS;
int nr_idle = pool->nr_idle + managing; /* manager is considered idle */
int nr_busy = pool->nr_workers - nr_idle;

@@ -889,7 +892,7 @@ static struct worker *find_worker_executing_work(struct global_cwq *gcwq,
* position for the work. If @cwq is for HIGHPRI wq, the work is
* queued at the head of the queue but in FIFO order with respect to
* other HIGHPRI works; otherwise, at the end of the queue. This
- * function also sets GCWQ_HIGHPRI_PENDING flag to hint @pool that
+ * function also sets POOL_HIGHPRI_PENDING flag to hint @pool that
* there are HIGHPRI works pending.
*
* CONTEXT:
@@ -913,7 +916,7 @@ static inline struct list_head *pool_determine_ins_pos(struct worker_pool *pool,
break;
}

- pool->gcwq->flags |= GCWQ_HIGHPRI_PENDING;
+ pool->flags |= POOL_HIGHPRI_PENDING;
return &twork->entry;
}

@@ -1500,7 +1503,7 @@ static void idle_worker_timeout(unsigned long __pool)
mod_timer(&pool->idle_timer, expires);
else {
/* it's been idle for too long, wake up manager */
- gcwq->flags |= GCWQ_MANAGE_WORKERS;
+ pool->flags |= POOL_MANAGE_WORKERS;
wake_up_worker(pool);
}
}
@@ -1680,11 +1683,11 @@ static bool manage_workers(struct worker *worker)
struct global_cwq *gcwq = pool->gcwq;
bool ret = false;

- if (gcwq->flags & GCWQ_MANAGING_WORKERS)
+ if (pool->flags & POOL_MANAGING_WORKERS)
return ret;

- gcwq->flags &= ~GCWQ_MANAGE_WORKERS;
- gcwq->flags |= GCWQ_MANAGING_WORKERS;
+ pool->flags &= ~POOL_MANAGE_WORKERS;
+ pool->flags |= POOL_MANAGING_WORKERS;

/*
* Destroy and then create so that may_start_working() is true
@@ -1693,7 +1696,7 @@ static bool manage_workers(struct worker *worker)
ret |= maybe_destroy_workers(pool);
ret |= maybe_create_worker(pool);

- gcwq->flags &= ~GCWQ_MANAGING_WORKERS;
+ pool->flags &= ~POOL_MANAGING_WORKERS;

/*
* The trustee might be waiting to take over the manager
@@ -1872,7 +1875,7 @@ __acquires(&gcwq->lock)
* If HIGHPRI_PENDING, check the next work, and, if HIGHPRI,
* wake up another worker; otherwise, clear HIGHPRI_PENDING.
*/
- if (unlikely(gcwq->flags & GCWQ_HIGHPRI_PENDING)) {
+ if (unlikely(pool->flags & POOL_HIGHPRI_PENDING)) {
struct work_struct *nwork = list_first_entry(&pool->worklist,
struct work_struct, entry);

@@ -1880,7 +1883,7 @@ __acquires(&gcwq->lock)
get_work_cwq(nwork)->wq->flags & WQ_HIGHPRI)
wake_up_worker(pool);
else
- gcwq->flags &= ~GCWQ_HIGHPRI_PENDING;
+ pool->flags &= ~POOL_HIGHPRI_PENDING;
}

/*
@@ -3360,10 +3363,10 @@ static int __cpuinit trustee_thread(void *__gcwq)
* cancelled.
*/
BUG_ON(gcwq->cpu != smp_processor_id());
- rc = trustee_wait_event(!(gcwq->flags & GCWQ_MANAGING_WORKERS));
+ rc = trustee_wait_event(!(gcwq->pool.flags & POOL_MANAGING_WORKERS));
BUG_ON(rc < 0);

- gcwq->flags |= GCWQ_MANAGING_WORKERS;
+ gcwq->pool.flags |= POOL_MANAGING_WORKERS;

list_for_each_entry(worker, &gcwq->pool.idle_list, entry)
worker->flags |= WORKER_ROGUE;
@@ -3487,7 +3490,7 @@ static int __cpuinit trustee_thread(void *__gcwq)
}

/* relinquish manager role */
- gcwq->flags &= ~GCWQ_MANAGING_WORKERS;
+ gcwq->pool.flags &= ~POOL_MANAGING_WORKERS;

/* notify completion */
gcwq->trustee = NULL;
@@ -3604,7 +3607,7 @@ static int __devinit workqueue_cpu_callback(struct notifier_block *nfb,
spin_unlock_irq(&gcwq->lock);
kthread_bind(gcwq->pool.first_idle->task, cpu);
spin_lock_irq(&gcwq->lock);
- gcwq->flags |= GCWQ_MANAGE_WORKERS;
+ gcwq->pool.flags |= POOL_MANAGE_WORKERS;
start_worker(gcwq->pool.first_idle);
gcwq->pool.first_idle = NULL;
break;
--
1.7.7.3

2012-07-10 04:53:21

by Namhyung Kim

[permalink] [raw]
Subject: Re: [PATCH 2/6] workqueue: factor out worker_pool from global_cwq

Hi, Tejun

Just nitpicks..


On Mon, 9 Jul 2012 11:41:51 -0700, Tejun Heo wrote:
> Move worklist and all worker management fields from global_cwq into
> the new struct worker_pool. worker_pool points back to the containing
> gcwq. worker and cpu_workqueue_struct are updated to point to
> worker_pool instead of gcwq too.
>
> This change is mechanical and doesn't introduce any functional
> difference other than rearranging of fields and an added level of
> indirection in some places. This is to prepare for multiple pools per
> gcwq.
>
> Signed-off-by: Tejun Heo <[email protected]>
> ---
> include/trace/events/workqueue.h | 2 +-
> kernel/workqueue.c | 216 ++++++++++++++++++++-----------------
> 2 files changed, 118 insertions(+), 100 deletions(-)
>
> diff --git a/include/trace/events/workqueue.h b/include/trace/events/workqueue.h
> index 4018f50..f28d1b6 100644
> --- a/include/trace/events/workqueue.h
> +++ b/include/trace/events/workqueue.h
> @@ -54,7 +54,7 @@ TRACE_EVENT(workqueue_queue_work,
> __entry->function = work->func;
> __entry->workqueue = cwq->wq;
> __entry->req_cpu = req_cpu;
> - __entry->cpu = cwq->gcwq->cpu;
> + __entry->cpu = cwq->pool->gcwq->cpu;
> ),
>
> TP_printk("work struct=%p function=%pf workqueue=%p req_cpu=%u cpu=%u",
> diff --git a/kernel/workqueue.c b/kernel/workqueue.c
> index 27637c2..bc43a0c 100644
> --- a/kernel/workqueue.c
> +++ b/kernel/workqueue.c
> @@ -115,6 +115,7 @@ enum {
> */
>
> struct global_cwq;
> +struct worker_pool;
>
> /*
> * The poor guys doing the actual heavy lifting. All on-duty workers
> @@ -131,7 +132,7 @@ struct worker {
> struct cpu_workqueue_struct *current_cwq; /* L: current_work's cwq */
> struct list_head scheduled; /* L: scheduled works */
> struct task_struct *task; /* I: worker task */
> - struct global_cwq *gcwq; /* I: the associated gcwq */
> + struct worker_pool *pool; /* I: the associated pool */
> /* 64 bytes boundary on 64bit, 32 on 32bit */
> unsigned long last_active; /* L: last active timestamp */
> unsigned int flags; /* X: flags */
> @@ -139,6 +140,21 @@ struct worker {
> struct work_struct rebind_work; /* L: rebind worker to cpu */
> };
>
> +struct worker_pool {
> + struct global_cwq *gcwq; /* I: the owning gcwq */
> +
> + struct list_head worklist; /* L: list of pending works */
> + int nr_workers; /* L: total number of workers */
> + int nr_idle; /* L: currently idle ones */
> +
> + struct list_head idle_list; /* X: list of idle workers */
> + struct timer_list idle_timer; /* L: worker idle timeout */
> + struct timer_list mayday_timer; /* L: SOS timer for dworkers */

What is 'dworkers'?


> +
> + struct ida worker_ida; /* L: for worker IDs */
> + struct worker *first_idle; /* L: first idle worker */
> +};
> +
> /*
> * Global per-cpu workqueue. There's one and only one for each cpu
> * and all works are queued and processed here regardless of their
> @@ -146,27 +162,18 @@ struct worker {
> */
> struct global_cwq {
> spinlock_t lock; /* the gcwq lock */
> - struct list_head worklist; /* L: list of pending works */
> unsigned int cpu; /* I: the associated cpu */
> unsigned int flags; /* L: GCWQ_* flags */
>
> - int nr_workers; /* L: total number of workers */
> - int nr_idle; /* L: currently idle ones */
> -
> - /* workers are chained either in the idle_list or busy_hash */
> - struct list_head idle_list; /* X: list of idle workers */
> + /* workers are chained either in busy_head or pool idle_list */

s/busy_head/busy_hash/ ?

Thanks,
Namhyung


> struct hlist_head busy_hash[BUSY_WORKER_HASH_SIZE];
> /* L: hash of busy workers */
>
> - struct timer_list idle_timer; /* L: worker idle timeout */
> - struct timer_list mayday_timer; /* L: SOS timer for dworkers */
> -
> - struct ida worker_ida; /* L: for worker IDs */
> + struct worker_pool pool; /* the worker pools */
>
> struct task_struct *trustee; /* L: for gcwq shutdown */
> unsigned int trustee_state; /* L: trustee state */
> wait_queue_head_t trustee_wait; /* trustee wait */
> - struct worker *first_idle; /* L: first idle worker */
> } ____cacheline_aligned_in_smp;
>
> /*

2012-07-10 23:30:36

by Tony Luck

[permalink] [raw]
Subject: Re: [PATCH 3/6] workqueue: use @pool instead of @gcwq or @cpu where applicable

On Mon, Jul 9, 2012 at 11:41 AM, Tejun Heo <[email protected]> wrote:
> @@ -1234,7 +1235,7 @@ static void worker_enter_idle(struct worker *worker)
> */
> WARN_ON_ONCE(gcwq->trustee_state == TRUSTEE_DONE &&
> pool->nr_workers == pool->nr_idle &&
> - atomic_read(get_gcwq_nr_running(gcwq->cpu)));
> + atomic_read(get_pool_nr_running(pool)));
> }

Just had this WARN_ON_ONCE trigger on ia64 booting next-20120710. I
haven't bisected ... just noticed that two patches in this series tinker
with lines in this check. next-20120706 didn't generate the WARN.

-Tony

Mount-cache hash table entries: 1024
ACPI: Core revision 20120518
Boot processor id 0x0/0x0
------------[ cut here ]------------
WARNING: at kernel/workqueue.c:1217 worker_enter_idle+0x2d0/0x4a0()
Modules linked in:

Call Trace:
[<a0000001000154e0>] show_stack+0x80/0xa0
sp=e0000040600f7c30 bsp=e0000040600f0da8
[<a000000100d6e870>] dump_stack+0x30/0x50
sp=e0000040600f7e00 bsp=e0000040600f0d90
[<a0000001000730a0>] warn_slowpath_common+0xc0/0x100
sp=e0000040600f7e00 bsp=e0000040600f0d50
[<a000000100073120>] warn_slowpath_null+0x40/0x60
sp=e0000040600f7e00 bsp=e0000040600f0d28
[<a0000001000aaad0>] worker_enter_idle+0x2d0/0x4a0
sp=e0000040600f7e00 bsp=e0000040600f0cf0
[<a0000001000ad020>] worker_thread+0x4a0/0xbe0
sp=e0000040600f7e00 bsp=e0000040600f0c28
[<a0000001000bda70>] kthread+0x110/0x140
sp=e0000040600f7e00 bsp=e0000040600f0be8
[<a000000100013510>] kernel_thread_helper+0x30/0x60
sp=e0000040600f7e30 bsp=e0000040600f0bc0
[<a00000010000a0c0>] start_kernel_thread+0x20/0x40
sp=e0000040600f7e30 bsp=e0000040600f0bc0
---[ end trace 9501f2472a75a227 ]---

2012-07-12 17:05:19

by Tejun Heo

[permalink] [raw]
Subject: Re: [PATCH 6/6] workqueue: reimplement WQ_HIGHPRI using a separate worker_pool

Hello, Fengguang.

On Thu, Jul 12, 2012 at 09:06:48PM +0800, Fengguang Wu wrote:
> [ 0.207977] WARNING: at /c/kernel-tests/mm/kernel/workqueue.c:1217 worker_enter_idle+0x2b8/0x32b()
> [ 0.207977] Modules linked in:
> [ 0.207977] Pid: 1, comm: swapper/0 Not tainted 3.5.0-rc6-08414-g9645fff #15
> [ 0.207977] Call Trace:
> [ 0.207977] [<ffffffff81087189>] ? worker_enter_idle+0x2b8/0x32b
> [ 0.207977] [<ffffffff810559d9>] warn_slowpath_common+0xae/0xdb
> [ 0.207977] [<ffffffff81055a2e>] warn_slowpath_null+0x28/0x31
> [ 0.207977] [<ffffffff81087189>] worker_enter_idle+0x2b8/0x32b
> [ 0.207977] [<ffffffff81087222>] start_worker+0x26/0x42
> [ 0.207977] [<ffffffff81c8b261>] init_workqueues+0x2d2/0x59a
> [ 0.207977] [<ffffffff81c8af8f>] ? usermodehelper_init+0x8a/0x8a
> [ 0.207977] [<ffffffff81000284>] do_one_initcall+0xce/0x272
> [ 0.207977] [<ffffffff81c6f650>] kernel_init+0x12e/0x3c1
> [ 0.207977] [<ffffffff814b9b74>] kernel_thread_helper+0x4/0x10
> [ 0.207977] [<ffffffff814b80b0>] ? retint_restore_args+0x13/0x13
> [ 0.207977] [<ffffffff81c6f522>] ? start_kernel+0x737/0x737
> [ 0.207977] [<ffffffff814b9b70>] ? gs_change+0x13/0x13

Yeah, I forgot to flip the WARN_ON_ONCE() condition so that it checks
nr_running before looking at pool->nr_workers. The warning is
spurious. Will post fix soon.

Thanks.

--
tejun

2012-07-12 17:06:24

by Tejun Heo

[permalink] [raw]
Subject: Re: [PATCH 3/6] workqueue: use @pool instead of @gcwq or @cpu where applicable

Hello, Tony.

On Tue, Jul 10, 2012 at 04:30:36PM -0700, Tony Luck wrote:
> On Mon, Jul 9, 2012 at 11:41 AM, Tejun Heo <[email protected]> wrote:
> > @@ -1234,7 +1235,7 @@ static void worker_enter_idle(struct worker *worker)
> > */
> > WARN_ON_ONCE(gcwq->trustee_state == TRUSTEE_DONE &&
> > pool->nr_workers == pool->nr_idle &&
> > - atomic_read(get_gcwq_nr_running(gcwq->cpu)));
> > + atomic_read(get_pool_nr_running(pool)));
> > }
>
> Just had this WARN_ON_ONCE trigger on ia64 booting next-20120710. I
> haven't bisected ... just noticed that two patches in this series tinker
> with lines in this check. next-20120706 didn't generate the WARN.

Sorry about the delay. The warning is spurious. As now there are
multiple pools, nr_running check should be done before
pool->nr_workers check. Will post fix soon.

Thank you.

--
tejun

2012-07-12 17:07:58

by Tejun Heo

[permalink] [raw]
Subject: Re: [PATCH 2/6] workqueue: factor out worker_pool from global_cwq

Hello, Namhyung.

Sorry about the delay.

On Tue, Jul 10, 2012 at 01:48:44PM +0900, Namhyung Kim wrote:
> > + struct list_head idle_list; /* X: list of idle workers */
> > + struct timer_list idle_timer; /* L: worker idle timeout */
> > + struct timer_list mayday_timer; /* L: SOS timer for dworkers */
>
> What is 'dworkers'?

My stupid finger pressing 'd' when I never meant to. :)

> > - /* workers are chained either in the idle_list or busy_hash */
> > - struct list_head idle_list; /* X: list of idle workers */
> > + /* workers are chained either in busy_head or pool idle_list */
>
> s/busy_head/busy_hash/ ?

Will fix.

Thanks.

--
tejun

2012-07-12 21:45:19

by Tejun Heo

[permalink] [raw]
Subject: Re: [PATCH 6/6] workqueue: reimplement WQ_HIGHPRI using a separate worker_pool

Hello, again.

On Thu, Jul 12, 2012 at 10:05:19AM -0700, Tejun Heo wrote:
> On Thu, Jul 12, 2012 at 09:06:48PM +0800, Fengguang Wu wrote:
> > [ 0.207977] WARNING: at /c/kernel-tests/mm/kernel/workqueue.c:1217 worker_enter_idle+0x2b8/0x32b()
> > [ 0.207977] Modules linked in:
> > [ 0.207977] Pid: 1, comm: swapper/0 Not tainted 3.5.0-rc6-08414-g9645fff #15
> > [ 0.207977] Call Trace:
> > [ 0.207977] [<ffffffff81087189>] ? worker_enter_idle+0x2b8/0x32b
> > [ 0.207977] [<ffffffff810559d9>] warn_slowpath_common+0xae/0xdb
> > [ 0.207977] [<ffffffff81055a2e>] warn_slowpath_null+0x28/0x31
> > [ 0.207977] [<ffffffff81087189>] worker_enter_idle+0x2b8/0x32b
> > [ 0.207977] [<ffffffff81087222>] start_worker+0x26/0x42
> > [ 0.207977] [<ffffffff81c8b261>] init_workqueues+0x2d2/0x59a
> > [ 0.207977] [<ffffffff81c8af8f>] ? usermodehelper_init+0x8a/0x8a
> > [ 0.207977] [<ffffffff81000284>] do_one_initcall+0xce/0x272
> > [ 0.207977] [<ffffffff81c6f650>] kernel_init+0x12e/0x3c1
> > [ 0.207977] [<ffffffff814b9b74>] kernel_thread_helper+0x4/0x10
> > [ 0.207977] [<ffffffff814b80b0>] ? retint_restore_args+0x13/0x13
> > [ 0.207977] [<ffffffff81c6f522>] ? start_kernel+0x737/0x737
> > [ 0.207977] [<ffffffff814b9b70>] ? gs_change+0x13/0x13
>
> Yeah, I forgot to flip the WARN_ON_ONCE() condition so that it checks
> nr_running before looking at pool->nr_running. The warning is
> spurious. Will post fix soon.

I was wrong and am now dazed and confused. That's from
init_workqueues() where only cpu0 is running. How the hell did
nr_running manage to become non-zero at that point? Can you please
apply the following patch and report the boot log? Thank you.

---
kernel/workqueue.c | 13 ++++++++++++-
1 file changed, 12 insertions(+), 1 deletion(-)

--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -699,8 +699,10 @@ void wq_worker_waking_up(struct task_str
{
struct worker *worker = kthread_data(task);

- if (!(worker->flags & WORKER_NOT_RUNNING))
+ if (!(worker->flags & WORKER_NOT_RUNNING)) {
+ WARN_ON_ONCE(cpu != worker->pool->gcwq->cpu);
atomic_inc(get_pool_nr_running(worker->pool));
+ }
}

/**
@@ -730,6 +732,7 @@ struct task_struct *wq_worker_sleeping(s

/* this can only happen on the local cpu */
BUG_ON(cpu != raw_smp_processor_id());
+ WARN_ON_ONCE(cpu != worker->pool->gcwq->cpu);

/*
* The counterpart of the following dec_and_test, implied mb,
@@ -3855,6 +3858,10 @@ static int __init init_workqueues(void)
for (i = 0; i < BUSY_WORKER_HASH_SIZE; i++)
INIT_HLIST_HEAD(&gcwq->busy_hash[i]);

+ if (cpu != WORK_CPU_UNBOUND)
+ printk("XXX cpu=%d gcwq=%p base=%p\n", cpu, gcwq,
+ per_cpu_ptr(&pool_nr_running, cpu));
+
for_each_worker_pool(pool, gcwq) {
pool->gcwq = gcwq;
INIT_LIST_HEAD(&pool->worklist);
@@ -3868,6 +3875,10 @@ static int __init init_workqueues(void)
(unsigned long)pool);

ida_init(&pool->worker_ida);
+
+ printk("XXX cpu=%d nr_running=%d @ %p\n", gcwq->cpu,
+ atomic_read(get_pool_nr_running(pool)),
+ get_pool_nr_running(pool));
}

gcwq->trustee_state = TRUSTEE_DONE;

2012-07-12 22:16:31

by Tony Luck

[permalink] [raw]
Subject: Re: [PATCH 6/6] workqueue: reimplement WQ_HIGHPRI using a separate worker_pool

On Thu, Jul 12, 2012 at 2:45 PM, Tejun Heo <[email protected]> wrote:
> I was wrong and am now dazed and confused. That's from
> init_workqueues() where only cpu0 is running. How the hell did
> nr_running manage to become non-zero at that point? Can you please
> apply the following patch and report the boot log? Thank you.

Patch applied on top of next-20120712 (which still has the same problem).

dmesg output attached

-Tony


Attachments:
dmesg.txt (23.02 kB)

2012-07-12 22:32:26

by Tejun Heo

[permalink] [raw]
Subject: Re: [PATCH 6/6] workqueue: reimplement WQ_HIGHPRI using a separate worker_pool

Hello, Tony.

On Thu, Jul 12, 2012 at 03:16:30PM -0700, Tony Luck wrote:
> On Thu, Jul 12, 2012 at 2:45 PM, Tejun Heo <[email protected]> wrote:
> > I was wrong and am now dazed and confused. That's from
> > init_workqueues() where only cpu0 is running. How the hell did
> > nr_running manage to become non-zero at that point? Can you please
> > apply the following patch and report the boot log? Thank you.
>
> Patch applied on top of next-20120712 (which still has the same problem).

Can you please try the following debug patch instead? Yours is
different from Fengguang's.

Thanks a lot!
---
kernel/workqueue.c | 40 ++++++++++++++++++++++++++++++++++++----
1 file changed, 36 insertions(+), 4 deletions(-)

--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -699,8 +699,10 @@ void wq_worker_waking_up(struct task_str
{
struct worker *worker = kthread_data(task);

- if (!(worker->flags & WORKER_NOT_RUNNING))
+ if (!(worker->flags & WORKER_NOT_RUNNING)) {
+ WARN_ON_ONCE(cpu != worker->pool->gcwq->cpu);
atomic_inc(get_pool_nr_running(worker->pool));
+ }
}

/**
@@ -730,6 +732,7 @@ struct task_struct *wq_worker_sleeping(s

/* this can only happen on the local cpu */
BUG_ON(cpu != raw_smp_processor_id());
+ WARN_ON_ONCE(cpu != worker->pool->gcwq->cpu);

/*
* The counterpart of the following dec_and_test, implied mb,
@@ -1212,9 +1215,30 @@ static void worker_enter_idle(struct wor
* between setting %WORKER_ROGUE and zapping nr_running, the
* warning may trigger spuriously. Check iff trustee is idle.
*/
- WARN_ON_ONCE(gcwq->trustee_state == TRUSTEE_DONE &&
- pool->nr_workers == pool->nr_idle &&
- atomic_read(get_pool_nr_running(pool)));
+ if (WARN_ON_ONCE(gcwq->trustee_state == TRUSTEE_DONE &&
+ pool->nr_workers == pool->nr_idle &&
+ atomic_read(get_pool_nr_running(pool)))) {
+ static bool once = false;
+ int cpu;
+
+ if (once)
+ return;
+ once = true;
+
+ printk("XXX nr_running mismatch on gcwq[%d] pool[%ld]\n",
+ gcwq->cpu, pool - gcwq->pools);
+
+ for_each_gcwq_cpu(cpu) {
+ gcwq = get_gcwq(cpu);
+
+ printk("XXX gcwq[%d] flags=0x%x\n", gcwq->cpu, gcwq->flags);
+ for_each_worker_pool(pool, gcwq)
+ printk("XXX gcwq[%d] pool[%ld] nr_workers=%d nr_idle=%d nr_running=%d\n",
+ gcwq->cpu, pool - gcwq->pools,
+ pool->nr_workers, pool->nr_idle,
+ atomic_read(get_pool_nr_running(pool)));
+ }
+ }
}

/**
@@ -3855,6 +3879,10 @@ static int __init init_workqueues(void)
for (i = 0; i < BUSY_WORKER_HASH_SIZE; i++)
INIT_HLIST_HEAD(&gcwq->busy_hash[i]);

+ if (cpu != WORK_CPU_UNBOUND)
+ printk("XXX cpu=%d gcwq=%p base=%p\n", cpu, gcwq,
+ per_cpu_ptr(&pool_nr_running, cpu));
+
for_each_worker_pool(pool, gcwq) {
pool->gcwq = gcwq;
INIT_LIST_HEAD(&pool->worklist);
@@ -3868,6 +3896,10 @@ static int __init init_workqueues(void)
(unsigned long)pool);

ida_init(&pool->worker_ida);
+
+ printk("XXX cpu=%d nr_running=%d @ %p\n", gcwq->cpu,
+ atomic_read(get_pool_nr_running(pool)),
+ get_pool_nr_running(pool));
}

gcwq->trustee_state = TRUSTEE_DONE;

2012-07-12 23:24:48

by Tony Luck

[permalink] [raw]
Subject: Re: [PATCH 6/6] workqueue: reimplement WQ_HIGHPRI using a separate worker_pool

On Thu, Jul 12, 2012 at 3:32 PM, Tejun Heo <[email protected]> wrote:
> Can you please try the following debug patch instead? Yours is
> different from Fengguang's.

New dmesg from next-20120712 + this new patch (instead of previous one)

[Note - I see some XXX traces, but no WARN_ON stack dump this time]

-Tony


Attachments:
dmesg.txt (21.78 kB)

2012-07-12 23:36:45

by Tejun Heo

[permalink] [raw]
Subject: Re: [PATCH 6/6] workqueue: reimplement WQ_HIGHPRI using a separate worker_pool

Hello, Tony.

On Thu, Jul 12, 2012 at 04:24:47PM -0700, Tony Luck wrote:
> On Thu, Jul 12, 2012 at 3:32 PM, Tejun Heo <[email protected]> wrote:
> > Can you please try the following debug patch instead? Yours is
> > different from Fengguang's.
>
> New dmesg from mext-20120712 + this new patch (instead of previous one)
>
> [Note - I see some XXX traces, but no WARN_ON stack dump this time]

The debug patch didn't do anything for the bug itself. I suppose it's
timing dependent and doesn't always happen (it never reproduces here
for some reason). Can you please repeat several times and see whether
the warning can be triggered?

Thank you very much!

--
tejun

2012-07-12 23:46:12

by Tony Luck

[permalink] [raw]
Subject: Re: [PATCH 6/6] workqueue: reimplement WQ_HIGHPRI using a separate worker_pool

On Thu, Jul 12, 2012 at 4:36 PM, Tejun Heo <[email protected]> wrote:
> The debug patch didn't do anything for the bug itself. I suppose it's
> timing dependent and doesn't always happen (it never reproduces here
> for some reason). Can you please repeat several times and see whether
> the warning can be triggered?

Still hasn't come back in three reboots. I have to leave now, can continue
tomorrow.

-Tony

2012-07-13 02:08:06

by Fengguang Wu

[permalink] [raw]
Subject: Re: [PATCH 6/6] workqueue: reimplement WQ_HIGHPRI using a separate worker_pool

On Thu, Jul 12, 2012 at 02:45:14PM -0700, Tejun Heo wrote:
> Hello, again.
>
> On Thu, Jul 12, 2012 at 10:05:19AM -0700, Tejun Heo wrote:
> > On Thu, Jul 12, 2012 at 09:06:48PM +0800, Fengguang Wu wrote:
> > > [ 0.207977] WARNING: at /c/kernel-tests/mm/kernel/workqueue.c:1217 worker_enter_idle+0x2b8/0x32b()
> > > [ 0.207977] Modules linked in:
> > > [ 0.207977] Pid: 1, comm: swapper/0 Not tainted 3.5.0-rc6-08414-g9645fff #15
> > > [ 0.207977] Call Trace:
> > > [ 0.207977] [<ffffffff81087189>] ? worker_enter_idle+0x2b8/0x32b
> > > [ 0.207977] [<ffffffff810559d9>] warn_slowpath_common+0xae/0xdb
> > > [ 0.207977] [<ffffffff81055a2e>] warn_slowpath_null+0x28/0x31
> > > [ 0.207977] [<ffffffff81087189>] worker_enter_idle+0x2b8/0x32b
> > > [ 0.207977] [<ffffffff81087222>] start_worker+0x26/0x42
> > > [ 0.207977] [<ffffffff81c8b261>] init_workqueues+0x2d2/0x59a
> > > [ 0.207977] [<ffffffff81c8af8f>] ? usermodehelper_init+0x8a/0x8a
> > > [ 0.207977] [<ffffffff81000284>] do_one_initcall+0xce/0x272
> > > [ 0.207977] [<ffffffff81c6f650>] kernel_init+0x12e/0x3c1
> > > [ 0.207977] [<ffffffff814b9b74>] kernel_thread_helper+0x4/0x10
> > > [ 0.207977] [<ffffffff814b80b0>] ? retint_restore_args+0x13/0x13
> > > [ 0.207977] [<ffffffff81c6f522>] ? start_kernel+0x737/0x737
> > > [ 0.207977] [<ffffffff814b9b70>] ? gs_change+0x13/0x13
> >
> > Yeah, I forgot to flip the WARN_ON_ONCE() condition so that it checks
> > nr_running before looking at pool->nr_running. The warning is
> > spurious. Will post fix soon.
>
> I was wrong and am now dazed and confused. That's from
> init_workqueues() where only cpu0 is running. How the hell did
> nr_running manage to become non-zero at that point? Can you please
> apply the following patch and report the boot log? Thank you.

Tejun, here is the data I got:

[ 0.165669] Performance Events: unsupported Netburst CPU model 6 no PMU driver, software events only.
[ 0.167001] XXX cpu=0 gcwq=ffff88000dc0cfc0 base=ffff88000dc11e80
[ 0.167989] XXX cpu=0 nr_running=0 @ ffff88000dc11e80
[ 0.168988] XXX cpu=0 nr_running=0 @ ffff88000dc11e88
[ 0.169988] XXX cpu=1 gcwq=ffff88000dd0cfc0 base=ffff88000dd11e80
[ 0.170988] XXX cpu=1 nr_running=0 @ ffff88000dd11e80
[ 0.171987] XXX cpu=1 nr_running=0 @ ffff88000dd11e88
[ 0.172988] XXX cpu=8 nr_running=0 @ ffffffff81d7c430
[ 0.173987] XXX cpu=8 nr_running=12 @ ffffffff81d7c438
[ 0.175416] ------------[ cut here ]------------
[ 0.175981] WARNING: at /c/wfg/linux/kernel/workqueue.c:1220 worker_enter_idle+0x2b8/0x32b()
[ 0.175981] Modules linked in:
[ 0.175981] Pid: 1, comm: swapper/0 Not tainted 3.5.0-rc6-bisect-next-20120712-dirty #102
[ 0.175981] Call Trace:
[ 0.175981] [<ffffffff81087455>] ? worker_enter_idle+0x2b8/0x32b
[ 0.175981] [<ffffffff810559d1>] warn_slowpath_common+0xae/0xdb
[ 0.175981] [<ffffffff81055a26>] warn_slowpath_null+0x28/0x31
[ 0.175981] [<ffffffff81087455>] worker_enter_idle+0x2b8/0x32b
[ 0.175981] [<ffffffff810874ee>] start_worker+0x26/0x42
[ 0.175981] [<ffffffff81c7dc4d>] init_workqueues+0x370/0x638
[ 0.175981] [<ffffffff81c7d8dd>] ? usermodehelper_init+0x8a/0x8a
[ 0.175981] [<ffffffff81000284>] do_one_initcall+0xce/0x272
[ 0.175981] [<ffffffff81c62652>] kernel_init+0x12e/0x3c1
[ 0.175981] [<ffffffff814b6e74>] kernel_thread_helper+0x4/0x10
[ 0.175981] [<ffffffff814b53b0>] ? retint_restore_args+0x13/0x13
[ 0.175981] [<ffffffff81c62524>] ? start_kernel+0x739/0x739
[ 0.175981] [<ffffffff814b6e70>] ? gs_change+0x13/0x13
[ 0.175981] ---[ end trace c22d98677c4d3e37 ]---
[ 0.178091] Testing tracer nop: PASSED

The attached dmesg is not complete because, once it gets the oops message,
my script kills the kvm to save time.

Thanks,
Fengguang


Attachments:
(No filename) (3.75 kB)
dmesg-kvm_bisect-waimea-27649-2012-07-13-08-34-35 (91.67 kB)
Download all attachments

2012-07-13 17:51:33

by Tony Luck

[permalink] [raw]
Subject: Re: [PATCH 6/6] workqueue: reimplement WQ_HIGHPRI using a separate worker_pool

On Thu, Jul 12, 2012 at 4:46 PM, Tony Luck <[email protected]> wrote:
> Still hasn't come back in three reboots. I have to leave now, can continue
> tomorrow.

Tired of rebooting ... seems that it is very hard to hit this with
this patch :-(

-Tony

2012-07-14 03:41:29

by Tejun Heo

[permalink] [raw]
Subject: Re: [PATCH 6/6] workqueue: reimplement WQ_HIGHPRI using a separate worker_pool

Hello,

On Fri, Jul 13, 2012 at 10:08:00AM +0800, Fengguang Wu wrote:
> [ 0.165669] Performance Events: unsupported Netburst CPU model 6 no PMU driver, software events only.
> [ 0.167001] XXX cpu=0 gcwq=ffff88000dc0cfc0 base=ffff88000dc11e80
> [ 0.167989] XXX cpu=0 nr_running=0 @ ffff88000dc11e80
> [ 0.168988] XXX cpu=0 nr_running=0 @ ffff88000dc11e88
> [ 0.169988] XXX cpu=1 gcwq=ffff88000dd0cfc0 base=ffff88000dd11e80
> [ 0.170988] XXX cpu=1 nr_running=0 @ ffff88000dd11e80
> [ 0.171987] XXX cpu=1 nr_running=0 @ ffff88000dd11e88
> [ 0.172988] XXX cpu=8 nr_running=0 @ ffffffff81d7c430
> [ 0.173987] XXX cpu=8 nr_running=12 @ ffffffff81d7c438

Heh, I found it. get_pool_nr_running() stores the nr_running array to
use in a local pointer-to-array and then returns a pointer to the
specific element from there depending on the priority.

atomic_t (*nr_running)[NR_WORKER_POOLS];

/* set @nr_running to the array to use */
return nr_running[worker_pool_pri(pool)];

The [] operator in the return statement is indexing into the arrays
instead of the array elements, so if the index is 1, the above
statement offsets nr_running by sizeof(atomic_t [NR_WORKER_POOLS])
instead of sizeof(atomic_t). This should have been
&(*nr_running)[worker_pool_pri(pool)] instead.

So, highpri ends up dereferencing out-of-bounds and depending on
variable layout, it may see garbage value from the beginning (what you
were seeing) or get interfered afterwards (what Tony was seeing).
This also explains why I didn't see it and Tony can no longer
reproduce it after debug patch.

Will post updated patches.

Thank you.

--
tejun

2012-07-14 03:55:38

by Tejun Heo

[permalink] [raw]
Subject: Re: [PATCH 5/6] workqueue: introduce NR_WORKER_POOLS and for_each_worker_pool()

From 8a0597bf9939d50039d4a6f446db51cf920daaad Mon Sep 17 00:00:00 2001
From: Tejun Heo <[email protected]>
Date: Fri, 13 Jul 2012 20:50:50 -0700

Introduce NR_WORKER_POOLS and for_each_worker_pool() and convert code
paths which need to manipulate all pools in a gcwq to use them.
NR_WORKER_POOLS is currently one and for_each_worker_pool() iterates
over only @gcwq->pool.

Note that nr_running is per-pool property and converted to an array
with NR_WORKER_POOLS elements and renamed to pool_nr_running.

The changes in this patch are mechanical and don't cause any
functional difference. This is to prepare for multiple pools per
gcwq.

v2: nr_running indexing bug in get_pool_nr_running() fixed.

Signed-off-by: Tejun Heo <[email protected]>
Cc: Tony Luck <[email protected]>
Cc: Fengguang Wu <[email protected]>
---
git branch updated accordingly. Thanks!

kernel/workqueue.c | 225 ++++++++++++++++++++++++++++++++++++----------------
1 files changed, 155 insertions(+), 70 deletions(-)

diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 7a98bae..82eee34 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -74,6 +74,8 @@ enum {
TRUSTEE_RELEASE = 3, /* release workers */
TRUSTEE_DONE = 4, /* trustee is done */

+ NR_WORKER_POOLS = 1, /* # worker pools per gcwq */
+
BUSY_WORKER_HASH_ORDER = 6, /* 64 pointers */
BUSY_WORKER_HASH_SIZE = 1 << BUSY_WORKER_HASH_ORDER,
BUSY_WORKER_HASH_MASK = BUSY_WORKER_HASH_SIZE - 1,
@@ -274,6 +276,9 @@ EXPORT_SYMBOL_GPL(system_nrt_freezable_wq);
#define CREATE_TRACE_POINTS
#include <trace/events/workqueue.h>

+#define for_each_worker_pool(pool, gcwq) \
+ for ((pool) = &(gcwq)->pool; (pool); (pool) = NULL)
+
#define for_each_busy_worker(worker, i, pos, gcwq) \
for (i = 0; i < BUSY_WORKER_HASH_SIZE; i++) \
hlist_for_each_entry(worker, pos, &gcwq->busy_hash[i], hentry)
@@ -454,7 +459,7 @@ static bool workqueue_freezing; /* W: have wqs started freezing? */
* try_to_wake_up(). Put it in a separate cacheline.
*/
static DEFINE_PER_CPU(struct global_cwq, global_cwq);
-static DEFINE_PER_CPU_SHARED_ALIGNED(atomic_t, gcwq_nr_running);
+static DEFINE_PER_CPU_SHARED_ALIGNED(atomic_t, pool_nr_running[NR_WORKER_POOLS]);

/*
* Global cpu workqueue and nr_running counter for unbound gcwq. The
@@ -462,7 +467,9 @@ static DEFINE_PER_CPU_SHARED_ALIGNED(atomic_t, gcwq_nr_running);
* workers have WORKER_UNBOUND set.
*/
static struct global_cwq unbound_global_cwq;
-static atomic_t unbound_gcwq_nr_running = ATOMIC_INIT(0); /* always 0 */
+static atomic_t unbound_pool_nr_running[NR_WORKER_POOLS] = {
+ [0 ... NR_WORKER_POOLS - 1] = ATOMIC_INIT(0), /* always 0 */
+};

static int worker_thread(void *__worker);

@@ -477,11 +484,14 @@ static struct global_cwq *get_gcwq(unsigned int cpu)
static atomic_t *get_pool_nr_running(struct worker_pool *pool)
{
int cpu = pool->gcwq->cpu;
+ atomic_t (*nr_running)[NR_WORKER_POOLS];

if (cpu != WORK_CPU_UNBOUND)
- return &per_cpu(gcwq_nr_running, cpu);
+ nr_running = &per_cpu(pool_nr_running, cpu);
else
- return &unbound_gcwq_nr_running;
+ nr_running = &unbound_pool_nr_running;
+
+ return &(*nr_running)[0];
}

static struct cpu_workqueue_struct *get_cwq(unsigned int cpu,
@@ -3345,9 +3355,30 @@ EXPORT_SYMBOL_GPL(work_busy);
__ret1 < 0 ? -1 : 0; \
})

+static bool gcwq_is_managing_workers(struct global_cwq *gcwq)
+{
+ struct worker_pool *pool;
+
+ for_each_worker_pool(pool, gcwq)
+ if (pool->flags & POOL_MANAGING_WORKERS)
+ return true;
+ return false;
+}
+
+static bool gcwq_has_idle_workers(struct global_cwq *gcwq)
+{
+ struct worker_pool *pool;
+
+ for_each_worker_pool(pool, gcwq)
+ if (!list_empty(&pool->idle_list))
+ return true;
+ return false;
+}
+
static int __cpuinit trustee_thread(void *__gcwq)
{
struct global_cwq *gcwq = __gcwq;
+ struct worker_pool *pool;
struct worker *worker;
struct work_struct *work;
struct hlist_node *pos;
@@ -3363,13 +3394,15 @@ static int __cpuinit trustee_thread(void *__gcwq)
* cancelled.
*/
BUG_ON(gcwq->cpu != smp_processor_id());
- rc = trustee_wait_event(!(gcwq->pool.flags & POOL_MANAGING_WORKERS));
+ rc = trustee_wait_event(!gcwq_is_managing_workers(gcwq));
BUG_ON(rc < 0);

- gcwq->pool.flags |= POOL_MANAGING_WORKERS;
+ for_each_worker_pool(pool, gcwq) {
+ pool->flags |= POOL_MANAGING_WORKERS;

- list_for_each_entry(worker, &gcwq->pool.idle_list, entry)
- worker->flags |= WORKER_ROGUE;
+ list_for_each_entry(worker, &pool->idle_list, entry)
+ worker->flags |= WORKER_ROGUE;
+ }

for_each_busy_worker(worker, i, pos, gcwq)
worker->flags |= WORKER_ROGUE;
@@ -3390,10 +3423,12 @@ static int __cpuinit trustee_thread(void *__gcwq)
* keep_working() are always true as long as the worklist is
* not empty.
*/
- atomic_set(get_pool_nr_running(&gcwq->pool), 0);
+ for_each_worker_pool(pool, gcwq)
+ atomic_set(get_pool_nr_running(pool), 0);

spin_unlock_irq(&gcwq->lock);
- del_timer_sync(&gcwq->pool.idle_timer);
+ for_each_worker_pool(pool, gcwq)
+ del_timer_sync(&pool->idle_timer);
spin_lock_irq(&gcwq->lock);

/*
@@ -3415,29 +3450,38 @@ static int __cpuinit trustee_thread(void *__gcwq)
* may be frozen works in freezable cwqs. Don't declare
* completion while frozen.
*/
- while (gcwq->pool.nr_workers != gcwq->pool.nr_idle ||
- gcwq->flags & GCWQ_FREEZING ||
- gcwq->trustee_state == TRUSTEE_IN_CHARGE) {
- int nr_works = 0;
+ while (true) {
+ bool busy = false;

- list_for_each_entry(work, &gcwq->pool.worklist, entry) {
- send_mayday(work);
- nr_works++;
- }
+ for_each_worker_pool(pool, gcwq)
+ busy |= pool->nr_workers != pool->nr_idle;

- list_for_each_entry(worker, &gcwq->pool.idle_list, entry) {
- if (!nr_works--)
- break;
- wake_up_process(worker->task);
- }
+ if (!busy && !(gcwq->flags & GCWQ_FREEZING) &&
+ gcwq->trustee_state != TRUSTEE_IN_CHARGE)
+ break;

- if (need_to_create_worker(&gcwq->pool)) {
- spin_unlock_irq(&gcwq->lock);
- worker = create_worker(&gcwq->pool, false);
- spin_lock_irq(&gcwq->lock);
- if (worker) {
- worker->flags |= WORKER_ROGUE;
- start_worker(worker);
+ for_each_worker_pool(pool, gcwq) {
+ int nr_works = 0;
+
+ list_for_each_entry(work, &pool->worklist, entry) {
+ send_mayday(work);
+ nr_works++;
+ }
+
+ list_for_each_entry(worker, &pool->idle_list, entry) {
+ if (!nr_works--)
+ break;
+ wake_up_process(worker->task);
+ }
+
+ if (need_to_create_worker(pool)) {
+ spin_unlock_irq(&gcwq->lock);
+ worker = create_worker(pool, false);
+ spin_lock_irq(&gcwq->lock);
+ if (worker) {
+ worker->flags |= WORKER_ROGUE;
+ start_worker(worker);
+ }
}
}

@@ -3452,11 +3496,18 @@ static int __cpuinit trustee_thread(void *__gcwq)
* all workers till we're canceled.
*/
do {
- rc = trustee_wait_event(!list_empty(&gcwq->pool.idle_list));
- while (!list_empty(&gcwq->pool.idle_list))
- destroy_worker(list_first_entry(&gcwq->pool.idle_list,
- struct worker, entry));
- } while (gcwq->pool.nr_workers && rc >= 0);
+ rc = trustee_wait_event(gcwq_has_idle_workers(gcwq));
+
+ i = 0;
+ for_each_worker_pool(pool, gcwq) {
+ while (!list_empty(&pool->idle_list)) {
+ worker = list_first_entry(&pool->idle_list,
+ struct worker, entry);
+ destroy_worker(worker);
+ }
+ i |= pool->nr_workers;
+ }
+ } while (i && rc >= 0);

/*
* At this point, either draining has completed and no worker
@@ -3465,7 +3516,8 @@ static int __cpuinit trustee_thread(void *__gcwq)
* Tell the remaining busy ones to rebind once it finishes the
* currently scheduled works by scheduling the rebind_work.
*/
- WARN_ON(!list_empty(&gcwq->pool.idle_list));
+ for_each_worker_pool(pool, gcwq)
+ WARN_ON(!list_empty(&pool->idle_list));

for_each_busy_worker(worker, i, pos, gcwq) {
struct work_struct *rebind_work = &worker->rebind_work;
@@ -3490,7 +3542,8 @@ static int __cpuinit trustee_thread(void *__gcwq)
}

/* relinquish manager role */
- gcwq->pool.flags &= ~POOL_MANAGING_WORKERS;
+ for_each_worker_pool(pool, gcwq)
+ pool->flags &= ~POOL_MANAGING_WORKERS;

/* notify completion */
gcwq->trustee = NULL;
@@ -3532,8 +3585,10 @@ static int __devinit workqueue_cpu_callback(struct notifier_block *nfb,
unsigned int cpu = (unsigned long)hcpu;
struct global_cwq *gcwq = get_gcwq(cpu);
struct task_struct *new_trustee = NULL;
- struct worker *uninitialized_var(new_worker);
+ struct worker *new_workers[NR_WORKER_POOLS] = { };
+ struct worker_pool *pool;
unsigned long flags;
+ int i;

action &= ~CPU_TASKS_FROZEN;

@@ -3546,12 +3601,12 @@ static int __devinit workqueue_cpu_callback(struct notifier_block *nfb,
kthread_bind(new_trustee, cpu);
/* fall through */
case CPU_UP_PREPARE:
- BUG_ON(gcwq->pool.first_idle);
- new_worker = create_worker(&gcwq->pool, false);
- if (!new_worker) {
- if (new_trustee)
- kthread_stop(new_trustee);
- return NOTIFY_BAD;
+ i = 0;
+ for_each_worker_pool(pool, gcwq) {
+ BUG_ON(pool->first_idle);
+ new_workers[i] = create_worker(pool, false);
+ if (!new_workers[i++])
+ goto err_destroy;
}
}

@@ -3568,8 +3623,11 @@ static int __devinit workqueue_cpu_callback(struct notifier_block *nfb,
wait_trustee_state(gcwq, TRUSTEE_IN_CHARGE);
/* fall through */
case CPU_UP_PREPARE:
- BUG_ON(gcwq->pool.first_idle);
- gcwq->pool.first_idle = new_worker;
+ i = 0;
+ for_each_worker_pool(pool, gcwq) {
+ BUG_ON(pool->first_idle);
+ pool->first_idle = new_workers[i++];
+ }
break;

case CPU_DYING:
@@ -3586,8 +3644,10 @@ static int __devinit workqueue_cpu_callback(struct notifier_block *nfb,
gcwq->trustee_state = TRUSTEE_BUTCHER;
/* fall through */
case CPU_UP_CANCELED:
- destroy_worker(gcwq->pool.first_idle);
- gcwq->pool.first_idle = NULL;
+ for_each_worker_pool(pool, gcwq) {
+ destroy_worker(pool->first_idle);
+ pool->first_idle = NULL;
+ }
break;

case CPU_DOWN_FAILED:
@@ -3604,18 +3664,32 @@ static int __devinit workqueue_cpu_callback(struct notifier_block *nfb,
* Put the first_idle in and request a real manager to
* take a look.
*/
- spin_unlock_irq(&gcwq->lock);
- kthread_bind(gcwq->pool.first_idle->task, cpu);
- spin_lock_irq(&gcwq->lock);
- gcwq->pool.flags |= POOL_MANAGE_WORKERS;
- start_worker(gcwq->pool.first_idle);
- gcwq->pool.first_idle = NULL;
+ for_each_worker_pool(pool, gcwq) {
+ spin_unlock_irq(&gcwq->lock);
+ kthread_bind(pool->first_idle->task, cpu);
+ spin_lock_irq(&gcwq->lock);
+ pool->flags |= POOL_MANAGE_WORKERS;
+ start_worker(pool->first_idle);
+ pool->first_idle = NULL;
+ }
break;
}

spin_unlock_irqrestore(&gcwq->lock, flags);

return notifier_from_errno(0);
+
+err_destroy:
+ if (new_trustee)
+ kthread_stop(new_trustee);
+
+ spin_lock_irqsave(&gcwq->lock, flags);
+ for (i = 0; i < NR_WORKER_POOLS; i++)
+ if (new_workers[i])
+ destroy_worker(new_workers[i]);
+ spin_unlock_irqrestore(&gcwq->lock, flags);
+
+ return NOTIFY_BAD;
}

#ifdef CONFIG_SMP
@@ -3774,6 +3848,7 @@ void thaw_workqueues(void)

for_each_gcwq_cpu(cpu) {
struct global_cwq *gcwq = get_gcwq(cpu);
+ struct worker_pool *pool;
struct workqueue_struct *wq;

spin_lock_irq(&gcwq->lock);
@@ -3795,7 +3870,8 @@ void thaw_workqueues(void)
cwq_activate_first_delayed(cwq);
}

- wake_up_worker(&gcwq->pool);
+ for_each_worker_pool(pool, gcwq)
+ wake_up_worker(pool);

spin_unlock_irq(&gcwq->lock);
}
@@ -3816,25 +3892,29 @@ static int __init init_workqueues(void)
/* initialize gcwqs */
for_each_gcwq_cpu(cpu) {
struct global_cwq *gcwq = get_gcwq(cpu);
+ struct worker_pool *pool;

spin_lock_init(&gcwq->lock);
- gcwq->pool.gcwq = gcwq;
- INIT_LIST_HEAD(&gcwq->pool.worklist);
gcwq->cpu = cpu;
gcwq->flags |= GCWQ_DISASSOCIATED;

- INIT_LIST_HEAD(&gcwq->pool.idle_list);
for (i = 0; i < BUSY_WORKER_HASH_SIZE; i++)
INIT_HLIST_HEAD(&gcwq->busy_hash[i]);

- init_timer_deferrable(&gcwq->pool.idle_timer);
- gcwq->pool.idle_timer.function = idle_worker_timeout;
- gcwq->pool.idle_timer.data = (unsigned long)&gcwq->pool;
+ for_each_worker_pool(pool, gcwq) {
+ pool->gcwq = gcwq;
+ INIT_LIST_HEAD(&pool->worklist);
+ INIT_LIST_HEAD(&pool->idle_list);
+
+ init_timer_deferrable(&pool->idle_timer);
+ pool->idle_timer.function = idle_worker_timeout;
+ pool->idle_timer.data = (unsigned long)pool;

- setup_timer(&gcwq->pool.mayday_timer, gcwq_mayday_timeout,
- (unsigned long)&gcwq->pool);
+ setup_timer(&pool->mayday_timer, gcwq_mayday_timeout,
+ (unsigned long)pool);

- ida_init(&gcwq->pool.worker_ida);
+ ida_init(&pool->worker_ida);
+ }

gcwq->trustee_state = TRUSTEE_DONE;
init_waitqueue_head(&gcwq->trustee_wait);
@@ -3843,15 +3923,20 @@ static int __init init_workqueues(void)
/* create the initial worker */
for_each_online_gcwq_cpu(cpu) {
struct global_cwq *gcwq = get_gcwq(cpu);
- struct worker *worker;
+ struct worker_pool *pool;

if (cpu != WORK_CPU_UNBOUND)
gcwq->flags &= ~GCWQ_DISASSOCIATED;
- worker = create_worker(&gcwq->pool, true);
- BUG_ON(!worker);
- spin_lock_irq(&gcwq->lock);
- start_worker(worker);
- spin_unlock_irq(&gcwq->lock);
+
+ for_each_worker_pool(pool, gcwq) {
+ struct worker *worker;
+
+ worker = create_worker(pool, true);
+ BUG_ON(!worker);
+ spin_lock_irq(&gcwq->lock);
+ start_worker(worker);
+ spin_unlock_irq(&gcwq->lock);
+ }
}

system_wq = alloc_workqueue("events", 0, 0);
--
1.7.7.3

2012-07-14 03:56:56

by Tejun Heo

[permalink] [raw]
Subject: [PATCH UPDATED 6/6] workqueue: reimplement WQ_HIGHPRI using a separate worker_pool

From 12f804d130d966f2a094e8037e9f163215d13f23 Mon Sep 17 00:00:00 2001
From: Tejun Heo <[email protected]>
Date: Fri, 13 Jul 2012 20:50:50 -0700

WQ_HIGHPRI was implemented by queueing highpri work items at the head
of the global worklist. Other than queueing at the head, they weren't
handled differently; unfortunately, this could lead to execution
latency of a few seconds on heavily loaded systems.

Now that workqueue code has been updated to deal with multiple
worker_pools per global_cwq, this patch reimplements WQ_HIGHPRI using
a separate worker_pool. NR_WORKER_POOLS is bumped to two and
gcwq->pools[0] is used for normal pri work items and ->pools[1] for
highpri. Highpri workers get a -20 nice level and have an 'H' suffix in
their names. Note that this change increases the number of kworkers
per cpu.

POOL_HIGHPRI_PENDING, pool_determine_ins_pos() and highpri chain
wakeup code in process_one_work() are no longer used and removed.

This allows proper prioritization of highpri work items and removes
high execution latency of highpri work items.

v2: nr_running indexing bug in get_pool_nr_running() fixed.

Signed-off-by: Tejun Heo <[email protected]>
Reported-by: Josh Hunt <[email protected]>
LKML-Reference: <[email protected]>
Cc: Tony Luck <[email protected]>
Cc: Fengguang Wu <[email protected]>
---
git branch updated accordingly. Thanks.

Documentation/workqueue.txt | 103 ++++++++++++++++---------------------------
kernel/workqueue.c | 100 +++++++++++------------------------------
2 files changed, 65 insertions(+), 138 deletions(-)

diff --git a/Documentation/workqueue.txt b/Documentation/workqueue.txt
index a0b577d..a6ab4b6 100644
--- a/Documentation/workqueue.txt
+++ b/Documentation/workqueue.txt
@@ -89,25 +89,28 @@ called thread-pools.

The cmwq design differentiates between the user-facing workqueues that
subsystems and drivers queue work items on and the backend mechanism
-which manages thread-pool and processes the queued work items.
+which manages thread-pools and processes the queued work items.

The backend is called gcwq. There is one gcwq for each possible CPU
-and one gcwq to serve work items queued on unbound workqueues.
+and one gcwq to serve work items queued on unbound workqueues. Each
+gcwq has two thread-pools - one for normal work items and the other
+for high priority ones.

Subsystems and drivers can create and queue work items through special
workqueue API functions as they see fit. They can influence some
aspects of the way the work items are executed by setting flags on the
workqueue they are putting the work item on. These flags include
-things like CPU locality, reentrancy, concurrency limits and more. To
-get a detailed overview refer to the API description of
+things like CPU locality, reentrancy, concurrency limits, priority and
+more. To get a detailed overview refer to the API description of
alloc_workqueue() below.

-When a work item is queued to a workqueue, the target gcwq is
-determined according to the queue parameters and workqueue attributes
-and appended on the shared worklist of the gcwq. For example, unless
-specifically overridden, a work item of a bound workqueue will be
-queued on the worklist of exactly that gcwq that is associated to the
-CPU the issuer is running on.
+When a work item is queued to a workqueue, the target gcwq and
+thread-pool is determined according to the queue parameters and
+workqueue attributes and appended on the shared worklist of the
+thread-pool. For example, unless specifically overridden, a work item
+of a bound workqueue will be queued on the worklist of either normal
+or highpri thread-pool of the gcwq that is associated to the CPU the
+issuer is running on.

For any worker pool implementation, managing the concurrency level
(how many execution contexts are active) is an important issue. cmwq
@@ -115,26 +118,26 @@ tries to keep the concurrency at a minimal but sufficient level.
Minimal to save resources and sufficient in that the system is used at
its full capacity.

-Each gcwq bound to an actual CPU implements concurrency management by
-hooking into the scheduler. The gcwq is notified whenever an active
-worker wakes up or sleeps and keeps track of the number of the
-currently runnable workers. Generally, work items are not expected to
-hog a CPU and consume many cycles. That means maintaining just enough
-concurrency to prevent work processing from stalling should be
-optimal. As long as there are one or more runnable workers on the
-CPU, the gcwq doesn't start execution of a new work, but, when the
-last running worker goes to sleep, it immediately schedules a new
-worker so that the CPU doesn't sit idle while there are pending work
-items. This allows using a minimal number of workers without losing
-execution bandwidth.
+Each thread-pool bound to an actual CPU implements concurrency
+management by hooking into the scheduler. The thread-pool is notified
+whenever an active worker wakes up or sleeps and keeps track of the
+number of the currently runnable workers. Generally, work items are
+not expected to hog a CPU and consume many cycles. That means
+maintaining just enough concurrency to prevent work processing from
+stalling should be optimal. As long as there are one or more runnable
+workers on the CPU, the thread-pool doesn't start execution of a new
+work, but, when the last running worker goes to sleep, it immediately
+schedules a new worker so that the CPU doesn't sit idle while there
+are pending work items. This allows using a minimal number of workers
+without losing execution bandwidth.

Keeping idle workers around doesn't cost other than the memory space
for kthreads, so cmwq holds onto idle ones for a while before killing
them.

For an unbound wq, the above concurrency management doesn't apply and
-the gcwq for the pseudo unbound CPU tries to start executing all work
-items as soon as possible. The responsibility of regulating
+the thread-pools for the pseudo unbound CPU try to start executing all
+work items as soon as possible. The responsibility of regulating
concurrency level is on the users. There is also a flag to mark a
bound wq to ignore the concurrency management. Please refer to the
API section for details.
@@ -205,31 +208,22 @@ resources, scheduled and executed.

WQ_HIGHPRI

- Work items of a highpri wq are queued at the head of the
- worklist of the target gcwq and start execution regardless of
- the current concurrency level. In other words, highpri work
- items will always start execution as soon as execution
- resource is available.
+ Work items of a highpri wq are queued to the highpri
+ thread-pool of the target gcwq. Highpri thread-pools are
+ served by worker threads with elevated nice level.

- Ordering among highpri work items is preserved - a highpri
- work item queued after another highpri work item will start
- execution after the earlier highpri work item starts.
-
- Although highpri work items are not held back by other
- runnable work items, they still contribute to the concurrency
- level. Highpri work items in runnable state will prevent
- non-highpri work items from starting execution.
-
- This flag is meaningless for unbound wq.
+ Note that normal and highpri thread-pools don't interact with
+ each other. Each maintain its separate pool of workers and
+ implements concurrency management among its workers.

WQ_CPU_INTENSIVE

Work items of a CPU intensive wq do not contribute to the
concurrency level. In other words, runnable CPU intensive
- work items will not prevent other work items from starting
- execution. This is useful for bound work items which are
- expected to hog CPU cycles so that their execution is
- regulated by the system scheduler.
+ work items will not prevent other work items in the same
+ thread-pool from starting execution. This is useful for bound
+ work items which are expected to hog CPU cycles so that their
+ execution is regulated by the system scheduler.

Although CPU intensive work items don't contribute to the
concurrency level, start of their executions is still
@@ -239,14 +233,6 @@ resources, scheduled and executed.

This flag is meaningless for unbound wq.

- WQ_HIGHPRI | WQ_CPU_INTENSIVE
-
- This combination makes the wq avoid interaction with
- concurrency management completely and behave as a simple
- per-CPU execution context provider. Work items queued on a
- highpri CPU-intensive wq start execution as soon as resources
- are available and don't affect execution of other work items.
-
@max_active:

@max_active determines the maximum number of execution contexts per
@@ -328,20 +314,7 @@ If @max_active == 2,
35 w2 wakes up and finishes

Now, let's assume w1 and w2 are queued to a different wq q1 which has
-WQ_HIGHPRI set,
-
- TIME IN MSECS EVENT
- 0 w1 and w2 start and burn CPU
- 5 w1 sleeps
- 10 w2 sleeps
- 10 w0 starts and burns CPU
- 15 w0 sleeps
- 15 w1 wakes up and finishes
- 20 w2 wakes up and finishes
- 25 w0 wakes up and burns CPU
- 30 w0 finishes
-
-If q1 has WQ_CPU_INTENSIVE set,
+WQ_CPU_INTENSIVE set,

TIME IN MSECS EVENT
0 w0 starts and burns CPU
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 82eee34..30d014b 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -52,7 +52,6 @@ enum {
/* pool flags */
POOL_MANAGE_WORKERS = 1 << 0, /* need to manage workers */
POOL_MANAGING_WORKERS = 1 << 1, /* managing workers */
- POOL_HIGHPRI_PENDING = 1 << 2, /* highpri works on queue */

/* worker flags */
WORKER_STARTED = 1 << 0, /* started */
@@ -74,7 +73,7 @@ enum {
TRUSTEE_RELEASE = 3, /* release workers */
TRUSTEE_DONE = 4, /* trustee is done */

- NR_WORKER_POOLS = 1, /* # worker pools per gcwq */
+ NR_WORKER_POOLS = 2, /* # worker pools per gcwq */

BUSY_WORKER_HASH_ORDER = 6, /* 64 pointers */
BUSY_WORKER_HASH_SIZE = 1 << BUSY_WORKER_HASH_ORDER,
@@ -95,6 +94,7 @@ enum {
* all cpus. Give -20.
*/
RESCUER_NICE_LEVEL = -20,
+ HIGHPRI_NICE_LEVEL = -20,
};

/*
@@ -174,7 +174,7 @@ struct global_cwq {
struct hlist_head busy_hash[BUSY_WORKER_HASH_SIZE];
/* L: hash of busy workers */

- struct worker_pool pool; /* the worker pools */
+ struct worker_pool pools[2]; /* normal and highpri pools */

struct task_struct *trustee; /* L: for gcwq shutdown */
unsigned int trustee_state; /* L: trustee state */
@@ -277,7 +277,8 @@ EXPORT_SYMBOL_GPL(system_nrt_freezable_wq);
#include <trace/events/workqueue.h>

#define for_each_worker_pool(pool, gcwq) \
- for ((pool) = &(gcwq)->pool; (pool); (pool) = NULL)
+ for ((pool) = &(gcwq)->pools[0]; \
+ (pool) < &(gcwq)->pools[NR_WORKER_POOLS]; (pool)++)

#define for_each_busy_worker(worker, i, pos, gcwq) \
for (i = 0; i < BUSY_WORKER_HASH_SIZE; i++) \
@@ -473,6 +474,11 @@ static atomic_t unbound_pool_nr_running[NR_WORKER_POOLS] = {

static int worker_thread(void *__worker);

+static int worker_pool_pri(struct worker_pool *pool)
+{
+ return pool - pool->gcwq->pools;
+}
+
static struct global_cwq *get_gcwq(unsigned int cpu)
{
if (cpu != WORK_CPU_UNBOUND)
@@ -491,7 +497,7 @@ static atomic_t *get_pool_nr_running(struct worker_pool *pool)
else
nr_running = &unbound_pool_nr_running;

- return &(*nr_running)[0];
+ return &(*nr_running)[worker_pool_pri(pool)];
}

static struct cpu_workqueue_struct *get_cwq(unsigned int cpu,
@@ -588,15 +594,14 @@ static struct global_cwq *get_work_gcwq(struct work_struct *work)
}

/*
- * Policy functions. These define the policies on how the global
- * worker pool is managed. Unless noted otherwise, these functions
- * assume that they're being called with gcwq->lock held.
+ * Policy functions. These define the policies on how the global worker
+ * pools are managed. Unless noted otherwise, these functions assume that
+ * they're being called with gcwq->lock held.
*/

static bool __need_more_worker(struct worker_pool *pool)
{
- return !atomic_read(get_pool_nr_running(pool)) ||
- (pool->flags & POOL_HIGHPRI_PENDING);
+ return !atomic_read(get_pool_nr_running(pool));
}

/*
@@ -623,9 +628,7 @@ static bool keep_working(struct worker_pool *pool)
{
atomic_t *nr_running = get_pool_nr_running(pool);

- return !list_empty(&pool->worklist) &&
- (atomic_read(nr_running) <= 1 ||
- (pool->flags & POOL_HIGHPRI_PENDING));
+ return !list_empty(&pool->worklist) && atomic_read(nr_running) <= 1;
}

/* Do we need a new worker? Called from manager. */
@@ -894,43 +897,6 @@ static struct worker *find_worker_executing_work(struct global_cwq *gcwq,
}

/**
- * pool_determine_ins_pos - find insertion position
- * @pool: pool of interest
- * @cwq: cwq a work is being queued for
- *
- * A work for @cwq is about to be queued on @pool, determine insertion
- * position for the work. If @cwq is for HIGHPRI wq, the work is
- * queued at the head of the queue but in FIFO order with respect to
- * other HIGHPRI works; otherwise, at the end of the queue. This
- * function also sets POOL_HIGHPRI_PENDING flag to hint @pool that
- * there are HIGHPRI works pending.
- *
- * CONTEXT:
- * spin_lock_irq(gcwq->lock).
- *
- * RETURNS:
- * Pointer to inserstion position.
- */
-static inline struct list_head *pool_determine_ins_pos(struct worker_pool *pool,
- struct cpu_workqueue_struct *cwq)
-{
- struct work_struct *twork;
-
- if (likely(!(cwq->wq->flags & WQ_HIGHPRI)))
- return &pool->worklist;
-
- list_for_each_entry(twork, &pool->worklist, entry) {
- struct cpu_workqueue_struct *tcwq = get_work_cwq(twork);
-
- if (!(tcwq->wq->flags & WQ_HIGHPRI))
- break;
- }
-
- pool->flags |= POOL_HIGHPRI_PENDING;
- return &twork->entry;
-}
-
-/**
* insert_work - insert a work into gcwq
* @cwq: cwq @work belongs to
* @work: work to insert
@@ -1070,7 +1036,7 @@ static void __queue_work(unsigned int cpu, struct workqueue_struct *wq,
if (likely(cwq->nr_active < cwq->max_active)) {
trace_workqueue_activate_work(work);
cwq->nr_active++;
- worklist = pool_determine_ins_pos(cwq->pool, cwq);
+ worklist = &cwq->pool->worklist;
} else {
work_flags |= WORK_STRUCT_DELAYED;
worklist = &cwq->delayed_works;
@@ -1387,6 +1353,7 @@ static struct worker *create_worker(struct worker_pool *pool, bool bind)
{
struct global_cwq *gcwq = pool->gcwq;
bool on_unbound_cpu = gcwq->cpu == WORK_CPU_UNBOUND;
+ const char *pri = worker_pool_pri(pool) ? "H" : "";
struct worker *worker = NULL;
int id = -1;

@@ -1408,15 +1375,17 @@ static struct worker *create_worker(struct worker_pool *pool, bool bind)

if (!on_unbound_cpu)
worker->task = kthread_create_on_node(worker_thread,
- worker,
- cpu_to_node(gcwq->cpu),
- "kworker/%u:%d", gcwq->cpu, id);
+ worker, cpu_to_node(gcwq->cpu),
+ "kworker/%u:%d%s", gcwq->cpu, id, pri);
else
worker->task = kthread_create(worker_thread, worker,
- "kworker/u:%d", id);
+ "kworker/u:%d%s", id, pri);
if (IS_ERR(worker->task))
goto fail;

+ if (worker_pool_pri(pool))
+ set_user_nice(worker->task, HIGHPRI_NICE_LEVEL);
+
/*
* A rogue worker will become a regular one if CPU comes
* online later on. Make sure every worker has
@@ -1763,10 +1732,9 @@ static void cwq_activate_first_delayed(struct cpu_workqueue_struct *cwq)
{
struct work_struct *work = list_first_entry(&cwq->delayed_works,
struct work_struct, entry);
- struct list_head *pos = pool_determine_ins_pos(cwq->pool, cwq);

trace_workqueue_activate_work(work);
- move_linked_works(work, pos, NULL);
+ move_linked_works(work, &cwq->pool->worklist, NULL);
__clear_bit(WORK_STRUCT_DELAYED_BIT, work_data_bits(work));
cwq->nr_active++;
}
@@ -1882,21 +1850,6 @@ __acquires(&gcwq->lock)
list_del_init(&work->entry);

/*
- * If HIGHPRI_PENDING, check the next work, and, if HIGHPRI,
- * wake up another worker; otherwise, clear HIGHPRI_PENDING.
- */
- if (unlikely(pool->flags & POOL_HIGHPRI_PENDING)) {
- struct work_struct *nwork = list_first_entry(&pool->worklist,
- struct work_struct, entry);
-
- if (!list_empty(&pool->worklist) &&
- get_work_cwq(nwork)->wq->flags & WQ_HIGHPRI)
- wake_up_worker(pool);
- else
- pool->flags &= ~POOL_HIGHPRI_PENDING;
- }
-
- /*
* CPU intensive works don't participate in concurrency
* management. They're the scheduler's responsibility.
*/
@@ -3049,9 +3002,10 @@ struct workqueue_struct *__alloc_workqueue_key(const char *fmt,
for_each_cwq_cpu(cpu, wq) {
struct cpu_workqueue_struct *cwq = get_cwq(cpu, wq);
struct global_cwq *gcwq = get_gcwq(cpu);
+ int pool_idx = (bool)(flags & WQ_HIGHPRI);

BUG_ON((unsigned long)cwq & WORK_STRUCT_FLAG_MASK);
- cwq->pool = &gcwq->pool;
+ cwq->pool = &gcwq->pools[pool_idx];
cwq->wq = wq;
cwq->flush_color = -1;
cwq->max_active = max_active;
--
1.7.7.3

2012-07-14 04:27:25

by Linus Torvalds

[permalink] [raw]
Subject: Re: [PATCH 5/6] workqueue: introduce NR_WORKER_POOLS and for_each_worker_pool()

Seeing code like this

+ return &(*nr_running)[0];

just makes me go "WTF?"

Why are you taking the address of something you just dereferenced (the
"&...[0]" part).

And you actually do that *twice*, except the inner one is more
complicated. When you assign nr_running, you take the address of it, so
the "*nr_running" is actually just the same kind of odd thing (except
in reverse - you dereference something you just took the
address-of).

Seriously, this to me is a sign of *deeply* confused code. And the
fact that your first version of that code was buggy *EXACTLY* due to
this confusion should have made you take a step back.

As far as I can tell, what you actually want that function to do is:

static atomic_t *get_pool_nr_running(struct worker_pool *pool)
{
int cpu = pool->gcwq->cpu;

if (cpu != WORK_CPU_UNBOUND)
return per_cpu(pool_nr_running, cpu);

return unbound_pool_nr_running;
}

Notice how there isn't an 'address-of' operator anywhere in sight
there. Those things are arrays, they get turned into "atomic_t *"
automatically. And there isn't a single dereference (not a '*', and
not a "[0]" - they are the exact same thing, btw) in sight either.

What am I missing? Are there some new drugs that all the cool kids
chew that I should be trying? Because I really don't think the kinds
of insane "take the address of a dereference" games are a good idea.
They really look to me like somebody is having a really bad drug
experience.

I didn't test the code, btw. I just looked at the patch and went WTF.

Linus

2012-07-14 04:44:38

by Tejun Heo

[permalink] [raw]
Subject: Re: [PATCH 5/6] workqueue: introduce NR_WORKER_POOLS and for_each_worker_pool()

Hello, Linus.

On Fri, Jul 13, 2012 at 09:27:03PM -0700, Linus Torvalds wrote:
> Seeing code like this
>
> + return &(*nr_running)[0];
>
> just makes me go "WTF?"

I was going WTF too. This was the smallest fix and I wanted to make
it minimal because there's another stack of patches on top of it.
Planning to just fold nr_running into worker_pool afterwards, which
will remove the whole function.

> Why are you taking the address of something you just dereferenced (the
> "& [0]" part).

nr_running is atomic_t (*nr_running)[2]. Ignoring the pointer to
array part, it's just returning the address of N'th element of the
array. ARRAY + N == &ARRAY[N].

> And you actually do that *twice*, except the inner one is more
> complicated. When you assign nr_runing, you take the address of it, so
> the "*nr_running" is actually just the same kind of odd thing (except
> in reverse - you take dereference something you just took the
> address-of).
>
> Seriously, this to me is a sign of *deeply* confused code. And the
> fact that your first version of that code was buggy *EXACTLY* due to
> this confusion should have made you take a step back.

Type-wise, I don't think it's confused. Ah okay, you're looking at
the fifth patch in isolation. Up to this point, the index is always 0.
I'm putting it in as a placeholder for the next patch, which makes use
of a non-zero index. This patch is supposed to prepare everything for
multiple pools and thus a non-zero index.

> As far as I can tell, what you actually want that function to do is:
>
> static atomic_t *get_pool_nr_running(struct worker_pool *pool)
> {
> int cpu = pool->gcwq->cpu;
>
> if (cpu != WORK_CPU_UNBOUND)
> return per_cpu(pool_nr_running, cpu);
>
> return unbound_pool_nr_running;
> }

More like the following in the end.

static atomic_t *get_pool_nr_running(struct worker_pool *pool)
{
int cpu = pool->gcwq->cpu;
int is_highpri = pool_is_highpri(pool);

if (cpu != WORK_CPU_UNBOUND)
return &per_cpu(pool_nr_running, cpu)[is_highpri];

return &unbound_pool_nr_running[is_highpri];
}

> I didn't test the code, btw. I just looked at the patch and went WTF.

Eh... yeah, with or without [2], this is WTF. I'll just refresh it
with the above version.

Thanks.

--
tejun

2012-07-14 05:00:10

by Linus Torvalds

[permalink] [raw]
Subject: Re: [PATCH 5/6] workqueue: introduce NR_WORKER_POOLS and for_each_worker_pool()

On Fri, Jul 13, 2012 at 9:44 PM, Tejun Heo <[email protected]> wrote:
>
> nr_running is atomic_t (*nr_running)[2]. Ignoring the pointer to
> array part, it's just returning the address of N'th element of the
> array. ARRAY + N == &ARRAY[N].

None of this matters one whit.

You did "&(x)[0]".

That's insane. It's crazy. It doesn't even matter what "x" is in
between, it's crazy regardless.

It's just a really confused way of saying "x" (*). Except it makes the
code look like an insane monkey on crack got a-hold of your keyboard
when you weren't looking.

And to make it worse, "x" itself was the result of doing "*&y". Which
was probably written by the insane monkey's older brother, Max, who
has been chewing Quaaludes for a few years, and as a result _his_
brain really isn't doing too well either. Even for a monkey. And now
you're letting *him* at your keyboard too?

So you had two separately (but similarly) insane ways of complicating
the code so that it was really obfuscated. When it really just
computed "y" to begin with, it just added all those "x=*&y" and
"&(x)[0]" games around it to make it look complicated.

Linus

(*) Technically, "&(x)[0]" is actually a really confused way of saying
"(x+0)" while making sure that "x" was a valid pointer. It basically
guarantees that if "x" started out as an array, it has now been
demoted to a pointer - but since arrays will be demoted to pointers by
pretty much any subsequent operation except for "sizeof()" and a
couple of other special cases anyway, you can pretty much just say
that "&(x)[0]" is "(x+0)" is "x".

And "*&y" really is exactly the same as "y", except for again some
syntactic checking (ie it is basically an odd way to verify that "y"
is an lvalue, since you cannot do an address-of of a non-lvalue).

2012-07-14 05:07:50

by Tejun Heo

[permalink] [raw]
Subject: Re: [PATCH 5/6] workqueue: introduce NR_WORKER_POOLS and for_each_worker_pool()

Hey, Linus.

On Fri, Jul 13, 2012 at 10:00:10PM -0700, Linus Torvalds wrote:
> On Fri, Jul 13, 2012 at 9:44 PM, Tejun Heo <[email protected]> wrote:
> >
> > nr_running is atomic_t (*nr_running)[2]. Ignoring the pointer to
> > array part, it's just returning the address of N'th element of the
> > array. ARRAY + N == &ARRAY[N].
>
> None of this matters one whit.
>
> You did "&(x)[0]".
>
> That's insane. It's crazy. It doesn't even matter what "x" is in
> between, it's crazy regardless.

Eh, from my previous reply.

| Ah okay, you're looking at the fifth patch in isolation. Up to this
| point, the index is always 0. I'm putting it in as a placeholder for
| the next patch which makes use of non-zero index. This patch is
| supposed to prepare everything for multiple pools and thus non-zero
| index.

The patch is about converting stuff to handle a size-1 array without
introducing any actual behavior change, so that the next patch can bump
the array size and just change the index.

Thanks.

--
tejun

2012-07-14 05:24:18

by Tejun Heo

[permalink] [raw]
Subject: [PATCH UPDATED v3 6/6] workqueue: reimplement WQ_HIGHPRI using a separate worker_pool

From a465fcee388d62d22e390b57c81ca8411f25a1da Mon Sep 17 00:00:00 2001
From: Tejun Heo <[email protected]>
Date: Fri, 13 Jul 2012 22:16:45 -0700

WQ_HIGHPRI was implemented by queueing highpri work items at the head
of the global worklist. Other than queueing at the head, they weren't
handled differently; unfortunately, this could lead to execution
latency of a few seconds on heavily loaded systems.

Now that workqueue code has been updated to deal with multiple
worker_pools per global_cwq, this patch reimplements WQ_HIGHPRI using
a separate worker_pool. NR_WORKER_POOLS is bumped to two and
gcwq->pools[0] is used for normal pri work items and ->pools[1] for
highpri. Highpri workers get -20 nice level and has 'H' suffix in
their names. Note that this change increases the number of kworkers
per cpu.

POOL_HIGHPRI_PENDING, pool_determine_ins_pos() and highpri chain
wakeup code in process_one_work() are no longer used and removed.

This allows proper prioritization of highpri work items and removes
high execution latency of highpri work items.

v2: nr_running indexing bug in get_pool_nr_running() fixed.

v3: Refreshed for the get_pool_nr_running() update in the previous
patch.

Signed-off-by: Tejun Heo <[email protected]>
Reported-by: Josh Hunt <[email protected]>
LKML-Reference: <[email protected]>
Cc: Tony Luck <[email protected]>
Cc: Fengguang Wu <[email protected]>
---
Documentation/workqueue.txt | 103 ++++++++++++++++---------------------------
kernel/workqueue.c | 100 +++++++++++------------------------------
2 files changed, 65 insertions(+), 138 deletions(-)

diff --git a/Documentation/workqueue.txt b/Documentation/workqueue.txt
index a0b577d..a6ab4b6 100644
--- a/Documentation/workqueue.txt
+++ b/Documentation/workqueue.txt
@@ -89,25 +89,28 @@ called thread-pools.

The cmwq design differentiates between the user-facing workqueues that
subsystems and drivers queue work items on and the backend mechanism
-which manages thread-pool and processes the queued work items.
+which manages thread-pools and processes the queued work items.

The backend is called gcwq. There is one gcwq for each possible CPU
-and one gcwq to serve work items queued on unbound workqueues.
+and one gcwq to serve work items queued on unbound workqueues. Each
+gcwq has two thread-pools - one for normal work items and the other
+for high priority ones.

Subsystems and drivers can create and queue work items through special
workqueue API functions as they see fit. They can influence some
aspects of the way the work items are executed by setting flags on the
workqueue they are putting the work item on. These flags include
-things like CPU locality, reentrancy, concurrency limits and more. To
-get a detailed overview refer to the API description of
+things like CPU locality, reentrancy, concurrency limits, priority and
+more. To get a detailed overview refer to the API description of
alloc_workqueue() below.

-When a work item is queued to a workqueue, the target gcwq is
-determined according to the queue parameters and workqueue attributes
-and appended on the shared worklist of the gcwq. For example, unless
-specifically overridden, a work item of a bound workqueue will be
-queued on the worklist of exactly that gcwq that is associated to the
-CPU the issuer is running on.
+When a work item is queued to a workqueue, the target gcwq and
+thread-pool is determined according to the queue parameters and
+workqueue attributes and appended on the shared worklist of the
+thread-pool. For example, unless specifically overridden, a work item
+of a bound workqueue will be queued on the worklist of either normal
+or highpri thread-pool of the gcwq that is associated to the CPU the
+issuer is running on.

For any worker pool implementation, managing the concurrency level
(how many execution contexts are active) is an important issue. cmwq
@@ -115,26 +118,26 @@ tries to keep the concurrency at a minimal but sufficient level.
Minimal to save resources and sufficient in that the system is used at
its full capacity.

-Each gcwq bound to an actual CPU implements concurrency management by
-hooking into the scheduler. The gcwq is notified whenever an active
-worker wakes up or sleeps and keeps track of the number of the
-currently runnable workers. Generally, work items are not expected to
-hog a CPU and consume many cycles. That means maintaining just enough
-concurrency to prevent work processing from stalling should be
-optimal. As long as there are one or more runnable workers on the
-CPU, the gcwq doesn't start execution of a new work, but, when the
-last running worker goes to sleep, it immediately schedules a new
-worker so that the CPU doesn't sit idle while there are pending work
-items. This allows using a minimal number of workers without losing
-execution bandwidth.
+Each thread-pool bound to an actual CPU implements concurrency
+management by hooking into the scheduler. The thread-pool is notified
+whenever an active worker wakes up or sleeps and keeps track of the
+number of the currently runnable workers. Generally, work items are
+not expected to hog a CPU and consume many cycles. That means
+maintaining just enough concurrency to prevent work processing from
+stalling should be optimal. As long as there are one or more runnable
+workers on the CPU, the thread-pool doesn't start execution of a new
+work, but, when the last running worker goes to sleep, it immediately
+schedules a new worker so that the CPU doesn't sit idle while there
+are pending work items. This allows using a minimal number of workers
+without losing execution bandwidth.

Keeping idle workers around doesn't cost other than the memory space
for kthreads, so cmwq holds onto idle ones for a while before killing
them.

For an unbound wq, the above concurrency management doesn't apply and
-the gcwq for the pseudo unbound CPU tries to start executing all work
-items as soon as possible. The responsibility of regulating
+the thread-pools for the pseudo unbound CPU try to start executing all
+work items as soon as possible. The responsibility of regulating
concurrency level is on the users. There is also a flag to mark a
bound wq to ignore the concurrency management. Please refer to the
API section for details.
@@ -205,31 +208,22 @@ resources, scheduled and executed.

WQ_HIGHPRI

- Work items of a highpri wq are queued at the head of the
- worklist of the target gcwq and start execution regardless of
- the current concurrency level. In other words, highpri work
- items will always start execution as soon as execution
- resource is available.
+ Work items of a highpri wq are queued to the highpri
+ thread-pool of the target gcwq. Highpri thread-pools are
+ served by worker threads with elevated nice level.

- Ordering among highpri work items is preserved - a highpri
- work item queued after another highpri work item will start
- execution after the earlier highpri work item starts.
-
- Although highpri work items are not held back by other
- runnable work items, they still contribute to the concurrency
- level. Highpri work items in runnable state will prevent
- non-highpri work items from starting execution.
-
- This flag is meaningless for unbound wq.
+ Note that normal and highpri thread-pools don't interact with
+ each other. Each maintain its separate pool of workers and
+ implements concurrency management among its workers.

WQ_CPU_INTENSIVE

Work items of a CPU intensive wq do not contribute to the
concurrency level. In other words, runnable CPU intensive
- work items will not prevent other work items from starting
- execution. This is useful for bound work items which are
- expected to hog CPU cycles so that their execution is
- regulated by the system scheduler.
+ work items will not prevent other work items in the same
+ thread-pool from starting execution. This is useful for bound
+ work items which are expected to hog CPU cycles so that their
+ execution is regulated by the system scheduler.

Although CPU intensive work items don't contribute to the
concurrency level, start of their executions is still
@@ -239,14 +233,6 @@ resources, scheduled and executed.

This flag is meaningless for unbound wq.

- WQ_HIGHPRI | WQ_CPU_INTENSIVE
-
- This combination makes the wq avoid interaction with
- concurrency management completely and behave as a simple
- per-CPU execution context provider. Work items queued on a
- highpri CPU-intensive wq start execution as soon as resources
- are available and don't affect execution of other work items.
-
@max_active:

@max_active determines the maximum number of execution contexts per
@@ -328,20 +314,7 @@ If @max_active == 2,
35 w2 wakes up and finishes

Now, let's assume w1 and w2 are queued to a different wq q1 which has
-WQ_HIGHPRI set,
-
- TIME IN MSECS EVENT
- 0 w1 and w2 start and burn CPU
- 5 w1 sleeps
- 10 w2 sleeps
- 10 w0 starts and burns CPU
- 15 w0 sleeps
- 15 w1 wakes up and finishes
- 20 w2 wakes up and finishes
- 25 w0 wakes up and burns CPU
- 30 w0 finishes
-
-If q1 has WQ_CPU_INTENSIVE set,
+WQ_CPU_INTENSIVE set,

TIME IN MSECS EVENT
0 w0 starts and burns CPU
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index b0daaea..4fa9e35 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -52,7 +52,6 @@ enum {
/* pool flags */
POOL_MANAGE_WORKERS = 1 << 0, /* need to manage workers */
POOL_MANAGING_WORKERS = 1 << 1, /* managing workers */
- POOL_HIGHPRI_PENDING = 1 << 2, /* highpri works on queue */

/* worker flags */
WORKER_STARTED = 1 << 0, /* started */
@@ -74,7 +73,7 @@ enum {
TRUSTEE_RELEASE = 3, /* release workers */
TRUSTEE_DONE = 4, /* trustee is done */

- NR_WORKER_POOLS = 1, /* # worker pools per gcwq */
+ NR_WORKER_POOLS = 2, /* # worker pools per gcwq */

BUSY_WORKER_HASH_ORDER = 6, /* 64 pointers */
BUSY_WORKER_HASH_SIZE = 1 << BUSY_WORKER_HASH_ORDER,
@@ -95,6 +94,7 @@ enum {
* all cpus. Give -20.
*/
RESCUER_NICE_LEVEL = -20,
+ HIGHPRI_NICE_LEVEL = -20,
};

/*
@@ -174,7 +174,7 @@ struct global_cwq {
struct hlist_head busy_hash[BUSY_WORKER_HASH_SIZE];
/* L: hash of busy workers */

- struct worker_pool pool; /* the worker pools */
+ struct worker_pool pools[2]; /* normal and highpri pools */

struct task_struct *trustee; /* L: for gcwq shutdown */
unsigned int trustee_state; /* L: trustee state */
@@ -277,7 +277,8 @@ EXPORT_SYMBOL_GPL(system_nrt_freezable_wq);
#include <trace/events/workqueue.h>

#define for_each_worker_pool(pool, gcwq) \
- for ((pool) = &(gcwq)->pool; (pool); (pool) = NULL)
+ for ((pool) = &(gcwq)->pools[0]; \
+ (pool) < &(gcwq)->pools[NR_WORKER_POOLS]; (pool)++)

#define for_each_busy_worker(worker, i, pos, gcwq) \
for (i = 0; i < BUSY_WORKER_HASH_SIZE; i++) \
@@ -473,6 +474,11 @@ static atomic_t unbound_pool_nr_running[NR_WORKER_POOLS] = {

static int worker_thread(void *__worker);

+static int worker_pool_pri(struct worker_pool *pool)
+{
+ return pool - pool->gcwq->pools;
+}
+
static struct global_cwq *get_gcwq(unsigned int cpu)
{
if (cpu != WORK_CPU_UNBOUND)
@@ -484,7 +490,7 @@ static struct global_cwq *get_gcwq(unsigned int cpu)
static atomic_t *get_pool_nr_running(struct worker_pool *pool)
{
int cpu = pool->gcwq->cpu;
- int idx = 0;
+ int idx = worker_pool_pri(pool);

if (cpu != WORK_CPU_UNBOUND)
return &per_cpu(pool_nr_running, cpu)[idx];
@@ -586,15 +592,14 @@ static struct global_cwq *get_work_gcwq(struct work_struct *work)
}

/*
- * Policy functions. These define the policies on how the global
- * worker pool is managed. Unless noted otherwise, these functions
- * assume that they're being called with gcwq->lock held.
+ * Policy functions. These define the policies on how the global worker
+ * pools are managed. Unless noted otherwise, these functions assume that
+ * they're being called with gcwq->lock held.
*/

static bool __need_more_worker(struct worker_pool *pool)
{
- return !atomic_read(get_pool_nr_running(pool)) ||
- (pool->flags & POOL_HIGHPRI_PENDING);
+ return !atomic_read(get_pool_nr_running(pool));
}

/*
@@ -621,9 +626,7 @@ static bool keep_working(struct worker_pool *pool)
{
atomic_t *nr_running = get_pool_nr_running(pool);

- return !list_empty(&pool->worklist) &&
- (atomic_read(nr_running) <= 1 ||
- (pool->flags & POOL_HIGHPRI_PENDING));
+ return !list_empty(&pool->worklist) && atomic_read(nr_running) <= 1;
}

/* Do we need a new worker? Called from manager. */
@@ -892,43 +895,6 @@ static struct worker *find_worker_executing_work(struct global_cwq *gcwq,
}

/**
- * pool_determine_ins_pos - find insertion position
- * @pool: pool of interest
- * @cwq: cwq a work is being queued for
- *
- * A work for @cwq is about to be queued on @pool, determine insertion
- * position for the work. If @cwq is for HIGHPRI wq, the work is
- * queued at the head of the queue but in FIFO order with respect to
- * other HIGHPRI works; otherwise, at the end of the queue. This
- * function also sets POOL_HIGHPRI_PENDING flag to hint @pool that
- * there are HIGHPRI works pending.
- *
- * CONTEXT:
- * spin_lock_irq(gcwq->lock).
- *
- * RETURNS:
- * Pointer to inserstion position.
- */
-static inline struct list_head *pool_determine_ins_pos(struct worker_pool *pool,
- struct cpu_workqueue_struct *cwq)
-{
- struct work_struct *twork;
-
- if (likely(!(cwq->wq->flags & WQ_HIGHPRI)))
- return &pool->worklist;
-
- list_for_each_entry(twork, &pool->worklist, entry) {
- struct cpu_workqueue_struct *tcwq = get_work_cwq(twork);
-
- if (!(tcwq->wq->flags & WQ_HIGHPRI))
- break;
- }
-
- pool->flags |= POOL_HIGHPRI_PENDING;
- return &twork->entry;
-}
-
-/**
* insert_work - insert a work into gcwq
* @cwq: cwq @work belongs to
* @work: work to insert
@@ -1068,7 +1034,7 @@ static void __queue_work(unsigned int cpu, struct workqueue_struct *wq,
if (likely(cwq->nr_active < cwq->max_active)) {
trace_workqueue_activate_work(work);
cwq->nr_active++;
- worklist = pool_determine_ins_pos(cwq->pool, cwq);
+ worklist = &cwq->pool->worklist;
} else {
work_flags |= WORK_STRUCT_DELAYED;
worklist = &cwq->delayed_works;
@@ -1385,6 +1351,7 @@ static struct worker *create_worker(struct worker_pool *pool, bool bind)
{
struct global_cwq *gcwq = pool->gcwq;
bool on_unbound_cpu = gcwq->cpu == WORK_CPU_UNBOUND;
+ const char *pri = worker_pool_pri(pool) ? "H" : "";
struct worker *worker = NULL;
int id = -1;

@@ -1406,15 +1373,17 @@ static struct worker *create_worker(struct worker_pool *pool, bool bind)

if (!on_unbound_cpu)
worker->task = kthread_create_on_node(worker_thread,
- worker,
- cpu_to_node(gcwq->cpu),
- "kworker/%u:%d", gcwq->cpu, id);
+ worker, cpu_to_node(gcwq->cpu),
+ "kworker/%u:%d%s", gcwq->cpu, id, pri);
else
worker->task = kthread_create(worker_thread, worker,
- "kworker/u:%d", id);
+ "kworker/u:%d%s", id, pri);
if (IS_ERR(worker->task))
goto fail;

+ if (worker_pool_pri(pool))
+ set_user_nice(worker->task, HIGHPRI_NICE_LEVEL);
+
/*
* A rogue worker will become a regular one if CPU comes
* online later on. Make sure every worker has
@@ -1761,10 +1730,9 @@ static void cwq_activate_first_delayed(struct cpu_workqueue_struct *cwq)
{
struct work_struct *work = list_first_entry(&cwq->delayed_works,
struct work_struct, entry);
- struct list_head *pos = pool_determine_ins_pos(cwq->pool, cwq);

trace_workqueue_activate_work(work);
- move_linked_works(work, pos, NULL);
+ move_linked_works(work, &cwq->pool->worklist, NULL);
__clear_bit(WORK_STRUCT_DELAYED_BIT, work_data_bits(work));
cwq->nr_active++;
}
@@ -1880,21 +1848,6 @@ __acquires(&gcwq->lock)
list_del_init(&work->entry);

/*
- * If HIGHPRI_PENDING, check the next work, and, if HIGHPRI,
- * wake up another worker; otherwise, clear HIGHPRI_PENDING.
- */
- if (unlikely(pool->flags & POOL_HIGHPRI_PENDING)) {
- struct work_struct *nwork = list_first_entry(&pool->worklist,
- struct work_struct, entry);
-
- if (!list_empty(&pool->worklist) &&
- get_work_cwq(nwork)->wq->flags & WQ_HIGHPRI)
- wake_up_worker(pool);
- else
- pool->flags &= ~POOL_HIGHPRI_PENDING;
- }
-
- /*
* CPU intensive works don't participate in concurrency
* management. They're the scheduler's responsibility.
*/
@@ -3047,9 +3000,10 @@ struct workqueue_struct *__alloc_workqueue_key(const char *fmt,
for_each_cwq_cpu(cpu, wq) {
struct cpu_workqueue_struct *cwq = get_cwq(cpu, wq);
struct global_cwq *gcwq = get_gcwq(cpu);
+ int pool_idx = (bool)(flags & WQ_HIGHPRI);

BUG_ON((unsigned long)cwq & WORK_STRUCT_FLAG_MASK);
- cwq->pool = &gcwq->pool;
+ cwq->pool = &gcwq->pools[pool_idx];
cwq->wq = wq;
cwq->flush_color = -1;
cwq->max_active = max_active;
--
1.7.7.3

2012-07-14 08:18:17

by Fengguang Wu

Subject: Re: [PATCH UPDATED 6/6] workqueue: reimplement WQ_HIGHPRI using a separate worker_pool

> v2: nr_running indexing bug in get_pool_nr_running() fixed.
>
> Signed-off-by: Tejun Heo <[email protected]>
> Reported-by: Josh Hunt <[email protected]>
> LKML-Reference: <[email protected]>
> Cc: Tony Luck <[email protected]>
> Cc: Fengguang Wu <[email protected]>
> ---
> git branch updated accordingly. Thanks.

It works now, thank you very much!

Tested-by: Fengguang Wu <[email protected]>