2006-05-17 22:12:40

by Chris Wright

Subject: [PATCH 13/22] [PATCH] [BLOCK] limit request_fn recursion

-stable review patch. If anyone has any objections, please let us know.
------------------

When the driver asks for a requeue, don't recurse back into the driver
even if the unplug threshold is met. This is both silly from a logical
point of view (requeues typically happen because of a driver/hardware
shortage) and dangerous, since we could hit an endless request_fn
-> requeue -> unplug -> request_fn loop and crash on stack overrun.

Also limit blk_run_queue() to one level of recursion, similar to how
blk_start_queue() works.
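
For context, the blk_run_queue() hunk below implements a one-level
re-entry guard. The following is a minimal userspace sketch of that
pattern only, not kernel code: queue_running, deferred, run_queue()
and do_work() are illustrative stand-ins for QUEUE_FLAG_REENTER, the
request_fn and the kblockd deferral used in the actual patch.

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

/* Illustrative stand-ins for the real queue state (hypothetical). */
static atomic_flag queue_running = ATOMIC_FLAG_INIT;
static bool deferred;

static void do_work(int depth);

/*
 * Allow only one level of recursion: a nested call records that work
 * was deferred (analogous to plugging the queue and kicking kblockd)
 * instead of calling back into do_work().
 */
static void run_queue(int depth)
{
	if (!atomic_flag_test_and_set(&queue_running)) {
		do_work(depth);
		atomic_flag_clear(&queue_running);
	} else {
		deferred = true;	/* punt instead of recursing */
	}
}

static void do_work(int depth)
{
	printf("request_fn at depth %d\n", depth);
	if (depth < 3)
		run_queue(depth + 1);	/* would recurse without the guard */
}

int main(void)
{
	run_queue(0);
	if (deferred)
		printf("nested run deferred, handled later by a helper\n");
	return 0;
}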

This patch fixed a real problem with SLES10 and lpfc, and the bug could
hit any SCSI LLD that returns non-zero from its ->queuecommand() handler.

Signed-off-by: Jens Axboe <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
Signed-off-by: Chris Wright <[email protected]>

---
block/elevator.c | 8 +++++++-
block/ll_rw_blk.c | 17 +++++++++++++++--
2 files changed, 22 insertions(+), 3 deletions(-)

--- linux-2.6.16.16.orig/block/elevator.c
+++ linux-2.6.16.16/block/elevator.c
@@ -314,6 +314,7 @@ void elv_insert(request_queue_t *q, stru
{
struct list_head *pos;
unsigned ordseq;
+ int unplug_it = 1;

rq->q = q;

@@ -378,6 +379,11 @@ void elv_insert(request_queue_t *q, stru
}

list_add_tail(&rq->queuelist, pos);
+ /*
+ * most requeues happen because of a busy condition, don't
+ * force unplug of the queue for that case.
+ */
+ unplug_it = 0;
break;

default:
@@ -386,7 +392,7 @@ void elv_insert(request_queue_t *q, stru
BUG();
}

- if (blk_queue_plugged(q)) {
+ if (unplug_it && blk_queue_plugged(q)) {
int nrq = q->rq.count[READ] + q->rq.count[WRITE]
- q->in_flight;

--- linux-2.6.16.16.orig/block/ll_rw_blk.c
+++ linux-2.6.16.16/block/ll_rw_blk.c
@@ -1719,8 +1719,21 @@ void blk_run_queue(struct request_queue

spin_lock_irqsave(q->queue_lock, flags);
blk_remove_plug(q);
- if (!elv_queue_empty(q))
- q->request_fn(q);
+
+ /*
+ * Only recurse once to avoid overrunning the stack, let the unplug
+ * handling reinvoke the handler shortly if we already got there.
+ */
+ if (!elv_queue_empty(q)) {
+ if (!test_and_set_bit(QUEUE_FLAG_REENTER, &q->queue_flags)) {
+ q->request_fn(q);
+ clear_bit(QUEUE_FLAG_REENTER, &q->queue_flags);
+ } else {
+ blk_plug_device(q);
+ kblockd_schedule_work(&q->unplug_work);
+ }
+ }
+
spin_unlock_irqrestore(q->queue_lock, flags);
}
EXPORT_SYMBOL(blk_run_queue);

--