2009-03-13 05:03:50

by Tejun Heo

Subject: [GIT PATCH] block: cleanup patches


Hello,

This patchset is available in the following git tree.

git://git.kernel.org/pub/scm/linux/kernel/git/tj/misc.git block-cleanup

This patchset contains the following 14 cleanup patches.

0001-block-merge-blk_invoke_request_fn-into-__blk_run_.patch
0002-block-kill-blk_start_queueing.patch
0003-block-don-t-set-REQ_NOMERGE-unnecessarily.patch
0004-block-cleanup-REQ_SOFTBARRIER-usages.patch
0005-block-clean-up-misc-stuff-after-block-layer-timeout.patch
0006-block-reorder-request-completion-functions.patch
0007-block-reorganize-request-fetching-functions.patch
0008-block-kill-blk_end_request_callback.patch
0009-block-clean-up-request-completion-API.patch
0010-block-move-rq-start_time-initialization-to-blk_rq_.patch
0011-block-implement-and-use-__-blk_end_request_all.patch
0012-block-kill-end_request.patch
0013-ubd-simplify-block-request-completion.patch
0014-block-clean-up-unnecessary-stuff-from-block-drivers.patch

* 0001-0007: cleanups in block layer proper
* 0008 : kill blk_end_request_callback()
* 0009 : further completion cleanup in block layer proper after 0008
* 0010 : rq->start_time is always initialized
* 0011 : [__]blk_end_request_all() added and used
* 0012 : kill end_request()
* 0013-0014: lld cleanup

It's on top of the current linux-2.6-block/for-2.6.30[1] and comes
with the following nice diffstat. :-)

arch/arm/plat-omap/mailbox.c | 11
arch/um/drivers/ubd_kern.c | 23 -
block/as-iosched.c | 6
block/blk-barrier.c | 9
block/blk-core.c | 481 ++++++++++++++----------------------
block/blk-exec.c | 1
block/blk-timeout.c | 22 -
block/blk.h | 37 ++
block/cfq-iosched.c | 10
block/elevator.c | 137 ----------
drivers/block/amiflop.c | 15 -
drivers/block/ataflop.c | 18 -
drivers/block/cciss.c | 3
drivers/block/cpqarray.c | 3
drivers/block/hd.c | 14 -
drivers/block/paride/pcd.c | 12
drivers/block/paride/pd.c | 5
drivers/block/paride/pf.c | 28 +-
drivers/block/ps3disk.c | 6
drivers/block/swim3.c | 26 -
drivers/block/sx8.c | 3
drivers/block/virtio_blk.c | 2
drivers/block/xd.c | 22 -
drivers/block/xen-blkfront.c | 6
drivers/block/xsysace.c | 4
drivers/block/z2ram.c | 4
drivers/cdrom/gdrom.c | 8
drivers/cdrom/viocd.c | 25 -
drivers/ide/ide-cd.c | 30 --
drivers/ide/ide-disk.c | 1
drivers/ide/ide-io.c | 4
drivers/ide/ide-ioctls.c | 1
drivers/ide/ide-park.c | 7
drivers/ide/ide-pm.c | 3
drivers/memstick/core/mspro_block.c | 2
drivers/message/i2o/i2o_block.c | 2
drivers/mtd/mtd_blkdevs.c | 22 -
drivers/s390/block/dasd.c | 17 -
drivers/s390/char/tape_block.c | 15 -
drivers/sbus/char/jsflash.c | 8
drivers/scsi/scsi_lib.c | 2
include/linux/blkdev.h | 139 ++++++++--
42 files changed, 488 insertions(+), 706 deletions(-)

Thanks.

--
tejun

[1] 6319ec3182b26abecd2fa9ab97c945f0161d4e36


2009-03-13 05:03:33

by Tejun Heo

Subject: [PATCH 01/14] block: merge blk_invoke_request_fn() into __blk_run_queue()

Impact: merge two subtly different internal functions

__blk_run_queue() wraps blk_invoke_request_fn() such that it
additionally removes the plug and bails out early if the queue is
empty. Both extra operations have their own pending mechanisms and
don't cause any harm correctness-wise when they are done
superfluously.

As blk_start_queue() is the only user of blk_invoke_request_fn(),
there isn't much reason to keep both functions around. Merge
blk_invoke_request_fn() into __blk_run_queue() and make
blk_start_queue() use __blk_run_queue() instead.

Signed-off-by: Tejun Heo <[email protected]>
---
block/blk-core.c | 35 ++++++++++++++---------------------
1 files changed, 14 insertions(+), 21 deletions(-)

diff --git a/block/blk-core.c b/block/blk-core.c
index 7b63c9b..95dc76f 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -333,24 +333,6 @@ void blk_unplug(struct request_queue *q)
}
EXPORT_SYMBOL(blk_unplug);

-static void blk_invoke_request_fn(struct request_queue *q)
-{
- if (unlikely(blk_queue_stopped(q)))
- return;
-
- /*
- * one level of recursion is ok and is much faster than kicking
- * the unplug handling
- */
- if (!queue_flag_test_and_set(QUEUE_FLAG_REENTER, q)) {
- q->request_fn(q);
- queue_flag_clear(QUEUE_FLAG_REENTER, q);
- } else {
- queue_flag_set(QUEUE_FLAG_PLUGGED, q);
- kblockd_schedule_work(q, &q->unplug_work);
- }
-}
-
/**
* blk_start_queue - restart a previously stopped queue
* @q: The &struct request_queue in question
@@ -365,7 +347,7 @@ void blk_start_queue(struct request_queue *q)
WARN_ON(!irqs_disabled());

queue_flag_clear(QUEUE_FLAG_STOPPED, q);
- blk_invoke_request_fn(q);
+ __blk_run_queue(q);
}
EXPORT_SYMBOL(blk_start_queue);

@@ -425,12 +407,23 @@ void __blk_run_queue(struct request_queue *q)
{
blk_remove_plug(q);

+ if (unlikely(blk_queue_stopped(q)))
+ return;
+
+ if (elv_queue_empty(q))
+ return;
+
/*
* Only recurse once to avoid overrunning the stack, let the unplug
* handling reinvoke the handler shortly if we already got there.
*/
- if (!elv_queue_empty(q))
- blk_invoke_request_fn(q);
+ if (!queue_flag_test_and_set(QUEUE_FLAG_REENTER, q)) {
+ q->request_fn(q);
+ queue_flag_clear(QUEUE_FLAG_REENTER, q);
+ } else {
+ queue_flag_set(QUEUE_FLAG_PLUGGED, q);
+ kblockd_schedule_work(q, &q->unplug_work);
+ }
}
EXPORT_SYMBOL(__blk_run_queue);

--
1.6.0.2

2009-03-13 05:04:12

by Tejun Heo

Subject: [PATCH 02/14] block: kill blk_start_queueing()

Impact: removal of mostly duplicate interface function

blk_start_queueing() is identical to __blk_run_queue() except that it
doesn't check for recursion. None of the current users depends on
blk_start_queueing() running request_fn directly. Replace usages of
blk_start_queueing() with [__]blk_run_queue() and kill it.
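
The conversion also simplifies the callers: blk_start_queueing() had
to be called with the queue lock held and interrupts disabled, while
blk_run_queue() takes the lock itself. A sketch of the typical
before/after pattern, mirroring the hunks below:

    /* before: caller wraps blk_start_queueing() in the queue lock */
    spin_lock_irqsave(q->queue_lock, flags);
    blk_start_queueing(q);
    spin_unlock_irqrestore(q->queue_lock, flags);

    /* after: blk_run_queue() handles the locking internally */
    blk_run_queue(q);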

Signed-off-by: Tejun Heo <[email protected]>
---
block/as-iosched.c | 6 +-----
block/blk-core.c | 28 ++--------------------------
block/cfq-iosched.c | 10 +++-------
block/elevator.c | 7 +++----
drivers/ide/ide-park.c | 7 ++-----
include/linux/blkdev.h | 1 -
6 files changed, 11 insertions(+), 48 deletions(-)

diff --git a/block/as-iosched.c b/block/as-iosched.c
index 631f6f4..da8e272 100644
--- a/block/as-iosched.c
+++ b/block/as-iosched.c
@@ -1315,12 +1315,8 @@ static void as_merged_requests(struct request_queue *q, struct request *req,
static void as_work_handler(struct work_struct *work)
{
struct as_data *ad = container_of(work, struct as_data, antic_work);
- struct request_queue *q = ad->q;
- unsigned long flags;

- spin_lock_irqsave(q->queue_lock, flags);
- blk_start_queueing(q);
- spin_unlock_irqrestore(q->queue_lock, flags);
+ blk_run_queue(ad->q);
}

static int as_may_queue(struct request_queue *q, int rw)
diff --git a/block/blk-core.c b/block/blk-core.c
index 95dc76f..7c2d836 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -433,9 +433,7 @@ EXPORT_SYMBOL(__blk_run_queue);
*
* Description:
* Invoke request handling on this queue, if it has pending work to do.
- * May be used to restart queueing when a request has completed. Also
- * See @blk_start_queueing.
- *
+ * May be used to restart queueing when a request has completed.
*/
void blk_run_queue(struct request_queue *q)
{
@@ -893,28 +891,6 @@ struct request *blk_get_request(struct request_queue *q, int rw, gfp_t gfp_mask)
EXPORT_SYMBOL(blk_get_request);

/**
- * blk_start_queueing - initiate dispatch of requests to device
- * @q: request queue to kick into gear
- *
- * This is basically a helper to remove the need to know whether a queue
- * is plugged or not if someone just wants to initiate dispatch of requests
- * for this queue. Should be used to start queueing on a device outside
- * of ->request_fn() context. Also see @blk_run_queue.
- *
- * The queue lock must be held with interrupts disabled.
- */
-void blk_start_queueing(struct request_queue *q)
-{
- if (!blk_queue_plugged(q)) {
- if (unlikely(blk_queue_stopped(q)))
- return;
- q->request_fn(q);
- } else
- __generic_unplug_device(q);
-}
-EXPORT_SYMBOL(blk_start_queueing);
-
-/**
* blk_requeue_request - put a request back on queue
* @q: request queue where request should be inserted
* @rq: request to be inserted
@@ -982,7 +958,7 @@ void blk_insert_request(struct request_queue *q, struct request *rq,

drive_stat_acct(rq, 1);
__elv_add_request(q, rq, where, 0);
- blk_start_queueing(q);
+ __blk_run_queue(q);
spin_unlock_irqrestore(q->queue_lock, flags);
}
EXPORT_SYMBOL(blk_insert_request);
diff --git a/block/cfq-iosched.c b/block/cfq-iosched.c
index 664ebfd..8190db8 100644
--- a/block/cfq-iosched.c
+++ b/block/cfq-iosched.c
@@ -1900,7 +1900,7 @@ cfq_rq_enqueued(struct cfq_data *cfqd, struct cfq_queue *cfqq,
if (cfq_cfqq_wait_request(cfqq)) {
cfq_mark_cfqq_must_dispatch(cfqq);
del_timer(&cfqd->idle_slice_timer);
- blk_start_queueing(cfqd->queue);
+ __blk_run_queue(cfqd->queue);
}
} else if (cfq_should_preempt(cfqd, cfqq, rq)) {
/*
@@ -1911,7 +1911,7 @@ cfq_rq_enqueued(struct cfq_data *cfqd, struct cfq_queue *cfqq,
*/
cfq_preempt_queue(cfqd, cfqq);
cfq_mark_cfqq_must_dispatch(cfqq);
- blk_start_queueing(cfqd->queue);
+ __blk_run_queue(cfqd->queue);
}
}

@@ -2143,12 +2143,8 @@ static void cfq_kick_queue(struct work_struct *work)
{
struct cfq_data *cfqd =
container_of(work, struct cfq_data, unplug_work);
- struct request_queue *q = cfqd->queue;
- unsigned long flags;

- spin_lock_irqsave(q->queue_lock, flags);
- blk_start_queueing(q);
- spin_unlock_irqrestore(q->queue_lock, flags);
+ blk_run_queue(cfqd->queue);
}

/*
diff --git a/block/elevator.c b/block/elevator.c
index 98259ed..fca4436 100644
--- a/block/elevator.c
+++ b/block/elevator.c
@@ -618,8 +618,7 @@ void elv_insert(struct request_queue *q, struct request *rq, int where)
* with anything. There's no point in delaying queue
* processing.
*/
- blk_remove_plug(q);
- blk_start_queueing(q);
+ __blk_run_queue(q);
break;

case ELEVATOR_INSERT_SORT:
@@ -946,7 +945,7 @@ void elv_completed_request(struct request_queue *q, struct request *rq)
blk_ordered_cur_seq(q) == QUEUE_ORDSEQ_DRAIN &&
(!next || blk_ordered_req_seq(next) > QUEUE_ORDSEQ_DRAIN)) {
blk_ordered_complete_seq(q, QUEUE_ORDSEQ_DRAIN, 0);
- blk_start_queueing(q);
+ __blk_run_queue(q);
}
}
}
@@ -1107,7 +1106,7 @@ static int elevator_switch(struct request_queue *q, struct elevator_type *new_e)
elv_drain_elevator(q);

while (q->rq.elvpriv) {
- blk_start_queueing(q);
+ __blk_run_queue(q);
spin_unlock_irq(q->queue_lock);
msleep(10);
spin_lock_irq(q->queue_lock);
diff --git a/drivers/ide/ide-park.c b/drivers/ide/ide-park.c
index c875a95..5f121f7 100644
--- a/drivers/ide/ide-park.c
+++ b/drivers/ide/ide-park.c
@@ -24,11 +24,8 @@ static void issue_park_cmd(ide_drive_t *drive, unsigned long timeout)
start_queue = 1;
spin_unlock_irq(&hwif->lock);

- if (start_queue) {
- spin_lock_irq(q->queue_lock);
- blk_start_queueing(q);
- spin_unlock_irq(q->queue_lock);
- }
+ if (start_queue)
+ blk_run_queue(q);
return;
}
spin_unlock_irq(&hwif->lock);
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 465d6ba..4c05bb9 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -775,7 +775,6 @@ extern void blk_sync_queue(struct request_queue *q);
extern void __blk_stop_queue(struct request_queue *q);
extern void __blk_run_queue(struct request_queue *);
extern void blk_run_queue(struct request_queue *);
-extern void blk_start_queueing(struct request_queue *);
extern int blk_rq_map_user(struct request_queue *, struct request *,
struct rq_map_data *, void __user *, unsigned long,
gfp_t);
--
1.6.0.2

2009-03-13 05:04:29

by Tejun Heo

Subject: [PATCH 03/14] block: don't set REQ_NOMERGE unnecessarily

Impact: cleanup

RQ_NOMERGE_FLAGS already defines which REQ flags aren't mergeable.
There is no reason to set REQ_NOMERGE superfluously; it only adds to
confusion. Don't set REQ_NOMERGE for barriers and requests with a
specific queueing directive. REQ_NOMERGE is now exclusively used by
the merging code.

Signed-off-by: Tejun Heo <[email protected]>
---
block/blk-core.c | 5 +----
block/blk-exec.c | 1 -
2 files changed, 1 insertions(+), 5 deletions(-)

diff --git a/block/blk-core.c b/block/blk-core.c
index 7c2d836..d7b2cc9 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -1077,16 +1077,13 @@ void init_request_from_bio(struct request *req, struct bio *bio)
if (bio_failfast_driver(bio))
req->cmd_flags |= REQ_FAILFAST_DRIVER;

- /*
- * REQ_BARRIER implies no merging, but lets make it explicit
- */
if (unlikely(bio_discard(bio))) {
req->cmd_flags |= REQ_DISCARD;
if (bio_barrier(bio))
req->cmd_flags |= REQ_SOFTBARRIER;
req->q->prepare_discard_fn(req->q, req);
} else if (unlikely(bio_barrier(bio)))
- req->cmd_flags |= (REQ_HARDBARRIER | REQ_NOMERGE);
+ req->cmd_flags |= REQ_HARDBARRIER;

if (bio_sync(bio))
req->cmd_flags |= REQ_RW_SYNC;
diff --git a/block/blk-exec.c b/block/blk-exec.c
index 6af716d..49557e9 100644
--- a/block/blk-exec.c
+++ b/block/blk-exec.c
@@ -51,7 +51,6 @@ void blk_execute_rq_nowait(struct request_queue *q, struct gendisk *bd_disk,
int where = at_head ? ELEVATOR_INSERT_FRONT : ELEVATOR_INSERT_BACK;

rq->rq_disk = bd_disk;
- rq->cmd_flags |= REQ_NOMERGE;
rq->end_io = done;
WARN_ON(irqs_disabled());
spin_lock_irq(q->queue_lock);
--
1.6.0.2

2009-03-13 05:05:10

by Tejun Heo

Subject: [PATCH 05/14] block: clean up misc stuff after block layer timeout conversion

Impact: cleanup

* In blk_rq_timed_out_timer(), convert else { if } to else if

* In blk_add_timer(), simplify the if/else block

Signed-off-by: Tejun Heo <[email protected]>
---
block/blk-timeout.c | 22 +++++++++-------------
1 files changed, 9 insertions(+), 13 deletions(-)

diff --git a/block/blk-timeout.c b/block/blk-timeout.c
index bbbdc4b..0bc3961 100644
--- a/block/blk-timeout.c
+++ b/block/blk-timeout.c
@@ -122,10 +122,8 @@ void blk_rq_timed_out_timer(unsigned long data)
if (blk_mark_rq_complete(rq))
continue;
blk_rq_timed_out(rq);
- } else {
- if (!next || time_after(next, rq->deadline))
- next = rq->deadline;
- }
+ } else if (!next || time_after(next, rq->deadline))
+ next = rq->deadline;
}

/*
@@ -176,16 +174,14 @@ void blk_add_timer(struct request *req)
BUG_ON(!list_empty(&req->timeout_list));
BUG_ON(test_bit(REQ_ATOM_COMPLETE, &req->atomic_flags));

- if (req->timeout)
- req->deadline = jiffies + req->timeout;
- else {
- req->deadline = jiffies + q->rq_timeout;
- /*
- * Some LLDs, like scsi, peek at the timeout to prevent
- * a command from being retried forever.
- */
+ /*
+ * Some LLDs, like scsi, peek at the timeout to prevent a
+ * command from being retried forever.
+ */
+ if (!req->timeout)
req->timeout = q->rq_timeout;
- }
+
+ req->deadline = jiffies + req->timeout;
list_add_tail(&req->timeout_list, &q->timeout_list);

/*
--
1.6.0.2

2009-03-13 05:04:48

by Tejun Heo

Subject: [PATCH 04/14] block: cleanup REQ_SOFTBARRIER usages

Impact: cleanup

Low level drivers don't need to worry about REQ_SOFTBARRIER, and
neither does blk_insert_request(). Don't set it there. It doesn't
cause a malfunction but is confusing. After this, REQ_SOFTBARRIER is
used only in the elevator proper and for discard requests.

Signed-off-by: Tejun Heo <[email protected]>
---
block/blk-core.c | 1 -
drivers/ide/ide-disk.c | 1 -
drivers/ide/ide-ioctls.c | 1 -
3 files changed, 0 insertions(+), 3 deletions(-)

diff --git a/block/blk-core.c b/block/blk-core.c
index d7b2cc9..9e5f154 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -944,7 +944,6 @@ void blk_insert_request(struct request_queue *q, struct request *rq,
* barrier
*/
rq->cmd_type = REQ_TYPE_SPECIAL;
- rq->cmd_flags |= REQ_SOFTBARRIER;

rq->special = data;

diff --git a/drivers/ide/ide-disk.c b/drivers/ide/ide-disk.c
index 806760d..0cd287d 100644
--- a/drivers/ide/ide-disk.c
+++ b/drivers/ide/ide-disk.c
@@ -405,7 +405,6 @@ static void idedisk_prepare_flush(struct request_queue *q, struct request *rq)
task->data_phase = TASKFILE_NO_DATA;

rq->cmd_type = REQ_TYPE_ATA_TASKFILE;
- rq->cmd_flags |= REQ_SOFTBARRIER;
rq->special = task;
}

diff --git a/drivers/ide/ide-ioctls.c b/drivers/ide/ide-ioctls.c
index 1be263e..d440fbb 100644
--- a/drivers/ide/ide-ioctls.c
+++ b/drivers/ide/ide-ioctls.c
@@ -229,7 +229,6 @@ static int generic_drive_reset(ide_drive_t *drive)
rq->cmd_type = REQ_TYPE_SPECIAL;
rq->cmd_len = 1;
rq->cmd[0] = REQ_DRIVE_RESET;
- rq->cmd_flags |= REQ_SOFTBARRIER;
if (blk_execute_rq(drive->queue, NULL, rq, 1))
ret = rq->errors;
blk_put_request(rq);
--
1.6.0.2

2009-03-13 05:05:38

by Tejun Heo

Subject: [PATCH 06/14] block: reorder request completion functions

Impact: cleanup, code reorganization

Reorder request completion functions such that

* All request completion functions are located together.

* Functions which are used by only one caller are put right above the
caller.

* end_request() is put after other completion functions but before
blk_update_request().

This change prepares for the completion function cleanup which will follow.

Signed-off-by: Tejun Heo <[email protected]>
---
block/blk-core.c | 144 ++++++++++++++++++++++++------------------------
include/linux/blkdev.h | 16 +++---
2 files changed, 80 insertions(+), 80 deletions(-)

diff --git a/block/blk-core.c b/block/blk-core.c
index 9e5f154..fd9dec3 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -1674,6 +1674,35 @@ static void blk_account_io_done(struct request *req)
}

/**
+ * blk_rq_bytes - Returns bytes left to complete in the entire request
+ * @rq: the request being processed
+ **/
+unsigned int blk_rq_bytes(struct request *rq)
+{
+ if (blk_fs_request(rq))
+ return rq->hard_nr_sectors << 9;
+
+ return rq->data_len;
+}
+EXPORT_SYMBOL_GPL(blk_rq_bytes);
+
+/**
+ * blk_rq_cur_bytes - Returns bytes left to complete in the current segment
+ * @rq: the request being processed
+ **/
+unsigned int blk_rq_cur_bytes(struct request *rq)
+{
+ if (blk_fs_request(rq))
+ return rq->current_nr_sectors << 9;
+
+ if (rq->bio)
+ return rq->bio->bi_size;
+
+ return rq->data_len;
+}
+EXPORT_SYMBOL_GPL(blk_rq_cur_bytes);
+
+/**
* __end_that_request_first - end I/O on a request
* @req: the request being processed
* @error: %0 for success, < %0 for error
@@ -1783,6 +1812,22 @@ static int __end_that_request_first(struct request *req, int error,
return 1;
}

+static int end_that_request_data(struct request *rq, int error,
+ unsigned int nr_bytes, unsigned int bidi_bytes)
+{
+ if (rq->bio) {
+ if (__end_that_request_first(rq, error, nr_bytes))
+ return 1;
+
+ /* Bidi request must be completed as a whole */
+ if (blk_bidi_rq(rq) &&
+ __end_that_request_first(rq->next_rq, error, bidi_bytes))
+ return 1;
+ }
+
+ return 0;
+}
+
/*
* queue lock must be held
*/
@@ -1812,78 +1857,6 @@ static void end_that_request_last(struct request *req, int error)
}

/**
- * blk_rq_bytes - Returns bytes left to complete in the entire request
- * @rq: the request being processed
- **/
-unsigned int blk_rq_bytes(struct request *rq)
-{
- if (blk_fs_request(rq))
- return rq->hard_nr_sectors << 9;
-
- return rq->data_len;
-}
-EXPORT_SYMBOL_GPL(blk_rq_bytes);
-
-/**
- * blk_rq_cur_bytes - Returns bytes left to complete in the current segment
- * @rq: the request being processed
- **/
-unsigned int blk_rq_cur_bytes(struct request *rq)
-{
- if (blk_fs_request(rq))
- return rq->current_nr_sectors << 9;
-
- if (rq->bio)
- return rq->bio->bi_size;
-
- return rq->data_len;
-}
-EXPORT_SYMBOL_GPL(blk_rq_cur_bytes);
-
-/**
- * end_request - end I/O on the current segment of the request
- * @req: the request being processed
- * @uptodate: error value or %0/%1 uptodate flag
- *
- * Description:
- * Ends I/O on the current segment of a request. If that is the only
- * remaining segment, the request is also completed and freed.
- *
- * This is a remnant of how older block drivers handled I/O completions.
- * Modern drivers typically end I/O on the full request in one go, unless
- * they have a residual value to account for. For that case this function
- * isn't really useful, unless the residual just happens to be the
- * full current segment. In other words, don't use this function in new
- * code. Use blk_end_request() or __blk_end_request() to end a request.
- **/
-void end_request(struct request *req, int uptodate)
-{
- int error = 0;
-
- if (uptodate <= 0)
- error = uptodate ? uptodate : -EIO;
-
- __blk_end_request(req, error, req->hard_cur_sectors << 9);
-}
-EXPORT_SYMBOL(end_request);
-
-static int end_that_request_data(struct request *rq, int error,
- unsigned int nr_bytes, unsigned int bidi_bytes)
-{
- if (rq->bio) {
- if (__end_that_request_first(rq, error, nr_bytes))
- return 1;
-
- /* Bidi request must be completed as a whole */
- if (blk_bidi_rq(rq) &&
- __end_that_request_first(rq->next_rq, error, bidi_bytes))
- return 1;
- }
-
- return 0;
-}
-
-/**
* blk_end_io - Generic end_io function to complete a request.
* @rq: the request being processed
* @error: %0 for success, < %0 for error
@@ -1993,6 +1966,33 @@ int blk_end_bidi_request(struct request *rq, int error, unsigned int nr_bytes,
EXPORT_SYMBOL_GPL(blk_end_bidi_request);

/**
+ * end_request - end I/O on the current segment of the request
+ * @req: the request being processed
+ * @uptodate: error value or %0/%1 uptodate flag
+ *
+ * Description:
+ * Ends I/O on the current segment of a request. If that is the only
+ * remaining segment, the request is also completed and freed.
+ *
+ * This is a remnant of how older block drivers handled I/O completions.
+ * Modern drivers typically end I/O on the full request in one go, unless
+ * they have a residual value to account for. For that case this function
+ * isn't really useful, unless the residual just happens to be the
+ * full current segment. In other words, don't use this function in new
+ * code. Use blk_end_request() or __blk_end_request() to end a request.
+ **/
+void end_request(struct request *req, int uptodate)
+{
+ int error = 0;
+
+ if (uptodate <= 0)
+ error = uptodate ? uptodate : -EIO;
+
+ __blk_end_request(req, error, req->hard_cur_sectors << 9);
+}
+EXPORT_SYMBOL(end_request);
+
+/**
* blk_update_request - Special helper function for request stacking drivers
* @rq: the request being processed
* @error: %0 for success, < %0 for error
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 4c05bb9..cdfac4f 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -810,6 +810,14 @@ static inline void blk_run_address_space(struct address_space *mapping)
extern void blkdev_dequeue_request(struct request *req);

/*
+ * blk_end_request() takes bytes instead of sectors as a complete size.
+ * blk_rq_bytes() returns bytes left to complete in the entire request.
+ * blk_rq_cur_bytes() returns bytes left to complete in the current segment.
+ */
+extern unsigned int blk_rq_bytes(struct request *rq);
+extern unsigned int blk_rq_cur_bytes(struct request *rq);
+
+/*
* blk_end_request() and friends.
* __blk_end_request() and end_request() must be called with
* the request queue spinlock acquired.
@@ -836,14 +844,6 @@ extern void blk_update_request(struct request *rq, int error,
unsigned int nr_bytes);

/*
- * blk_end_request() takes bytes instead of sectors as a complete size.
- * blk_rq_bytes() returns bytes left to complete in the entire request.
- * blk_rq_cur_bytes() returns bytes left to complete in the current segment.
- */
-extern unsigned int blk_rq_bytes(struct request *rq);
-extern unsigned int blk_rq_cur_bytes(struct request *rq);
-
-/*
* Access functions for manipulating queue properties
*/
extern struct request_queue *blk_init_queue_node(request_fn_proc *rfn,
--
1.6.0.2

2009-03-13 05:06:24

by Tejun Heo

Subject: [PATCH 07/14] block: reorganize request fetching functions

Impact: code reorganization

elv_next_request() and elv_dequeue_request() are public block layer
interface rather than actual elevator implementation. They mostly
deal with how requests interact with the block layer and low level
drivers at the beginning of request processing, whereas
__elv_next_request() is the actual elevator request fetching
interface.

Move the two functions to blk-core.c. This prepares for further
interface cleanup.

Signed-off-by: Tejun Heo <[email protected]>
---
block/blk-core.c | 95 ++++++++++++++++++++++++++++++++++++++++
block/blk.h | 37 ++++++++++++++++
block/elevator.c | 128 ------------------------------------------------------
3 files changed, 132 insertions(+), 128 deletions(-)

diff --git a/block/blk-core.c b/block/blk-core.c
index fd9dec3..0d97fbe 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -1702,6 +1702,101 @@ unsigned int blk_rq_cur_bytes(struct request *rq)
}
EXPORT_SYMBOL_GPL(blk_rq_cur_bytes);

+struct request *elv_next_request(struct request_queue *q)
+{
+ struct request *rq;
+ int ret;
+
+ while ((rq = __elv_next_request(q)) != NULL) {
+ if (!(rq->cmd_flags & REQ_STARTED)) {
+ /*
+ * This is the first time the device driver
+ * sees this request (possibly after
+ * requeueing). Notify IO scheduler.
+ */
+ if (blk_sorted_rq(rq))
+ elv_activate_rq(q, rq);
+
+ /*
+ * just mark as started even if we don't start
+ * it, a request that has been delayed should
+ * not be passed by new incoming requests
+ */
+ rq->cmd_flags |= REQ_STARTED;
+ trace_block_rq_issue(q, rq);
+ }
+
+ if (!q->boundary_rq || q->boundary_rq == rq) {
+ q->end_sector = rq_end_sector(rq);
+ q->boundary_rq = NULL;
+ }
+
+ if (rq->cmd_flags & REQ_DONTPREP)
+ break;
+
+ if (q->dma_drain_size && rq->data_len) {
+ /*
+ * make sure space for the drain appears we
+ * know we can do this because max_hw_segments
+ * has been adjusted to be one fewer than the
+ * device can handle
+ */
+ rq->nr_phys_segments++;
+ }
+
+ if (!q->prep_rq_fn)
+ break;
+
+ ret = q->prep_rq_fn(q, rq);
+ if (ret == BLKPREP_OK) {
+ break;
+ } else if (ret == BLKPREP_DEFER) {
+ /*
+ * the request may have been (partially) prepped.
+ * we need to keep this request in the front to
+ * avoid resource deadlock. REQ_STARTED will
+ * prevent other fs requests from passing this one.
+ */
+ if (q->dma_drain_size && rq->data_len &&
+ !(rq->cmd_flags & REQ_DONTPREP)) {
+ /*
+ * remove the space for the drain we added
+ * so that we don't add it again
+ */
+ --rq->nr_phys_segments;
+ }
+
+ rq = NULL;
+ break;
+ } else if (ret == BLKPREP_KILL) {
+ rq->cmd_flags |= REQ_QUIET;
+ __blk_end_request(rq, -EIO, blk_rq_bytes(rq));
+ } else {
+ printk(KERN_ERR "%s: bad return=%d\n", __func__, ret);
+ break;
+ }
+ }
+
+ return rq;
+}
+EXPORT_SYMBOL(elv_next_request);
+
+void elv_dequeue_request(struct request_queue *q, struct request *rq)
+{
+ BUG_ON(list_empty(&rq->queuelist));
+ BUG_ON(ELV_ON_HASH(rq));
+
+ list_del_init(&rq->queuelist);
+
+ /*
+ * the time frame between a request being removed from the lists
+ * and to it is freed is accounted as io that is in progress at
+ * the driver side.
+ */
+ if (blk_account_rq(rq))
+ q->in_flight++;
+}
+
/**
* __end_that_request_first - end I/O on a request
* @req: the request being processed
diff --git a/block/blk.h b/block/blk.h
index 0dce92c..3979fd1 100644
--- a/block/blk.h
+++ b/block/blk.h
@@ -43,6 +43,43 @@ static inline void blk_clear_rq_complete(struct request *rq)
clear_bit(REQ_ATOM_COMPLETE, &rq->atomic_flags);
}

+/*
+ * Internal elevator interface
+ */
+#define ELV_ON_HASH(rq) (!hlist_unhashed(&(rq)->hash))
+
+static inline struct request *__elv_next_request(struct request_queue *q)
+{
+ struct request *rq;
+
+ while (1) {
+ while (!list_empty(&q->queue_head)) {
+ rq = list_entry_rq(q->queue_head.next);
+ if (blk_do_ordered(q, &rq))
+ return rq;
+ }
+
+ if (!q->elevator->ops->elevator_dispatch_fn(q, 0))
+ return NULL;
+ }
+}
+
+static inline void elv_activate_rq(struct request_queue *q, struct request *rq)
+{
+ struct elevator_queue *e = q->elevator;
+
+ if (e->ops->elevator_activate_req_fn)
+ e->ops->elevator_activate_req_fn(q, rq);
+}
+
+static inline void elv_deactivate_rq(struct request_queue *q, struct request *rq)
+{
+ struct elevator_queue *e = q->elevator;
+
+ if (e->ops->elevator_deactivate_req_fn)
+ e->ops->elevator_deactivate_req_fn(q, rq);
+}
+
#ifdef CONFIG_FAIL_IO_TIMEOUT
int blk_should_fake_timeout(struct request_queue *);
ssize_t part_timeout_show(struct device *, struct device_attribute *, char *);
diff --git a/block/elevator.c b/block/elevator.c
index fca4436..fd17605 100644
--- a/block/elevator.c
+++ b/block/elevator.c
@@ -53,7 +53,6 @@ static const int elv_hash_shift = 6;
(hash_long(ELV_HASH_BLOCK((sec)), elv_hash_shift))
#define ELV_HASH_ENTRIES (1 << elv_hash_shift)
#define rq_hash_key(rq) ((rq)->sector + (rq)->nr_sectors)
-#define ELV_ON_HASH(rq) (!hlist_unhashed(&(rq)->hash))

DEFINE_TRACE(block_rq_insert);
DEFINE_TRACE(block_rq_issue);
@@ -310,22 +309,6 @@ void elevator_exit(struct elevator_queue *e)
}
EXPORT_SYMBOL(elevator_exit);

-static void elv_activate_rq(struct request_queue *q, struct request *rq)
-{
- struct elevator_queue *e = q->elevator;
-
- if (e->ops->elevator_activate_req_fn)
- e->ops->elevator_activate_req_fn(q, rq);
-}
-
-static void elv_deactivate_rq(struct request_queue *q, struct request *rq)
-{
- struct elevator_queue *e = q->elevator;
-
- if (e->ops->elevator_deactivate_req_fn)
- e->ops->elevator_deactivate_req_fn(q, rq);
-}
-
static inline void __elv_rqhash_del(struct request *rq)
{
hlist_del_init(&rq->hash);
@@ -733,117 +716,6 @@ void elv_add_request(struct request_queue *q, struct request *rq, int where,
}
EXPORT_SYMBOL(elv_add_request);

-static inline struct request *__elv_next_request(struct request_queue *q)
-{
- struct request *rq;
-
- while (1) {
- while (!list_empty(&q->queue_head)) {
- rq = list_entry_rq(q->queue_head.next);
- if (blk_do_ordered(q, &rq))
- return rq;
- }
-
- if (!q->elevator->ops->elevator_dispatch_fn(q, 0))
- return NULL;
- }
-}
-
-struct request *elv_next_request(struct request_queue *q)
-{
- struct request *rq;
- int ret;
-
- while ((rq = __elv_next_request(q)) != NULL) {
- if (!(rq->cmd_flags & REQ_STARTED)) {
- /*
- * This is the first time the device driver
- * sees this request (possibly after
- * requeueing). Notify IO scheduler.
- */
- if (blk_sorted_rq(rq))
- elv_activate_rq(q, rq);
-
- /*
- * just mark as started even if we don't start
- * it, a request that has been delayed should
- * not be passed by new incoming requests
- */
- rq->cmd_flags |= REQ_STARTED;
- trace_block_rq_issue(q, rq);
- }
-
- if (!q->boundary_rq || q->boundary_rq == rq) {
- q->end_sector = rq_end_sector(rq);
- q->boundary_rq = NULL;
- }
-
- if (rq->cmd_flags & REQ_DONTPREP)
- break;
-
- if (q->dma_drain_size && rq->data_len) {
- /*
- * make sure space for the drain appears we
- * know we can do this because max_hw_segments
- * has been adjusted to be one fewer than the
- * device can handle
- */
- rq->nr_phys_segments++;
- }
-
- if (!q->prep_rq_fn)
- break;
-
- ret = q->prep_rq_fn(q, rq);
- if (ret == BLKPREP_OK) {
- break;
- } else if (ret == BLKPREP_DEFER) {
- /*
- * the request may have been (partially) prepped.
- * we need to keep this request in the front to
- * avoid resource deadlock. REQ_STARTED will
- * prevent other fs requests from passing this one.
- */
- if (q->dma_drain_size && rq->data_len &&
- !(rq->cmd_flags & REQ_DONTPREP)) {
- /*
- * remove the space for the drain we added
- * so that we don't add it again
- */
- --rq->nr_phys_segments;
- }
-
- rq = NULL;
- break;
- } else if (ret == BLKPREP_KILL) {
- rq->cmd_flags |= REQ_QUIET;
- __blk_end_request(rq, -EIO, blk_rq_bytes(rq));
- } else {
- printk(KERN_ERR "%s: bad return=%d\n", __func__, ret);
- break;
- }
- }
-
- return rq;
-}
-EXPORT_SYMBOL(elv_next_request);
-
-void elv_dequeue_request(struct request_queue *q, struct request *rq)
-{
- BUG_ON(list_empty(&rq->queuelist));
- BUG_ON(ELV_ON_HASH(rq));
-
- list_del_init(&rq->queuelist);
-
- /*
- * the time frame between a request being removed from the lists
- * and to it is freed is accounted as io that is in progress at
- * the driver side.
- */
- if (blk_account_rq(rq))
- q->in_flight++;
-}
-
int elv_queue_empty(struct request_queue *q)
{
struct elevator_queue *e = q->elevator;
--
1.6.0.2

2009-03-13 05:05:56

by Tejun Heo

Subject: [PATCH 08/14] block: kill blk_end_request_callback()

Impact: removal of unnecessary convoluted interface

The only user of blk_end_request_callback() is cdrom_newpc_intr(),
which calls it with a dummy callback to update data completion
without actually completing the request - a job that
blk_update_request() can do directly.

Use blk_update_request() in cdrom_newpc_intr() and kill
blk_end_request_callback().

Signed-off-by: Tejun Heo <[email protected]>
---
block/blk-core.c | 48 +++---------------------------------------------
drivers/ide/ide-cd.c | 16 ++--------------
include/linux/blkdev.h | 3 ---
3 files changed, 5 insertions(+), 62 deletions(-)

diff --git a/block/blk-core.c b/block/blk-core.c
index 0d97fbe..9595c4f 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -1957,10 +1957,6 @@ static void end_that_request_last(struct request *req, int error)
* @error: %0 for success, < %0 for error
* @nr_bytes: number of bytes to complete @rq
* @bidi_bytes: number of bytes to complete @rq->next_rq
- * @drv_callback: function called between completion of bios in the request
- * and completion of the request.
- * If the callback returns non %0, this helper returns without
- * completion of the request.
*
* Description:
* Ends I/O on a number of bytes attached to @rq and @rq->next_rq.
@@ -1971,8 +1967,7 @@ static void end_that_request_last(struct request *req, int error)
* %1 - this request is not freed yet, it still has pending buffers.
**/
static int blk_end_io(struct request *rq, int error, unsigned int nr_bytes,
- unsigned int bidi_bytes,
- int (drv_callback)(struct request *))
+ unsigned int bidi_bytes)
{
struct request_queue *q = rq->q;
unsigned long flags = 0UL;
@@ -1980,10 +1975,6 @@ static int blk_end_io(struct request *rq, int error, unsigned int nr_bytes,
if (end_that_request_data(rq, error, nr_bytes, bidi_bytes))
return 1;

- /* Special feature for tricky drivers */
- if (drv_callback && drv_callback(rq))
- return 1;
-
add_disk_randomness(rq->rq_disk);

spin_lock_irqsave(q->queue_lock, flags);
@@ -2009,7 +2000,7 @@ static int blk_end_io(struct request *rq, int error, unsigned int nr_bytes,
**/
int blk_end_request(struct request *rq, int error, unsigned int nr_bytes)
{
- return blk_end_io(rq, error, nr_bytes, 0, NULL);
+ return blk_end_io(rq, error, nr_bytes, 0);
}
EXPORT_SYMBOL_GPL(blk_end_request);

@@ -2056,7 +2047,7 @@ EXPORT_SYMBOL_GPL(__blk_end_request);
int blk_end_bidi_request(struct request *rq, int error, unsigned int nr_bytes,
unsigned int bidi_bytes)
{
- return blk_end_io(rq, error, nr_bytes, bidi_bytes, NULL);
+ return blk_end_io(rq, error, nr_bytes, bidi_bytes);
}
EXPORT_SYMBOL_GPL(blk_end_bidi_request);

@@ -2117,39 +2108,6 @@ void blk_update_request(struct request *rq, int error, unsigned int nr_bytes)
}
EXPORT_SYMBOL_GPL(blk_update_request);

-/**
- * blk_end_request_callback - Special helper function for tricky drivers
- * @rq: the request being processed
- * @error: %0 for success, < %0 for error
- * @nr_bytes: number of bytes to complete
- * @drv_callback: function called between completion of bios in the request
- * and completion of the request.
- * If the callback returns non %0, this helper returns without
- * completion of the request.
- *
- * Description:
- * Ends I/O on a number of bytes attached to @rq.
- * If @rq has leftover, sets it up for the next range of segments.
- *
- * This special helper function is used only for existing tricky drivers.
- * (e.g. cdrom_newpc_intr() of ide-cd)
- * This interface will be removed when such drivers are rewritten.
- * Don't use this interface in other places anymore.
- *
- * Return:
- * %0 - we are done with this request
- * %1 - this request is not freed yet.
- * this request still has pending buffers or
- * the driver doesn't want to finish this request yet.
- **/
-int blk_end_request_callback(struct request *rq, int error,
- unsigned int nr_bytes,
- int (drv_callback)(struct request *))
-{
- return blk_end_io(rq, error, nr_bytes, 0, drv_callback);
-}
-EXPORT_SYMBOL_GPL(blk_end_request_callback);
-
void blk_rq_bio_prep(struct request_queue *q, struct request *rq,
struct bio *bio)
{
diff --git a/drivers/ide/ide-cd.c b/drivers/ide/ide-cd.c
index ddfbea4..f825d50 100644
--- a/drivers/ide/ide-cd.c
+++ b/drivers/ide/ide-cd.c
@@ -748,16 +748,6 @@ int ide_cd_queue_pc(ide_drive_t *drive, const unsigned char *cmd,
return (flags & REQ_FAILED) ? -EIO : 0;
}

-/*
- * Called from blk_end_request_callback() after the data of the request is
- * completed and before the request itself is completed. By returning value '1',
- * blk_end_request_callback() returns immediately without completing it.
- */
-static int cdrom_newpc_intr_dummy_cb(struct request *rq)
-{
- return 1;
-}
-
static ide_startstop_t cdrom_newpc_intr(ide_drive_t *drive)
{
ide_hwif_t *hwif = drive->hwif;
@@ -932,12 +922,10 @@ static ide_startstop_t cdrom_newpc_intr(ide_drive_t *drive)
/*
* The request can't be completed until DRQ is cleared.
* So complete the data, but don't complete the request
- * using the dummy function for the callback feature
- * of blk_end_request_callback().
+ * using blk_update_request().
*/
if (rq->bio)
- blk_end_request_callback(rq, 0, blen,
- cdrom_newpc_intr_dummy_cb);
+ blk_update_request(rq, 0, blen);
else
rq->data += blen;
}
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index cdfac4f..e8175c8 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -833,9 +833,6 @@ extern int __blk_end_request(struct request *rq, int error,
extern int blk_end_bidi_request(struct request *rq, int error,
unsigned int nr_bytes, unsigned int bidi_bytes);
extern void end_request(struct request *, int);
-extern int blk_end_request_callback(struct request *rq, int error,
- unsigned int nr_bytes,
- int (drv_callback)(struct request *));
extern void blk_complete_request(struct request *);
extern void __blk_complete_request(struct request *);
extern void blk_abort_request(struct request *);
--
1.6.0.2

2009-03-13 05:06:40

by Tejun Heo

Subject: [PATCH 10/14] block: move rq->start_time initialization to blk_rq_init()

Impact: rq->start_time is valid for all requests

rq->start_time was initialized in init_request_from_bio() so special
requests didn't have start_time set. This has been okay as
start_time has been used only for fs requests; however, there is no
clear indication of whether this actually is the case or not. Set
rq->start_time in blk_rq_init() and guarantee that all initialized
rq's have their start_time set. This improves consistency at
virtually no cost and future changes will make use of the timestamp
for !bio requests.

Signed-off-by: Tejun Heo <[email protected]>
---
block/blk-core.c | 2 +-
1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/block/blk-core.c b/block/blk-core.c
index b1781dd..7d0ab48 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -134,6 +134,7 @@ void blk_rq_init(struct request_queue *q, struct request *rq)
rq->cmd = rq->__cmd;
rq->tag = -1;
rq->ref_count = 1;
+ rq->start_time = jiffies;
}
EXPORT_SYMBOL(blk_rq_init);

@@ -1094,7 +1095,6 @@ void init_request_from_bio(struct request *req, struct bio *bio)
req->errors = 0;
req->hard_sector = req->sector = bio->bi_sector;
req->ioprio = bio_prio(bio);
- req->start_time = jiffies;
blk_rq_bio_prep(req->q, req, bio);
}

--
1.6.0.2

2009-03-13 05:07:26

by Tejun Heo

Subject: [PATCH 13/14] ubd: simplify block request completion

Impact: cleanup

ubd had its own block request partial completion mechanism, which is
unnecessary as the block layer already does it. Kill
ubd_end_request() and ubd_finish() and replace them with direct calls
to blk_end_request().

Signed-off-by: Tejun Heo <[email protected]>
Cc: Jeff Dike <[email protected]>
---
arch/um/drivers/ubd_kern.c | 23 +----------------------
1 files changed, 1 insertions(+), 22 deletions(-)

diff --git a/arch/um/drivers/ubd_kern.c b/arch/um/drivers/ubd_kern.c
index 0a86811..906ecdf 100644
--- a/arch/um/drivers/ubd_kern.c
+++ b/arch/um/drivers/ubd_kern.c
@@ -451,23 +451,6 @@ static void do_ubd_request(struct request_queue * q);

/* Only changed by ubd_init, which is an initcall. */
static int thread_fd = -1;
-
-static void ubd_end_request(struct request *req, int bytes, int error)
-{
- blk_end_request(req, error, bytes);
-}
-
-/* Callable only from interrupt context - otherwise you need to do
- * spin_lock_irq()/spin_lock_irqsave() */
-static inline void ubd_finish(struct request *req, int bytes)
-{
- if(bytes < 0){
- ubd_end_request(req, 0, -EIO);
- return;
- }
- ubd_end_request(req, bytes, 0);
-}
-
static LIST_HEAD(restart);

/* XXX - move this inside ubd_intr. */
@@ -475,7 +458,6 @@ static LIST_HEAD(restart);
static void ubd_handler(void)
{
struct io_thread_req *req;
- struct request *rq;
struct ubd *ubd;
struct list_head *list, *next_ele;
unsigned long flags;
@@ -492,10 +474,7 @@ static void ubd_handler(void)
return;
}

- rq = req->req;
- rq->nr_sectors -= req->length >> 9;
- if(rq->nr_sectors == 0)
- ubd_finish(rq, rq->hard_nr_sectors << 9);
+ blk_end_request(req->req, 0, req->length);
kfree(req);
}
reactivate_fd(thread_fd, UBD_IRQ);
--
1.6.0.2

2009-03-13 05:06:57

by Tejun Heo

Subject: [PATCH 09/14] block: clean up request completion API

Impact: cleanup, rq->*nr_sectors always updated after req completion

Request completion has gone through several changes and became a bit
messy over time. Clean it up.

1. end_that_request_data() is a thin wrapper around
__end_that_request_first() which checks whether bio is NULL
before doing anything and handles bidi completion.
blk_update_request() is a thin wrapper around
end_that_request_data() which clears nr_sectors on the last
iteration but doesn't use the bidi completion.

Clean it up by moving the initial bio NULL check and nr_sectors
clearing on the last iteration into __end_that_request_first() and
renaming it to blk_update_request(), which makes blk_end_io() the
only user of end_that_request_data(). Collapse
end_that_request_data() into blk_end_io().

2. There are four visible completion variants - blk_end_request(),
__blk_end_request(), blk_end_bidi_request() and end_request().
blk_end_request() and blk_end_bidi_request() use blk_end_io()
as the backend but __blk_end_request() and end_request() use a
separate implementation in __blk_end_request() due to different
locking rules.

Make blk_end_io() handle both cases so that all four public
completion functions become thin wrappers around it. Rename
blk_end_io() to __blk_end_io(), export it and inline all the
public completion functions.

3. As the whole request issue/completion path is about to be
modified and audited, it's a good chance to convert the completion
functions to return bool, which better indicates the intended
meaning of the return values.

4. The function name end_that_request_last() is from the days when it
was a public interface and is slightly confusing. Give it a proper
internal name - finish_request().

The only visible behavior change is from #1: nr_sectors counts are
cleared after the final iteration no matter which function is used to
complete the request. I couldn't find any place where the code
assumes those nr_sectors counters contain the values for the last
segment, and this change is good as it makes the API much more
consistent: the end result is now the same whether a request is
completed using [__]blk_end_request() alone or in combination with
blk_update_request().
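
For illustration, a hypothetical driver fragment using the names
introduced in this patch; either path below leaves the nr_sectors
counters cleared once all bios are done:

    /* complete the whole request in one call */
    blk_end_request(rq, 0, blk_rq_bytes(rq));

    /* or: update data completion first, finish the request afterwards */
    if (!blk_update_request(rq, 0, blk_rq_bytes(rq)))
            blk_end_request(rq, 0, 0);  /* all bios done, finish rq */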

Signed-off-by: Tejun Heo <[email protected]>
---
block/blk-core.c | 215 ++++++++++++------------------------------------
include/linux/blkdev.h | 114 +++++++++++++++++++++++---
2 files changed, 154 insertions(+), 175 deletions(-)

diff --git a/block/blk-core.c b/block/blk-core.c
index 9595c4f..b1781dd 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -1798,25 +1798,35 @@ void elv_dequeue_request(struct request_queue *q, struct request *rq)
}

/**
- * __end_that_request_first - end I/O on a request
- * @req: the request being processed
+ * blk_update_request - Special helper function for request stacking drivers
+ * @rq: the request being processed
* @error: %0 for success, < %0 for error
- * @nr_bytes: number of bytes to complete
+ * @nr_bytes: number of bytes to complete @rq
*
* Description:
- * Ends I/O on a number of bytes attached to @req, and sets it up
- * for the next range of segments (if any) in the cluster.
+ * Ends I/O on a number of bytes attached to @rq, but doesn't complete
+ * the request structure even if @rq doesn't have leftover.
+ * If @rq has leftover, sets it up for the next range of segments.
+ *
+ * This special helper function is only for request stacking drivers
+ * (e.g. request-based dm) so that they can handle partial completion.
+ * Actual device drivers should use blk_end_request instead.
+ *
+ * Passing the result of blk_rq_bytes() as @nr_bytes guarantees
+ * %false return from this function.
*
* Return:
- * %0 - we are done with this request, call end_that_request_last()
- * %1 - still buffers pending for this request
+ * %false - this request doesn't have any more data
+ * %true - this request has more data
**/
-static int __end_that_request_first(struct request *req, int error,
- int nr_bytes)
+bool blk_update_request(struct request *req, int error, unsigned int nr_bytes)
{
int total_bytes, bio_nbytes, next_idx = 0;
struct bio *bio;

+ if (!req->bio)
+ return false;
+
trace_block_rq_complete(req->q, req);

/*
@@ -1889,8 +1899,16 @@ static int __end_that_request_first(struct request *req, int error,
/*
* completely done
*/
- if (!req->bio)
- return 0;
+ if (!req->bio) {
+ /*
+ * Reset counters so that the request stacking driver
+ * can find how many bytes remain in the request
+ * later.
+ */
+ req->nr_sectors = req->hard_nr_sectors = 0;
+ req->current_nr_sectors = req->hard_cur_sectors = 0;
+ return false;
+ }

/*
* if the request wasn't completed, update state
@@ -1904,29 +1922,14 @@ static int __end_that_request_first(struct request *req, int error,

blk_recalc_rq_sectors(req, total_bytes >> 9);
blk_recalc_rq_segments(req);
- return 1;
-}
-
-static int end_that_request_data(struct request *rq, int error,
- unsigned int nr_bytes, unsigned int bidi_bytes)
-{
- if (rq->bio) {
- if (__end_that_request_first(rq, error, nr_bytes))
- return 1;
-
- /* Bidi request must be completed as a whole */
- if (blk_bidi_rq(rq) &&
- __end_that_request_first(rq->next_rq, error, bidi_bytes))
- return 1;
- }
-
- return 0;
+ return true;
}
+EXPORT_SYMBOL_GPL(blk_update_request);

/*
* queue lock must be held
*/
-static void end_that_request_last(struct request *req, int error)
+static void finish_request(struct request *req, int error)
{
if (blk_rq_tagged(req))
blk_queue_end_tag(req->q, req);
@@ -1952,161 +1955,47 @@ static void end_that_request_last(struct request *req, int error)
}

/**
- * blk_end_io - Generic end_io function to complete a request.
+ * __blk_end_io - Generic end_io function to complete a request.
* @rq: the request being processed
* @error: %0 for success, < %0 for error
* @nr_bytes: number of bytes to complete @rq
* @bidi_bytes: number of bytes to complete @rq->next_rq
+ * @locked: whether rq->q->queue_lock is held on entry
*
* Description:
* Ends I/O on a number of bytes attached to @rq and @rq->next_rq.
* If @rq has leftover, sets it up for the next range of segments.
*
* Return:
- * %0 - we are done with this request
- * %1 - this request is not freed yet, it still has pending buffers.
+ * %false - we are done with this request
+ * %true - this request is not freed yet, it still has pending buffers.
**/
-static int blk_end_io(struct request *rq, int error, unsigned int nr_bytes,
- unsigned int bidi_bytes)
+bool __blk_end_io(struct request *rq, int error, unsigned int nr_bytes,
+ unsigned int bidi_bytes, bool locked)
{
struct request_queue *q = rq->q;
unsigned long flags = 0UL;

- if (end_that_request_data(rq, error, nr_bytes, bidi_bytes))
- return 1;
-
- add_disk_randomness(rq->rq_disk);
-
- spin_lock_irqsave(q->queue_lock, flags);
- end_that_request_last(rq, error);
- spin_unlock_irqrestore(q->queue_lock, flags);
-
- return 0;
-}
+ if (blk_update_request(rq, error, nr_bytes))
+ return true;

-/**
- * blk_end_request - Helper function for drivers to complete the request.
- * @rq: the request being processed
- * @error: %0 for success, < %0 for error
- * @nr_bytes: number of bytes to complete
- *
- * Description:
- * Ends I/O on a number of bytes attached to @rq.
- * If @rq has leftover, sets it up for the next range of segments.
- *
- * Return:
- * %0 - we are done with this request
- * %1 - still buffers pending for this request
- **/
-int blk_end_request(struct request *rq, int error, unsigned int nr_bytes)
-{
- return blk_end_io(rq, error, nr_bytes, 0);
-}
-EXPORT_SYMBOL_GPL(blk_end_request);
-
-/**
- * __blk_end_request - Helper function for drivers to complete the request.
- * @rq: the request being processed
- * @error: %0 for success, < %0 for error
- * @nr_bytes: number of bytes to complete
- *
- * Description:
- * Must be called with queue lock held unlike blk_end_request().
- *
- * Return:
- * %0 - we are done with this request
- * %1 - still buffers pending for this request
- **/
-int __blk_end_request(struct request *rq, int error, unsigned int nr_bytes)
-{
- if (rq->bio && __end_that_request_first(rq, error, nr_bytes))
- return 1;
+ /* Bidi request must be completed as a whole */
+ if (unlikely(blk_bidi_rq(rq)) &&
+ blk_update_request(rq->next_rq, error, bidi_bytes))
+ return true;

add_disk_randomness(rq->rq_disk);

- end_that_request_last(rq, error);
-
- return 0;
-}
-EXPORT_SYMBOL_GPL(__blk_end_request);
-
-/**
- * blk_end_bidi_request - Helper function for drivers to complete bidi request.
- * @rq: the bidi request being processed
- * @error: %0 for success, < %0 for error
- * @nr_bytes: number of bytes to complete @rq
- * @bidi_bytes: number of bytes to complete @rq->next_rq
- *
- * Description:
- * Ends I/O on a number of bytes attached to @rq and @rq->next_rq.
- *
- * Return:
- * %0 - we are done with this request
- * %1 - still buffers pending for this request
- **/
-int blk_end_bidi_request(struct request *rq, int error, unsigned int nr_bytes,
- unsigned int bidi_bytes)
-{
- return blk_end_io(rq, error, nr_bytes, bidi_bytes);
-}
-EXPORT_SYMBOL_GPL(blk_end_bidi_request);
-
-/**
- * end_request - end I/O on the current segment of the request
- * @req: the request being processed
- * @uptodate: error value or %0/%1 uptodate flag
- *
- * Description:
- * Ends I/O on the current segment of a request. If that is the only
- * remaining segment, the request is also completed and freed.
- *
- * This is a remnant of how older block drivers handled I/O completions.
- * Modern drivers typically end I/O on the full request in one go, unless
- * they have a residual value to account for. For that case this function
- * isn't really useful, unless the residual just happens to be the
- * full current segment. In other words, don't use this function in new
- * code. Use blk_end_request() or __blk_end_request() to end a request.
- **/
-void end_request(struct request *req, int uptodate)
-{
- int error = 0;
-
- if (uptodate <= 0)
- error = uptodate ? uptodate : -EIO;
-
- __blk_end_request(req, error, req->hard_cur_sectors << 9);
-}
-EXPORT_SYMBOL(end_request);
+ if (!locked) {
+ spin_lock_irqsave(q->queue_lock, flags);
+ finish_request(rq, error);
+ spin_unlock_irqrestore(q->queue_lock, flags);
+ } else
+ finish_request(rq, error);

-/**
- * blk_update_request - Special helper function for request stacking drivers
- * @rq: the request being processed
- * @error: %0 for success, < %0 for error
- * @nr_bytes: number of bytes to complete @rq
- *
- * Description:
- * Ends I/O on a number of bytes attached to @rq, but doesn't complete
- * the request structure even if @rq doesn't have leftover.
- * If @rq has leftover, sets it up for the next range of segments.
- *
- * This special helper function is only for request stacking drivers
- * (e.g. request-based dm) so that they can handle partial completion.
- * Actual device drivers should use blk_end_request instead.
- */
-void blk_update_request(struct request *rq, int error, unsigned int nr_bytes)
-{
- if (!end_that_request_data(rq, error, nr_bytes, 0)) {
- /*
- * These members are not updated in end_that_request_data()
- * when all bios are completed.
- * Update them so that the request stacking driver can find
- * how many bytes remain in the request later.
- */
- rq->nr_sectors = rq->hard_nr_sectors = 0;
- rq->current_nr_sectors = rq->hard_cur_sectors = 0;
- }
+ return false;
}
-EXPORT_SYMBOL_GPL(blk_update_request);
+EXPORT_SYMBOL_GPL(__blk_end_io);

void blk_rq_bio_prep(struct request_queue *q, struct request *rq,
struct bio *bio)
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index e8175c8..cb2f9ae 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -818,27 +818,117 @@ extern unsigned int blk_rq_bytes(struct request *rq);
extern unsigned int blk_rq_cur_bytes(struct request *rq);

/*
- * blk_end_request() and friends.
- * __blk_end_request() and end_request() must be called with
- * the request queue spinlock acquired.
+ * Request completion related functions.
+ *
+ * blk_update_request() completes given number of bytes and updates
+ * the request without completing it.
+ *
+ * blk_end_request() and friends. __blk_end_request() and
+ * end_request() must be called with the request queue spinlock
+ * acquired.
*
* Several drivers define their own end_request and call
* blk_end_request() for parts of the original function.
* This prevents code duplication in drivers.
*/
-extern int blk_end_request(struct request *rq, int error,
- unsigned int nr_bytes);
-extern int __blk_end_request(struct request *rq, int error,
- unsigned int nr_bytes);
-extern int blk_end_bidi_request(struct request *rq, int error,
- unsigned int nr_bytes, unsigned int bidi_bytes);
-extern void end_request(struct request *, int);
+extern bool blk_update_request(struct request *rq, int error,
+ unsigned int nr_bytes);
+
+/* internal function, subject to change, don't ever use directly */
+extern bool __blk_end_io(struct request *rq, int error,
+ unsigned int nr_bytes, unsigned int bidi_bytes,
+ bool locked);
+
+/**
+ * blk_end_request - Helper function for drivers to complete the request.
+ * @rq: the request being processed
+ * @error: %0 for success, < %0 for error
+ * @nr_bytes: number of bytes to complete
+ *
+ * Description:
+ * Ends I/O on a number of bytes attached to @rq.
+ * If @rq has leftover, sets it up for the next range of segments.
+ *
+ * Return:
+ * %false - we are done with this request
+ * %true - still buffers pending for this request
+ **/
+static inline bool blk_end_request(struct request *rq, int error,
+ unsigned int nr_bytes)
+{
+ return __blk_end_io(rq, error, nr_bytes, 0, false);
+}
+
+/**
+ * __blk_end_request - Helper function for drivers to complete the request.
+ * @rq: the request being processed
+ * @error: %0 for success, < %0 for error
+ * @nr_bytes: number of bytes to complete
+ *
+ * Description:
+ * Must be called with queue lock held unlike blk_end_request().
+ *
+ * Return:
+ * %false - we are done with this request
+ * %true - still buffers pending for this request
+ **/
+static inline bool __blk_end_request(struct request *rq, int error,
+ unsigned int nr_bytes)
+{
+ return __blk_end_io(rq, error, nr_bytes, 0, true);
+}
+
+/**
+ * blk_end_bidi_request - Helper function for drivers to complete bidi request.
+ * @rq: the bidi request being processed
+ * @error: %0 for success, < %0 for error
+ * @nr_bytes: number of bytes to complete @rq
+ * @bidi_bytes: number of bytes to complete @rq->next_rq
+ *
+ * Description:
+ * Ends I/O on a number of bytes attached to @rq and @rq->next_rq.
+ *
+ * Return:
+ * %false - we are done with this request
+ * %true - still buffers pending for this request
+ **/
+static inline bool blk_end_bidi_request(struct request *rq, int error,
+ unsigned int nr_bytes,
+ unsigned int bidi_bytes)
+{
+ return __blk_end_io(rq, error, nr_bytes, bidi_bytes, false);
+}
+
+/**
+ * end_request - end I/O on the current segment of the request
+ * @rq: the request being processed
+ * @uptodate: error value or %0/%1 uptodate flag
+ *
+ * Description:
+ * Ends I/O on the current segment of a request. If that is the only
+ * remaining segment, the request is also completed and freed.
+ *
+ * This is a remnant of how older block drivers handled I/O completions.
+ * Modern drivers typically end I/O on the full request in one go, unless
+ * they have a residual value to account for. For that case this function
+ * isn't really useful, unless the residual just happens to be the
+ * full current segment. In other words, don't use this function in new
+ * code. Use blk_end_request() or __blk_end_request() to end a request.
+ **/
+static inline void end_request(struct request *rq, int uptodate)
+{
+ int error = 0;
+
+ if (uptodate <= 0)
+ error = uptodate ? uptodate : -EIO;
+
+ __blk_end_io(rq, error, rq->hard_cur_sectors << 9, 0, true);
+}
+
extern void blk_complete_request(struct request *);
extern void __blk_complete_request(struct request *);
extern void blk_abort_request(struct request *);
extern void blk_abort_queue(struct request_queue *);
-extern void blk_update_request(struct request *rq, int error,
- unsigned int nr_bytes);

/*
* Access functions for manipulating queue properties
--
1.6.0.2
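
For illustration, a driver using the completion API above typically
does something like this (sketch only; the my_* names are made up):

/* completion from a context that already holds the queue lock */
static void my_complete_locked(struct request *rq, int error)
{
	/* full-length completion must leave nothing pending */
	if (__blk_end_request(rq, error, blk_rq_bytes(rq)))
		BUG();
}

/* completion without the queue lock; blk_end_request() takes it */
static void my_complete(struct request *rq, int error)
{
	if (blk_end_request(rq, error, blk_rq_bytes(rq)))
		BUG();
}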

2009-03-13 05:07:47

by Tejun Heo

[permalink] [raw]
Subject: [PATCH 11/14] block: implement and use [__]blk_end_request_all()

Impact: cleanup

There are many [__]blk_end_request() call sites which call it with
the full request length and expect full completion. Many of them
ensure that the request actually completes by calling BUG_ON() on the
return value, which is awkward and error-prone.

This patch adds [__]blk_end_request_all() which takes @rq and @error
and fully completes the request. BUG_ON() is added to
blk_update_request() to ensure that this actually happens.
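
In the common case the conversion is simply (sketch):

/* before */
if (__blk_end_request(rq, error, blk_rq_bytes(rq)))
	BUG();

/* after */
__blk_end_request_all(rq, error);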

Most conversions are simple but there are a few noteworthy ones.

* cdrom/viocd: viocd_end_request() replaced with direct calls to
__blk_end_request_all().

* s390/block/dasd: dasd_end_request() replaced with direct calls to
__blk_end_request_all().

* s390/char/tape_block: tapeblock_end_request() replaced with direct
calls to blk_end_request_all().

Signed-off-by: Tejun Heo <[email protected]>
Cc: Bartlomiej Zolnierkiewicz <[email protected]>
Cc: Russell King <[email protected]>
Cc: Stephen Rothwell <[email protected]>
Cc: Mike Miller <[email protected]>
Cc: Martin Schwidefsky <[email protected]>
Cc: Jeff Garzik <[email protected]>
Cc: Rusty Russell <[email protected]>
Cc: Jeremy Fitzhardinge <[email protected]>
Cc: Alex Dubov <[email protected]>
Cc: James Bottomley <[email protected]>
---
arch/arm/plat-omap/mailbox.c | 11 +++--------
block/blk-barrier.c | 9 ++-------
block/blk-core.c | 2 +-
block/elevator.c | 2 +-
drivers/block/cciss.c | 3 +--
drivers/block/cpqarray.c | 3 +--
drivers/block/sx8.c | 3 +--
drivers/block/virtio_blk.c | 2 +-
drivers/block/xen-blkfront.c | 4 +---
drivers/cdrom/gdrom.c | 2 +-
drivers/cdrom/viocd.c | 25 ++++---------------------
drivers/ide/ide-cd.c | 14 +++-----------
drivers/ide/ide-io.c | 4 +---
drivers/ide/ide-pm.c | 3 +--
drivers/memstick/core/mspro_block.c | 2 +-
drivers/s390/block/dasd.c | 17 ++++-------------
drivers/s390/char/tape_block.c | 15 ++++-----------
drivers/scsi/scsi_lib.c | 2 +-
include/linux/blkdev.h | 32 ++++++++++++++++++++++++++++++++
19 files changed, 64 insertions(+), 91 deletions(-)

diff --git a/arch/arm/plat-omap/mailbox.c b/arch/arm/plat-omap/mailbox.c
index b52ce05..bb83e84 100644
--- a/arch/arm/plat-omap/mailbox.c
+++ b/arch/arm/plat-omap/mailbox.c
@@ -116,8 +116,7 @@ static void mbox_tx_work(struct work_struct *work)
}

spin_lock(q->queue_lock);
- if (__blk_end_request(rq, 0, 0))
- BUG();
+ __blk_end_request_all(rq, 0);
spin_unlock(q->queue_lock);
}
}
@@ -148,10 +147,7 @@ static void mbox_rx_work(struct work_struct *work)
break;

msg = (mbox_msg_t) rq->data;
-
- if (blk_end_request(rq, 0, 0))
- BUG();
-
+ blk_end_request_all(rq, 0);
mbox->rxq->callback((void *)msg);
}
}
@@ -261,8 +257,7 @@ omap_mbox_read(struct device *dev, struct device_attribute *attr, char *buf)

*p = (mbox_msg_t) rq->data;

- if (blk_end_request(rq, 0, 0))
- BUG();
+ blk_end_request_all(rq, 0);

if (unlikely(mbox_seq_test(mbox, *p))) {
pr_info("mbox: Illegal seq bit!(%08x) ignored\n", *p);
diff --git a/block/blk-barrier.c b/block/blk-barrier.c
index f7dae57..bac1de1 100644
--- a/block/blk-barrier.c
+++ b/block/blk-barrier.c
@@ -106,10 +106,7 @@ bool blk_ordered_complete_seq(struct request_queue *q, unsigned seq, int error)
*/
q->ordseq = 0;
rq = q->orig_bar_rq;
-
- if (__blk_end_request(rq, q->orderr, blk_rq_bytes(rq)))
- BUG();
-
+ __blk_end_request_all(rq, q->orderr);
return true;
}

@@ -252,9 +249,7 @@ bool blk_do_ordered(struct request_queue *q, struct request **rqp)
* with prejudice.
*/
elv_dequeue_request(q, rq);
- if (__blk_end_request(rq, -EOPNOTSUPP,
- blk_rq_bytes(rq)))
- BUG();
+ __blk_end_request_all(rq, -EOPNOTSUPP);
*rqp = NULL;
return false;
}
diff --git a/block/blk-core.c b/block/blk-core.c
index 7d0ab48..f9118c0 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -1770,7 +1770,7 @@ struct request *elv_next_request(struct request_queue *q)
break;
} else if (ret == BLKPREP_KILL) {
rq->cmd_flags |= REQ_QUIET;
- __blk_end_request(rq, -EIO, blk_rq_bytes(rq));
+ __blk_end_request_all(rq, -EIO);
} else {
printk(KERN_ERR "%s: bad return=%d\n", __func__, ret);
break;
diff --git a/block/elevator.c b/block/elevator.c
index fd17605..54d01b8 100644
--- a/block/elevator.c
+++ b/block/elevator.c
@@ -785,7 +785,7 @@ void elv_abort_queue(struct request_queue *q)
rq = list_entry_rq(q->queue_head.next);
rq->cmd_flags |= REQ_QUIET;
trace_block_rq_abort(q, rq);
- __blk_end_request(rq, -EIO, blk_rq_bytes(rq));
+ __blk_end_request_all(rq, -EIO);
}
}
EXPORT_SYMBOL(elv_abort_queue);
diff --git a/drivers/block/cciss.c b/drivers/block/cciss.c
index 5d0e135..b78339f 100644
--- a/drivers/block/cciss.c
+++ b/drivers/block/cciss.c
@@ -1308,8 +1308,7 @@ static void cciss_softirq_done(struct request *rq)
printk("Done with %p\n", rq);
#endif /* CCISS_DEBUG */

- if (blk_end_request(rq, (rq->errors == 0) ? 0 : -EIO, blk_rq_bytes(rq)))
- BUG();
+ blk_end_request_all(rq, (rq->errors == 0) ? 0 : -EIO);

spin_lock_irqsave(&h->lock, flags);
cmd_free(h, cmd, 1);
diff --git a/drivers/block/cpqarray.c b/drivers/block/cpqarray.c
index 5d39df1..473af67 100644
--- a/drivers/block/cpqarray.c
+++ b/drivers/block/cpqarray.c
@@ -1023,8 +1023,7 @@ static inline void complete_command(cmdlist_t *cmd, int timeout)
cmd->req.sg[i].size, ddir);

DBGPX(printk("Done with %p\n", rq););
- if (__blk_end_request(rq, error, blk_rq_bytes(rq)))
- BUG();
+ __blk_end_request_all(rq, error);
}

/*
diff --git a/drivers/block/sx8.c b/drivers/block/sx8.c
index a18e1ca..3ba4437 100644
--- a/drivers/block/sx8.c
+++ b/drivers/block/sx8.c
@@ -749,8 +749,7 @@ static inline void carm_end_request_queued(struct carm_host *host,
struct request *req = crq->rq;
int rc;

- rc = __blk_end_request(req, error, blk_rq_bytes(req));
- assert(rc == 0);
+ __blk_end_request_all(req, error);

rc = carm_put_request(host, crq);
assert(rc == 0);
diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c
index 5d34764..50745e6 100644
--- a/drivers/block/virtio_blk.c
+++ b/drivers/block/virtio_blk.c
@@ -62,7 +62,7 @@ static void blk_done(struct virtqueue *vq)
break;
}

- __blk_end_request(vbr->req, error, blk_rq_bytes(vbr->req));
+ __blk_end_request_all(vbr->req, error);
list_del(&vbr->list);
mempool_free(vbr, vblk->pool);
}
diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
index 8f90508..cd6cfe3 100644
--- a/drivers/block/xen-blkfront.c
+++ b/drivers/block/xen-blkfront.c
@@ -551,7 +551,6 @@ static irqreturn_t blkif_interrupt(int irq, void *dev_id)

for (i = info->ring.rsp_cons; i != rp; i++) {
unsigned long id;
- int ret;

bret = RING_GET_RESPONSE(&info->ring, i);
id = bret->id;
@@ -578,8 +577,7 @@ static irqreturn_t blkif_interrupt(int irq, void *dev_id)
dev_dbg(&info->xbdev->dev, "Bad return from blkdev data "
"request: %x\n", bret->status);

- ret = __blk_end_request(req, error, blk_rq_bytes(req));
- BUG_ON(ret);
+ __blk_end_request_all(req, error);
break;
default:
BUG();
diff --git a/drivers/cdrom/gdrom.c b/drivers/cdrom/gdrom.c
index 2eecb77..fee9a9e 100644
--- a/drivers/cdrom/gdrom.c
+++ b/drivers/cdrom/gdrom.c
@@ -632,7 +632,7 @@ static void gdrom_readdisk_dma(struct work_struct *work)
* before handling ending the request */
spin_lock(&gdrom_lock);
list_del_init(&req->queuelist);
- __blk_end_request(req, err, blk_rq_bytes(req));
+ __blk_end_request_all(req, err);
}
spin_unlock(&gdrom_lock);
kfree(read_command);
diff --git a/drivers/cdrom/viocd.c b/drivers/cdrom/viocd.c
index 1392935..cc3efa0 100644
--- a/drivers/cdrom/viocd.c
+++ b/drivers/cdrom/viocd.c
@@ -291,23 +291,6 @@ static int send_request(struct request *req)
return 0;
}

-static void viocd_end_request(struct request *req, int error)
-{
- int nsectors = req->hard_nr_sectors;
-
- /*
- * Make sure it's fully ended, and ensure that we process
- * at least one sector.
- */
- if (blk_pc_request(req))
- nsectors = (req->data_len + 511) >> 9;
- if (!nsectors)
- nsectors = 1;
-
- if (__blk_end_request(req, error, nsectors << 9))
- BUG();
-}
-
static int rwreq;

static void do_viocd_request(struct request_queue *q)
@@ -316,11 +299,11 @@ static void do_viocd_request(struct request_queue *q)

while ((rwreq == 0) && ((req = elv_next_request(q)) != NULL)) {
if (!blk_fs_request(req))
- viocd_end_request(req, -EIO);
+ __blk_end_request_all(req, -EIO);
else if (send_request(req) < 0) {
printk(VIOCD_KERN_WARNING
"unable to send message to OS/400!");
- viocd_end_request(req, -EIO);
+ __blk_end_request_all(req, -EIO);
} else
rwreq++;
}
@@ -531,9 +514,9 @@ return_complete:
"with rc %d:0x%04X: %s\n",
req, event->xRc,
bevent->sub_result, err->msg);
- viocd_end_request(req, -EIO);
+ __blk_end_request_all(req, -EIO);
} else
- viocd_end_request(req, 0);
+ __blk_end_request_all(req, 0);

/* restart handling of incoming requests */
spin_unlock_irqrestore(&viocd_reqlock, flags);
diff --git a/drivers/ide/ide-cd.c b/drivers/ide/ide-cd.c
index f825d50..8e6705a 100644
--- a/drivers/ide/ide-cd.c
+++ b/drivers/ide/ide-cd.c
@@ -276,11 +276,8 @@ static void cdrom_end_request(ide_drive_t *drive, int uptodate)
if (ide_end_dequeued_request(drive, failed, 0,
failed->hard_nr_sectors))
BUG();
- } else {
- if (blk_end_request(failed, -EIO,
- failed->data_len))
- BUG();
- }
+ } else
+ blk_end_request_all(failed, -EIO);
} else
cdrom_analyze_sense_data(drive, NULL, sense);
}
@@ -950,14 +947,9 @@ static ide_startstop_t cdrom_newpc_intr(ide_drive_t *drive)

end_request:
if (blk_pc_request(rq)) {
- unsigned int dlen = rq->data_len;
-
if (dma)
rq->data_len = 0;
-
- if (blk_end_request(rq, 0, dlen))
- BUG();
-
+ blk_end_request_all(rq, 0);
hwif->rq = NULL;
} else {
if (!uptodate)
diff --git a/drivers/ide/ide-io.c b/drivers/ide/ide-io.c
index a9a6c20..82f46dd 100644
--- a/drivers/ide/ide-io.c
+++ b/drivers/ide/ide-io.c
@@ -190,9 +190,7 @@ void ide_end_drive_cmd (ide_drive_t *drive, u8 stat, u8 err)

rq->errors = err;

- if (unlikely(blk_end_request(rq, (rq->errors ? -EIO : 0),
- blk_rq_bytes(rq))))
- BUG();
+ blk_end_request_all(rq, (rq->errors ? -EIO : 0));
}
EXPORT_SYMBOL(ide_end_drive_cmd);

diff --git a/drivers/ide/ide-pm.c b/drivers/ide/ide-pm.c
index 60538d9..d3d2d29 100644
--- a/drivers/ide/ide-pm.c
+++ b/drivers/ide/ide-pm.c
@@ -194,8 +194,7 @@ void ide_complete_pm_request(ide_drive_t *drive, struct request *rq)

drive->hwif->rq = NULL;

- if (blk_end_request(rq, 0, 0))
- BUG();
+ blk_end_request_all(rq, 0);
}

void ide_check_pm_state(ide_drive_t *drive, struct request *rq)
diff --git a/drivers/memstick/core/mspro_block.c b/drivers/memstick/core/mspro_block.c
index de143de..a416346 100644
--- a/drivers/memstick/core/mspro_block.c
+++ b/drivers/memstick/core/mspro_block.c
@@ -826,7 +826,7 @@ static void mspro_block_submit_req(struct request_queue *q)

if (msb->eject) {
while ((req = elv_next_request(q)) != NULL)
- __blk_end_request(req, -ENODEV, blk_rq_bytes(req));
+ __blk_end_request_all(req, -ENODEV);

return;
}
diff --git a/drivers/s390/block/dasd.c b/drivers/s390/block/dasd.c
index 08c23a9..bc172ee 100644
--- a/drivers/s390/block/dasd.c
+++ b/drivers/s390/block/dasd.c
@@ -1597,15 +1597,6 @@ void dasd_block_clear_timer(struct dasd_block *block)
}

/*
- * posts the buffer_cache about a finalized request
- */
-static inline void dasd_end_request(struct request *req, int error)
-{
- if (__blk_end_request(req, error, blk_rq_bytes(req)))
- BUG();
-}
-
-/*
* Process finished error recovery ccw.
*/
static inline void __dasd_block_process_erp(struct dasd_block *block,
@@ -1659,7 +1650,7 @@ static void __dasd_process_request_queue(struct dasd_block *block)
"Rejecting write request %p",
req);
blkdev_dequeue_request(req);
- dasd_end_request(req, -EIO);
+ __blk_end_request_all(req, -EIO);
continue;
}
cqr = basedev->discipline->build_cp(basedev, block, req);
@@ -1688,7 +1679,7 @@ static void __dasd_process_request_queue(struct dasd_block *block)
"on request %p",
PTR_ERR(cqr), req);
blkdev_dequeue_request(req);
- dasd_end_request(req, -EIO);
+ __blk_end_request_all(req, -EIO);
continue;
}
/*
@@ -1714,7 +1705,7 @@ static void __dasd_cleanup_cqr(struct dasd_ccw_req *cqr)
status = cqr->block->base->discipline->free_cp(cqr, req);
if (status <= 0)
error = status ? status : -EIO;
- dasd_end_request(req, error);
+ __blk_end_request_all(req, error);
}

/*
@@ -2020,7 +2011,7 @@ static void dasd_flush_request_queue(struct dasd_block *block)
spin_lock_irq(&block->request_queue_lock);
while ((req = elv_next_request(block->request_queue))) {
blkdev_dequeue_request(req);
- dasd_end_request(req, -EIO);
+ __blk_end_request_all(req, -EIO);
}
spin_unlock_irq(&block->request_queue_lock);
}
diff --git a/drivers/s390/char/tape_block.c b/drivers/s390/char/tape_block.c
index ae18baf..2736291 100644
--- a/drivers/s390/char/tape_block.c
+++ b/drivers/s390/char/tape_block.c
@@ -74,13 +74,6 @@ tapeblock_trigger_requeue(struct tape_device *device)
* Post finished request.
*/
static void
-tapeblock_end_request(struct request *req, int error)
-{
- if (blk_end_request(req, error, blk_rq_bytes(req)))
- BUG();
-}
-
-static void
__tapeblock_end_request(struct tape_request *ccw_req, void *data)
{
struct tape_device *device;
@@ -90,7 +83,7 @@ __tapeblock_end_request(struct tape_request *ccw_req, void *data)

device = ccw_req->device;
req = (struct request *) data;
- tapeblock_end_request(req, (ccw_req->rc == 0) ? 0 : -EIO);
+ blk_end_request_all(req, (ccw_req->rc == 0) ? 0 : -EIO);
if (ccw_req->rc == 0)
/* Update position. */
device->blk_data.block_position =
@@ -118,7 +111,7 @@ tapeblock_start_request(struct tape_device *device, struct request *req)
ccw_req = device->discipline->bread(device, req);
if (IS_ERR(ccw_req)) {
DBF_EVENT(1, "TBLOCK: bread failed\n");
- tapeblock_end_request(req, -EIO);
+ blk_end_request_all(req, -EIO);
return PTR_ERR(ccw_req);
}
ccw_req->callback = __tapeblock_end_request;
@@ -131,7 +124,7 @@ tapeblock_start_request(struct tape_device *device, struct request *req)
* Start/enqueueing failed. No retries in
* this case.
*/
- tapeblock_end_request(req, -EIO);
+ blk_end_request_all(req, -EIO);
device->discipline->free_bread(ccw_req);
}

@@ -177,7 +170,7 @@ tapeblock_requeue(struct work_struct *work) {
DBF_EVENT(1, "TBLOCK: Rejecting write request\n");
blkdev_dequeue_request(req);
spin_unlock_irq(&device->blk_data.request_queue_lock);
- tapeblock_end_request(req, -EIO);
+ blk_end_request_all(req, -EIO);
spin_lock_irq(&device->blk_data.request_queue_lock);
continue;
}
diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c
index b82ffd9..a4e84c6 100644
--- a/drivers/scsi/scsi_lib.c
+++ b/drivers/scsi/scsi_lib.c
@@ -1097,7 +1097,7 @@ void scsi_io_completion(struct scsi_cmnd *cmd, unsigned int good_bytes)
if (driver_byte(result) & DRIVER_SENSE)
scsi_print_sense("", cmd);
}
- blk_end_request(req, -EIO, blk_rq_bytes(req));
+ blk_end_request_all(req, -EIO);
scsi_next_command(cmd);
break;
case ACTION_REPREP:
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index cb2f9ae..6ba7dbf 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -860,6 +860,22 @@ static inline bool blk_end_request(struct request *rq, int error,
}

/**
+ * blk_end_request_all - Helper function for drivers to finish the request.
+ * @rq: the request to finish
+ * @error: %0 for success, < %0 for error
+ *
+ * Description:
+ * Completely finish @rq.
+ */
+static inline void blk_end_request_all(struct request *rq, int error)
+{
+ bool pending;
+
+ pending = blk_end_request(rq, error, blk_rq_bytes(rq));
+ BUG_ON(pending);
+}
+
+/**
* __blk_end_request - Helper function for drivers to complete the request.
* @rq: the request being processed
* @error: %0 for success, < %0 for error
@@ -879,6 +895,22 @@ static inline bool __blk_end_request(struct request *rq, int error,
}

/**
+ * __blk_end_request_all - Helper function for drivers to finish the request.
+ * @rq: the request to finish
+ * @error: %0 for success, < %0 for error
+ *
+ * Description:
+ * Completely finish @rq. Must be called with queue lock held.
+ */
+static inline void __blk_end_request_all(struct request *rq, int error)
+{
+ bool pending;
+
+ pending = __blk_end_request(rq, error, blk_rq_bytes(rq));
+ BUG_ON(pending);
+}
+
+/**
* blk_end_bidi_request - Helper function for drivers to complete bidi request.
* @rq: the bidi request being processed
* @error: %0 for success, < %0 for error
--
1.6.0.2

2009-03-13 05:08:06

by Tejun Heo

[permalink] [raw]
Subject: [PATCH 14/14] block: clean up unnecessary stuff from block drivers

Impact: cleanup

rq_data_dir() can only be READ or WRITE, and rq->sector and nr_sectors
are always updated automatically after partial request completion.
There is no need to check rq_data_dir() for other values or to update
sector and nr_sectors by hand.
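
In other words, driver code of the following shape (sketch; the real
instances are in the diffs below) can simply be deleted:

/* unnecessary: rq_data_dir() is always READ or WRITE */
if (rq_data_dir(rq) != READ && rq_data_dir(rq) != WRITE) {
	printk(KERN_WARNING "unknown command\n");
	__blk_end_request_all(rq, -EIO);
	return;
}

/* unnecessary: the block layer updates these on partial completion */
rq->nr_sectors -= rq->current_nr_sectors;
rq->sector += rq->current_nr_sectors;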

Signed-off-by: Tejun Heo <[email protected]>
Cc: Jörg Dorchain <[email protected]>
Cc: Geert Uytterhoeven <[email protected]>
---
drivers/block/amiflop.c | 7 -------
drivers/block/ataflop.c | 4 ----
drivers/block/xd.c | 9 ++-------
3 files changed, 2 insertions(+), 18 deletions(-)

diff --git a/drivers/block/amiflop.c b/drivers/block/amiflop.c
index 163750e..72ee010 100644
--- a/drivers/block/amiflop.c
+++ b/drivers/block/amiflop.c
@@ -1371,11 +1371,6 @@ static void redo_fd_request(void)
"0x%08lx\n", track, sector, data);
#endif

- if ((rq_data_dir(CURRENT) != READ) && (rq_data_dir(CURRENT) != WRITE)) {
- printk(KERN_WARNING "do_fd_request: unknown command\n");
- __blk_end_request_all(CURRENT, -EIO);
- goto repeat;
- }
if (get_track(drive, track) == -1) {
__blk_end_request_all(CURRENT, -EIO);
goto repeat;
@@ -1407,8 +1402,6 @@ static void redo_fd_request(void)
break;
}
}
- CURRENT->nr_sectors -= CURRENT->current_nr_sectors;
- CURRENT->sector += CURRENT->current_nr_sectors;

__blk_end_request_all(CURRENT, 0);
goto repeat;
diff --git a/drivers/block/ataflop.c b/drivers/block/ataflop.c
index c9844f0..d19c9d6 100644
--- a/drivers/block/ataflop.c
+++ b/drivers/block/ataflop.c
@@ -732,8 +732,6 @@ static void do_fd_action( int drive )
}
else {
/* all sectors finished */
- CURRENT->nr_sectors -= CURRENT->current_nr_sectors;
- CURRENT->sector += CURRENT->current_nr_sectors;
__blk_end_request_all(CURRENT, 0);
redo_fd_request();
return;
@@ -1139,8 +1137,6 @@ static void fd_rwsec_done1(int status)
}
else {
/* all sectors finished */
- CURRENT->nr_sectors -= CURRENT->current_nr_sectors;
- CURRENT->sector += CURRENT->current_nr_sectors;
__blk_end_request_all(CURRENT, 0);
redo_fd_request();
}
diff --git a/drivers/block/xd.c b/drivers/block/xd.c
index 291ddc3..85d8ef3 100644
--- a/drivers/block/xd.c
+++ b/drivers/block/xd.c
@@ -308,7 +308,6 @@ static void do_xd_request (struct request_queue * q)
while ((req = elv_next_request(q)) != NULL) {
unsigned block = req->sector;
unsigned count = req->nr_sectors;
- int rw = rq_data_dir(req);
XD_INFO *disk = req->rq_disk->private_data;
int res = 0;
int retry;
@@ -321,13 +320,9 @@ static void do_xd_request (struct request_queue * q)
__blk_end_request_all(req, -EIO);
continue;
}
- if (rw != READ && rw != WRITE) {
- printk("do_xd_request: unknown request\n");
- __blk_end_request_all(req, -EIO);
- continue;
- }
for (retry = 0; (retry < XD_RETRIES) && !res; retry++)
- res = xd_readwrite(rw, disk, req->buffer, block, count);
+ res = xd_readwrite(rq_data_dir(req), disk, req->buffer,
+ block, count);
/* wrap up, 0 = success, -errno = fail */
__blk_end_request_all(req, res);
}
--
1.6.0.2

2009-03-13 05:09:59

by Tejun Heo

[permalink] [raw]
Subject: [PATCH 12/14] block: kill end_request()

Impact: kill obsolete interface function

end_request() has been kept around for backward compatibility;
however, it seems to be about time for it to go away.

* There aren't too many users left.

* Its use of @uptodate is pretty confusing.

* In some cases, newer code ends up using a mixture of end_request()
and [__]blk_end_request[_all](), which is way too confusing.

So, kill it.

Most conversions are straightforward. Noteworthy ones are...

* paride/pcd: next_request() updated to take 0/-errno instead of 1/0.

* paride/pf: pf_end_request() and next_request() updated to take
0/-errno instead of 1/0.

* xd: xd_readwrite() updated to return 0/-errno instead of 1/0.

* mtd/mtd_blkdevs: blktrans_discard_request() updated to return
0/-errno instead of 1/0. Unnecessary local variable res
initialization removed from mtd_blktrans_thread().
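
The @uptodate to 0/-errno mapping used throughout the conversions is
(sketch):

/* old: 1 = success, 0 = failure (mapped to -EIO), <0 = -errno */
end_request(rq, 1);
end_request(rq, 0);

/* new: 0 = success, -errno = failure */
__blk_end_request_all(rq, 0);
__blk_end_request_all(rq, -EIO);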

Signed-off-by: Tejun Heo <[email protected]>
Cc: Jörg Dorchain <[email protected]>
Cc: Geert Uytterhoeven <[email protected]>
Cc: Tim Waugh <[email protected]>
Cc: Stephen Rothwell <[email protected]>
Cc: Paul Mackerras <[email protected]>
Cc: Jeremy Fitzhardinge <[email protected]>
Cc: Grant Likely <[email protected]>
Cc: Markus Lidel <[email protected]>
Cc: David Woodhouse <[email protected]>
Cc: Pete Zaitcev <[email protected]>
---
drivers/block/amiflop.c | 10 +++++-----
drivers/block/ataflop.c | 14 +++++++-------
drivers/block/hd.c | 14 +++++++-------
drivers/block/paride/pcd.c | 12 ++++++------
drivers/block/paride/pd.c | 5 +++--
drivers/block/paride/pf.c | 28 ++++++++++++++--------------
drivers/block/ps3disk.c | 6 +++---
drivers/block/swim3.c | 26 +++++++++++++-------------
drivers/block/xd.c | 15 ++++++++-------
drivers/block/xen-blkfront.c | 2 +-
drivers/block/xsysace.c | 4 ++--
drivers/block/z2ram.c | 4 ++--
drivers/cdrom/gdrom.c | 6 +++---
drivers/message/i2o/i2o_block.c | 2 +-
drivers/mtd/mtd_blkdevs.c | 22 +++++++++++-----------
drivers/sbus/char/jsflash.c | 8 ++++----
include/linux/blkdev.h | 31 ++-----------------------------
17 files changed, 92 insertions(+), 117 deletions(-)

diff --git a/drivers/block/amiflop.c b/drivers/block/amiflop.c
index 8df436f..163750e 100644
--- a/drivers/block/amiflop.c
+++ b/drivers/block/amiflop.c
@@ -1359,7 +1359,7 @@ static void redo_fd_request(void)
#endif
block = CURRENT->sector + cnt;
if ((int)block > floppy->blocks) {
- end_request(CURRENT, 0);
+ __blk_end_request_all(CURRENT, -EIO);
goto repeat;
}

@@ -1373,11 +1373,11 @@ static void redo_fd_request(void)

if ((rq_data_dir(CURRENT) != READ) && (rq_data_dir(CURRENT) != WRITE)) {
printk(KERN_WARNING "do_fd_request: unknown command\n");
- end_request(CURRENT, 0);
+ __blk_end_request_all(CURRENT, -EIO);
goto repeat;
}
if (get_track(drive, track) == -1) {
- end_request(CURRENT, 0);
+ __blk_end_request_all(CURRENT, -EIO);
goto repeat;
}

@@ -1391,7 +1391,7 @@ static void redo_fd_request(void)

/* keep the drive spinning while writes are scheduled */
if (!fd_motor_on(drive)) {
- end_request(CURRENT, 0);
+ __blk_end_request_all(CURRENT, -EIO);
goto repeat;
}
/*
@@ -1410,7 +1410,7 @@ static void redo_fd_request(void)
CURRENT->nr_sectors -= CURRENT->current_nr_sectors;
CURRENT->sector += CURRENT->current_nr_sectors;

- end_request(CURRENT, 1);
+ __blk_end_request_all(CURRENT, 0);
goto repeat;
}

diff --git a/drivers/block/ataflop.c b/drivers/block/ataflop.c
index 4234c11..c9844f0 100644
--- a/drivers/block/ataflop.c
+++ b/drivers/block/ataflop.c
@@ -612,7 +612,7 @@ static void fd_error( void )
CURRENT->errors++;
if (CURRENT->errors >= MAX_ERRORS) {
printk(KERN_ERR "fd%d: too many errors.\n", SelectedDrive );
- end_request(CURRENT, 0);
+ __blk_end_request_all(CURRENT, -EIO);
}
else if (CURRENT->errors == RECALIBRATE_ERRORS) {
printk(KERN_WARNING "fd%d: recalibrating\n", SelectedDrive );
@@ -734,7 +734,7 @@ static void do_fd_action( int drive )
/* all sectors finished */
CURRENT->nr_sectors -= CURRENT->current_nr_sectors;
CURRENT->sector += CURRENT->current_nr_sectors;
- end_request(CURRENT, 1);
+ __blk_end_request_all(CURRENT, 0);
redo_fd_request();
return;
}
@@ -1141,7 +1141,7 @@ static void fd_rwsec_done1(int status)
/* all sectors finished */
CURRENT->nr_sectors -= CURRENT->current_nr_sectors;
CURRENT->sector += CURRENT->current_nr_sectors;
- end_request(CURRENT, 1);
+ __blk_end_request_all(CURRENT, 0);
redo_fd_request();
}
return;
@@ -1414,7 +1414,7 @@ repeat:
if (!UD.connected) {
/* drive not connected */
printk(KERN_ERR "Unknown Device: fd%d\n", drive );
- end_request(CURRENT, 0);
+ __blk_end_request_all(CURRENT, -EIO);
goto repeat;
}

@@ -1430,12 +1430,12 @@ repeat:
/* user supplied disk type */
if (--type >= NUM_DISK_MINORS) {
printk(KERN_WARNING "fd%d: invalid disk format", drive );
- end_request(CURRENT, 0);
+ __blk_end_request_all(CURRENT, -EIO);
goto repeat;
}
if (minor2disktype[type].drive_types > DriveType) {
printk(KERN_WARNING "fd%d: unsupported disk format", drive );
- end_request(CURRENT, 0);
+ __blk_end_request_all(CURRENT, -EIO);
goto repeat;
}
type = minor2disktype[type].index;
@@ -1445,7 +1445,7 @@ repeat:
}

if (CURRENT->sector + 1 > UDT->blocks) {
- end_request(CURRENT, 0);
+ __blk_end_request_all(CURRENT, -EIO);
goto repeat;
}

diff --git a/drivers/block/hd.c b/drivers/block/hd.c
index 482c0c4..3fc066f 100644
--- a/drivers/block/hd.c
+++ b/drivers/block/hd.c
@@ -408,7 +408,7 @@ static void bad_rw_intr(void)
if (req != NULL) {
struct hd_i_struct *disk = req->rq_disk->private_data;
if (++req->errors >= MAX_ERRORS || (hd_error & BBD_ERR)) {
- end_request(req, 0);
+ __blk_end_request_all(req, -EIO);
disk->special_op = disk->recalibrate = 1;
} else if (req->errors % RESET_FREQ == 0)
reset = 1;
@@ -464,7 +464,7 @@ ok_to_read:
req->buffer+512);
#endif
if (req->current_nr_sectors <= 0)
- end_request(req, 1);
+ __blk_end_request_all(req, 0);
if (i > 0) {
SET_HANDLER(&read_intr);
return;
@@ -503,7 +503,7 @@ ok_to_write:
--req->current_nr_sectors;
req->buffer += 512;
if (!i || (req->bio && req->current_nr_sectors <= 0))
- end_request(req, 1);
+ __blk_end_request_all(req, 0);
if (i > 0) {
SET_HANDLER(&write_intr);
outsw(HD_DATA, req->buffer, 256);
@@ -548,7 +548,7 @@ static void hd_times_out(unsigned long dummy)
#ifdef DEBUG
printk("%s: too many errors\n", name);
#endif
- end_request(CURRENT, 0);
+ __blk_end_request_all(CURRENT, -EIO);
}
local_irq_disable();
hd_request();
@@ -564,7 +564,7 @@ static int do_special_op(struct hd_i_struct *disk, struct request *req)
}
if (disk->head > 16) {
printk("%s: cannot handle device with more than 16 heads - giving up\n", req->rq_disk->disk_name);
- end_request(req, 0);
+ __blk_end_request_all(req, -EIO);
}
disk->special_op = 0;
return 1;
@@ -610,7 +610,7 @@ repeat:
((block+nsect) > get_capacity(req->rq_disk))) {
printk("%s: bad access: block=%d, count=%d\n",
req->rq_disk->disk_name, block, nsect);
- end_request(req, 0);
+ __blk_end_request_all(req, -EIO);
goto repeat;
}

@@ -650,7 +650,7 @@ repeat:
break;
default:
printk("unknown hd-command\n");
- end_request(req, 0);
+ __blk_end_request_all(req, -EIO);
break;
}
}
diff --git a/drivers/block/paride/pcd.c b/drivers/block/paride/pcd.c
index e91d4b4..0ee886c 100644
--- a/drivers/block/paride/pcd.c
+++ b/drivers/block/paride/pcd.c
@@ -735,16 +735,16 @@ static void do_pcd_request(struct request_queue * q)
ps_set_intr(do_pcd_read, NULL, 0, nice);
return;
} else
- end_request(pcd_req, 0);
+ __blk_end_request_all(pcd_req, -EIO);
}
}

-static inline void next_request(int success)
+static inline void next_request(int err)
{
unsigned long saved_flags;

spin_lock_irqsave(&pcd_lock, saved_flags);
- end_request(pcd_req, success);
+ __blk_end_request_all(pcd_req, err);
pcd_busy = 0;
do_pcd_request(pcd_queue);
spin_unlock_irqrestore(&pcd_lock, saved_flags);
@@ -781,7 +781,7 @@ static void pcd_start(void)

if (pcd_command(pcd_current, rd_cmd, 2048, "read block")) {
pcd_bufblk = -1;
- next_request(0);
+ next_request(-EIO);
return;
}

@@ -796,7 +796,7 @@ static void do_pcd_read(void)
pcd_retries = 0;
pcd_transfer();
if (!pcd_count) {
- next_request(1);
+ next_request(0);
return;
}

@@ -815,7 +815,7 @@ static void do_pcd_read_drq(void)
return;
}
pcd_bufblk = -1;
- next_request(0);
+ next_request(-EIO);
return;
}

diff --git a/drivers/block/paride/pd.c b/drivers/block/paride/pd.c
index 9299455..1b8f001 100644
--- a/drivers/block/paride/pd.c
+++ b/drivers/block/paride/pd.c
@@ -410,7 +410,8 @@ static void run_fsm(void)
pd_claimed = 0;
phase = NULL;
spin_lock_irqsave(&pd_lock, saved_flags);
- end_request(pd_req, res);
+ __blk_end_request_all(pd_req,
+ res == Ok ? 0 : -EIO);
pd_req = elv_next_request(pd_queue);
if (!pd_req)
stop = 1;
@@ -477,7 +478,7 @@ static int pd_next_buf(void)
if (pd_count)
return 0;
spin_lock_irqsave(&pd_lock, saved_flags);
- end_request(pd_req, 1);
+ __blk_end_request_all(pd_req, 0);
pd_count = pd_req->current_nr_sectors;
pd_buf = pd_req->buffer;
spin_unlock_irqrestore(&pd_lock, saved_flags);
diff --git a/drivers/block/paride/pf.c b/drivers/block/paride/pf.c
index bef3b99..bb51218 100644
--- a/drivers/block/paride/pf.c
+++ b/drivers/block/paride/pf.c
@@ -750,10 +750,10 @@ static int pf_ready(void)

static struct request_queue *pf_queue;

-static void pf_end_request(int uptodate)
+static void pf_end_request(int err)
{
if (pf_req) {
- end_request(pf_req, uptodate);
+ __blk_end_request_all(pf_req, err);
pf_req = NULL;
}
}
@@ -773,7 +773,7 @@ repeat:
pf_count = pf_req->current_nr_sectors;

if (pf_block + pf_count > get_capacity(pf_req->rq_disk)) {
- pf_end_request(0);
+ pf_end_request(-EIO);
goto repeat;
}

@@ -788,7 +788,7 @@ repeat:
pi_do_claimed(pf_current->pi, do_pf_write);
else {
pf_busy = 0;
- pf_end_request(0);
+ pf_end_request(-EIO);
goto repeat;
}
}
@@ -805,7 +805,7 @@ static int pf_next_buf(void)
return 1;
if (!pf_count) {
spin_lock_irqsave(&pf_spin_lock, saved_flags);
- pf_end_request(1);
+ pf_end_request(0);
pf_req = elv_next_request(pf_queue);
spin_unlock_irqrestore(&pf_spin_lock, saved_flags);
if (!pf_req)
@@ -816,12 +816,12 @@ static int pf_next_buf(void)
return 0;
}

-static inline void next_request(int success)
+static inline void next_request(int err)
{
unsigned long saved_flags;

spin_lock_irqsave(&pf_spin_lock, saved_flags);
- pf_end_request(success);
+ pf_end_request(err);
pf_busy = 0;
do_pf_request(pf_queue);
spin_unlock_irqrestore(&pf_spin_lock, saved_flags);
@@ -844,7 +844,7 @@ static void do_pf_read_start(void)
pi_do_claimed(pf_current->pi, do_pf_read_start);
return;
}
- next_request(0);
+ next_request(-EIO);
return;
}
pf_mask = STAT_DRQ;
@@ -863,7 +863,7 @@ static void do_pf_read_drq(void)
pi_do_claimed(pf_current->pi, do_pf_read_start);
return;
}
- next_request(0);
+ next_request(-EIO);
return;
}
pi_read_block(pf_current->pi, pf_buf, 512);
@@ -871,7 +871,7 @@ static void do_pf_read_drq(void)
break;
}
pi_disconnect(pf_current->pi);
- next_request(1);
+ next_request(0);
}

static void do_pf_write(void)
@@ -890,7 +890,7 @@ static void do_pf_write_start(void)
pi_do_claimed(pf_current->pi, do_pf_write_start);
return;
}
- next_request(0);
+ next_request(-EIO);
return;
}

@@ -903,7 +903,7 @@ static void do_pf_write_start(void)
pi_do_claimed(pf_current->pi, do_pf_write_start);
return;
}
- next_request(0);
+ next_request(-EIO);
return;
}
pi_write_block(pf_current->pi, pf_buf, 512);
@@ -923,11 +923,11 @@ static void do_pf_write_done(void)
pi_do_claimed(pf_current->pi, do_pf_write_start);
return;
}
- next_request(0);
+ next_request(-EIO);
return;
}
pi_disconnect(pf_current->pi);
- next_request(1);
+ next_request(0);
}

static int __init pf_init(void)
diff --git a/drivers/block/ps3disk.c b/drivers/block/ps3disk.c
index bccc42b..896d0d1 100644
--- a/drivers/block/ps3disk.c
+++ b/drivers/block/ps3disk.c
@@ -158,7 +158,7 @@ static int ps3disk_submit_request_sg(struct ps3_storage_device *dev,
if (res) {
dev_err(&dev->sbd.core, "%s:%u: %s failed %d\n", __func__,
__LINE__, op, res);
- end_request(req, 0);
+ __blk_end_request_all(req, -EIO);
return 0;
}

@@ -180,7 +180,7 @@ static int ps3disk_submit_flush_request(struct ps3_storage_device *dev,
if (res) {
dev_err(&dev->sbd.core, "%s:%u: sync cache failed 0x%llx\n",
__func__, __LINE__, res);
- end_request(req, 0);
+ __blk_end_request_all(req, -EIO);
return 0;
}

@@ -205,7 +205,7 @@ static void ps3disk_do_request(struct ps3_storage_device *dev,
break;
} else {
blk_dump_rq_flags(req, DEVICE_NAME " bad request");
- end_request(req, 0);
+ __blk_end_request_all(req, -EIO);
continue;
}
}
diff --git a/drivers/block/swim3.c b/drivers/block/swim3.c
index 6129653..f661057 100644
--- a/drivers/block/swim3.c
+++ b/drivers/block/swim3.c
@@ -320,15 +320,15 @@ static void start_request(struct floppy_state *fs)
#endif

if (req->sector < 0 || req->sector >= fs->total_secs) {
- end_request(req, 0);
+ __blk_end_request_all(req, -EIO);
continue;
}
if (req->current_nr_sectors == 0) {
- end_request(req, 1);
+ __blk_end_request_all(req, 0);
continue;
}
if (fs->ejected) {
- end_request(req, 0);
+ __blk_end_request_all(req, -EIO);
continue;
}

@@ -336,7 +336,7 @@ static void start_request(struct floppy_state *fs)
if (fs->write_prot < 0)
fs->write_prot = swim3_readbit(fs, WRITE_PROT);
if (fs->write_prot) {
- end_request(req, 0);
+ __blk_end_request_all(req, -EIO);
continue;
}
}
@@ -508,7 +508,7 @@ static void act(struct floppy_state *fs)
case do_transfer:
if (fs->cur_cyl != fs->req_cyl) {
if (fs->retries > 5) {
- end_request(fd_req, 0);
+ __blk_end_request_all(fd_req, -EIO);
fs->state = idle;
return;
}
@@ -540,7 +540,7 @@ static void scan_timeout(unsigned long data)
out_8(&sw->intr_enable, 0);
fs->cur_cyl = -1;
if (fs->retries > 5) {
- end_request(fd_req, 0);
+ __blk_end_request_all(fd_req, -EIO);
fs->state = idle;
start_request(fs);
} else {
@@ -559,7 +559,7 @@ static void seek_timeout(unsigned long data)
out_8(&sw->select, RELAX);
out_8(&sw->intr_enable, 0);
printk(KERN_ERR "swim3: seek timeout\n");
- end_request(fd_req, 0);
+ __blk_end_request_all(fd_req, -EIO);
fs->state = idle;
start_request(fs);
}
@@ -583,7 +583,7 @@ static void settle_timeout(unsigned long data)
return;
}
printk(KERN_ERR "swim3: seek settle timeout\n");
- end_request(fd_req, 0);
+ __blk_end_request_all(fd_req, -EIO);
fs->state = idle;
start_request(fs);
}
@@ -615,7 +615,7 @@ static void xfer_timeout(unsigned long data)
fd_req->current_nr_sectors -= s;
printk(KERN_ERR "swim3: timeout %sing sector %ld\n",
(rq_data_dir(fd_req)==WRITE? "writ": "read"), (long)fd_req->sector);
- end_request(fd_req, 0);
+ __blk_end_request_all(fd_req, -EIO);
fs->state = idle;
start_request(fs);
}
@@ -646,7 +646,7 @@ static irqreturn_t swim3_interrupt(int irq, void *dev_id)
printk(KERN_ERR "swim3: seen sector but cyl=ff?\n");
fs->cur_cyl = -1;
if (fs->retries > 5) {
- end_request(fd_req, 0);
+ __blk_end_request_all(fd_req, -EIO);
fs->state = idle;
start_request(fs);
} else {
@@ -731,7 +731,7 @@ static irqreturn_t swim3_interrupt(int irq, void *dev_id)
printk("swim3: error %sing block %ld (err=%x)\n",
rq_data_dir(fd_req) == WRITE? "writ": "read",
(long)fd_req->sector, err);
- end_request(fd_req, 0);
+ __blk_end_request_all(fd_req, -EIO);
fs->state = idle;
}
} else {
@@ -740,7 +740,7 @@ static irqreturn_t swim3_interrupt(int irq, void *dev_id)
printk(KERN_ERR "swim3: fd dma: stat=%x resid=%d\n", stat, resid);
printk(KERN_ERR " state=%d, dir=%x, intr=%x, err=%x\n",
fs->state, rq_data_dir(fd_req), intr, err);
- end_request(fd_req, 0);
+ __blk_end_request_all(fd_req, -EIO);
fs->state = idle;
start_request(fs);
break;
@@ -749,7 +749,7 @@ static irqreturn_t swim3_interrupt(int irq, void *dev_id)
fd_req->current_nr_sectors -= fs->scount;
fd_req->buffer += fs->scount * 512;
if (fd_req->current_nr_sectors <= 0) {
- end_request(fd_req, 1);
+ __blk_end_request_all(fd_req, 0);
fs->state = idle;
} else {
fs->req_sector += fs->scount;
diff --git a/drivers/block/xd.c b/drivers/block/xd.c
index 64b496f..291ddc3 100644
--- a/drivers/block/xd.c
+++ b/drivers/block/xd.c
@@ -314,21 +314,22 @@ static void do_xd_request (struct request_queue * q)
int retry;

if (!blk_fs_request(req)) {
- end_request(req, 0);
+ __blk_end_request_all(req, -EIO);
continue;
}
if (block + count > get_capacity(req->rq_disk)) {
- end_request(req, 0);
+ __blk_end_request_all(req, -EIO);
continue;
}
if (rw != READ && rw != WRITE) {
printk("do_xd_request: unknown request\n");
- end_request(req, 0);
+ __blk_end_request_all(req, -EIO);
continue;
}
for (retry = 0; (retry < XD_RETRIES) && !res; retry++)
res = xd_readwrite(rw, disk, req->buffer, block, count);
- end_request(req, res); /* wrap up, 0 = fail, 1 = success */
+ /* wrap up, 0 = success, -errno = fail */
+ __blk_end_request_all(req, res);
}
}

@@ -418,7 +419,7 @@ static int xd_readwrite (u_char operation,XD_INFO *p,char *buffer,u_int block,u_
printk("xd%c: %s timeout, recalibrating drive\n",'a'+drive,(operation == READ ? "read" : "write"));
xd_recalibrate(drive);
spin_lock_irq(&xd_lock);
- return (0);
+ return -EIO;
case 2:
if (sense[0] & 0x30) {
printk("xd%c: %s - ",'a'+drive,(operation == READ ? "reading" : "writing"));
@@ -439,7 +440,7 @@ static int xd_readwrite (u_char operation,XD_INFO *p,char *buffer,u_int block,u_
else
printk(" - no valid disk address\n");
spin_lock_irq(&xd_lock);
- return (0);
+ return -EIO;
}
if (xd_dma_buffer)
for (i=0; i < (temp * 0x200); i++)
@@ -448,7 +449,7 @@ static int xd_readwrite (u_char operation,XD_INFO *p,char *buffer,u_int block,u_
count -= temp, buffer += temp * 0x200, block += temp;
}
spin_lock_irq(&xd_lock);
- return (1);
+ return 0;
}

/* xd_recalibrate: recalibrate a given drive and reset controller if necessary */
diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
index cd6cfe3..01efaaa 100644
--- a/drivers/block/xen-blkfront.c
+++ b/drivers/block/xen-blkfront.c
@@ -302,7 +302,7 @@ static void do_blkif_request(struct request_queue *rq)
while ((req = elv_next_request(rq)) != NULL) {
info = req->rq_disk->private_data;
if (!blk_fs_request(req)) {
- end_request(req, 0);
+ __blk_end_request_all(req, -EIO);
continue;
}

diff --git a/drivers/block/xsysace.c b/drivers/block/xsysace.c
index 119be34..d6a3da9 100644
--- a/drivers/block/xsysace.c
+++ b/drivers/block/xsysace.c
@@ -472,7 +472,7 @@ struct request *ace_get_next_request(struct request_queue * q)
while ((req = elv_next_request(q)) != NULL) {
if (blk_fs_request(req))
break;
- end_request(req, 0);
+ __blk_end_request_all(req, -EIO);
}
return req;
}
@@ -500,7 +500,7 @@ static void ace_fsm_dostate(struct ace_device *ace)

/* Drop all pending requests */
while ((req = elv_next_request(ace->queue)) != NULL)
- end_request(req, 0);
+ __blk_end_request_all(req, -EIO);

/* Drop back to IDLE state and notify waiters */
ace->fsm_state = ACE_FSM_STATE_IDLE;
diff --git a/drivers/block/z2ram.c b/drivers/block/z2ram.c
index 80754cd..4172f2c 100644
--- a/drivers/block/z2ram.c
+++ b/drivers/block/z2ram.c
@@ -77,7 +77,7 @@ static void do_z2_request(struct request_queue *q)
if (start + len > z2ram_size) {
printk( KERN_ERR DEVICE_NAME ": bad access: block=%lu, count=%u\n",
req->sector, req->current_nr_sectors);
- end_request(req, 0);
+ __blk_end_request_all(req, -EIO);
continue;
}
while (len) {
@@ -93,7 +93,7 @@ static void do_z2_request(struct request_queue *q)
start += size;
len -= size;
}
- end_request(req, 1);
+ __blk_end_request_all(req, 0);
}
}

diff --git a/drivers/cdrom/gdrom.c b/drivers/cdrom/gdrom.c
index fee9a9e..c782778 100644
--- a/drivers/cdrom/gdrom.c
+++ b/drivers/cdrom/gdrom.c
@@ -654,17 +654,17 @@ static void gdrom_request(struct request_queue *rq)
while ((req = elv_next_request(rq)) != NULL) {
if (!blk_fs_request(req)) {
printk(KERN_DEBUG "GDROM: Non-fs request ignored\n");
- end_request(req, 0);
+ __blk_end_request_all(req, -EIO);
}
if (rq_data_dir(req) != READ) {
printk(KERN_NOTICE "GDROM: Read only device -");
printk(" write request ignored\n");
- end_request(req, 0);
+ __blk_end_request_all(req, -EIO);
}
if (req->nr_sectors)
gdrom_request_handler_dma(req);
else
- end_request(req, 0);
+ __blk_end_request_all(req, -EIO);
}
}

diff --git a/drivers/message/i2o/i2o_block.c b/drivers/message/i2o/i2o_block.c
index a443e13..3b03eef 100644
--- a/drivers/message/i2o/i2o_block.c
+++ b/drivers/message/i2o/i2o_block.c
@@ -923,7 +923,7 @@ static void i2o_block_request_fn(struct request_queue *q)
break;
}
} else
- end_request(req, 0);
+ __blk_end_request_all(req, -EIO);
}
};

diff --git a/drivers/mtd/mtd_blkdevs.c b/drivers/mtd/mtd_blkdevs.c
index 1409f01..461b4a8 100644
--- a/drivers/mtd/mtd_blkdevs.c
+++ b/drivers/mtd/mtd_blkdevs.c
@@ -54,33 +54,33 @@ static int do_blktrans_request(struct mtd_blktrans_ops *tr,

if (req->cmd_type == REQ_TYPE_LINUX_BLOCK &&
req->cmd[0] == REQ_LB_OP_DISCARD)
- return !tr->discard(dev, block, nsect);
+ return tr->discard(dev, block, nsect);

if (!blk_fs_request(req))
- return 0;
+ return -EIO;

if (req->sector + req->current_nr_sectors > get_capacity(req->rq_disk))
- return 0;
+ return -EIO;

switch(rq_data_dir(req)) {
case READ:
for (; nsect > 0; nsect--, block++, buf += tr->blksize)
if (tr->readsect(dev, block, buf))
- return 0;
- return 1;
+ return -EIO;
+ return 0;

case WRITE:
if (!tr->writesect)
- return 0;
+ return -EIO;

for (; nsect > 0; nsect--, block++, buf += tr->blksize)
if (tr->writesect(dev, block, buf))
- return 0;
- return 1;
+ return -EIO;
+ return 0;

default:
printk(KERN_NOTICE "Unknown request %u\n", rq_data_dir(req));
- return 0;
+ return -EIO;
}
}

@@ -96,7 +96,7 @@ static int mtd_blktrans_thread(void *arg)
while (!kthread_should_stop()) {
struct request *req;
struct mtd_blktrans_dev *dev;
- int res = 0;
+ int res;

req = elv_next_request(rq);

@@ -119,7 +119,7 @@ static int mtd_blktrans_thread(void *arg)

spin_lock_irq(rq->queue_lock);

- end_request(req, res);
+ __blk_end_request_all(req, res);
}
spin_unlock_irq(rq->queue_lock);

diff --git a/drivers/sbus/char/jsflash.c b/drivers/sbus/char/jsflash.c
index a9a9893..9ef95af 100644
--- a/drivers/sbus/char/jsflash.c
+++ b/drivers/sbus/char/jsflash.c
@@ -195,25 +195,25 @@ static void jsfd_do_request(struct request_queue *q)
size_t len = req->current_nr_sectors << 9;

if ((offset + len) > jdp->dsize) {
- end_request(req, 0);
+ __blk_end_request_all(req, -EIO);
continue;
}

if (rq_data_dir(req) != READ) {
printk(KERN_ERR "jsfd: write\n");
- end_request(req, 0);
+ __blk_end_request_all(req, -EIO);
continue;
}

if ((jdp->dbase & 0xff000000) != 0x20000000) {
printk(KERN_ERR "jsfd: bad base %x\n", (int)jdp->dbase);
- end_request(req, 0);
+ __blk_end_request_all(req, -EIO);
continue;
}

jsfd_read(req->buffer, jdp->dbase + offset, len);

- end_request(req, 1);
+ __blk_end_request_all(req, 0);
}
}

diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 6ba7dbf..ec3e855 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -823,9 +823,8 @@ extern unsigned int blk_rq_cur_bytes(struct request *rq);
* blk_update_request() completes given number of bytes and updates
* the request without completing it.
*
- * blk_end_request() and friends. __blk_end_request() and
- * end_request() must be called with the request queue spinlock
- * acquired.
+ * blk_end_request() and friends. __blk_end_request() must be called
+ * with the request queue spinlock acquired.
*
* Several drivers define their own end_request and call
* blk_end_request() for parts of the original function.
@@ -931,32 +930,6 @@ static inline bool blk_end_bidi_request(struct request *rq, int error,
return __blk_end_io(rq, error, nr_bytes, bidi_bytes, false);
}

-/**
- * end_request - end I/O on the current segment of the request
- * @rq: the request being processed
- * @uptodate: error value or %0/%1 uptodate flag
- *
- * Description:
- * Ends I/O on the current segment of a request. If that is the only
- * remaining segment, the request is also completed and freed.
- *
- * This is a remnant of how older block drivers handled I/O completions.
- * Modern drivers typically end I/O on the full request in one go, unless
- * they have a residual value to account for. For that case this function
- * isn't really useful, unless the residual just happens to be the
- * full current segment. In other words, don't use this function in new
- * code. Use blk_end_request() or __blk_end_request() to end a request.
- **/
-static inline void end_request(struct request *rq, int uptodate)
-{
- int error = 0;
-
- if (uptodate <= 0)
- error = uptodate ? uptodate : -EIO;
-
- __blk_end_io(rq, error, rq->hard_cur_sectors << 9, 0, true);
-}
-
extern void blk_complete_request(struct request *);
extern void __blk_complete_request(struct request *);
extern void blk_abort_request(struct request *);
--
1.6.0.2

Subject: Re: [PATCH 11/14] block: implement and use [__]blk_end_request_all()

On Friday 13 March 2009, Tejun Heo wrote:
> Impact: cleanup
>
> There are many [__]blk_end_request() call sites which call it with
> the full request length and expect full completion. Many of them
> ensure that the request actually completes by calling BUG_ON() on the
> return value, which is awkward and error-prone.
>
> This patch adds [__]blk_end_request_all() which takes @rq and @error
> and fully completes the request. BUG_ON() is added to
> blk_update_request() to ensure that this actually happens.
>
> Most conversions are simple but there are a few noteworthy ones.
>
> * cdrom/viocd: viocd_end_request() replaced with direct calls to
> __blk_end_request_all().
>
> * s390/block/dasd: dasd_end_request() replaced with direct calls to
> __blk_end_request_all().
>
> * s390/char/tape_block: tapeblock_end_request() replaced with direct
> calls to blk_end_request_all().
>
> Signed-off-by: Tejun Heo <[email protected]>
> Cc: Bartlomiej Zolnierkiewicz <[email protected]>

[...]

> diff --git a/drivers/ide/ide-cd.c b/drivers/ide/ide-cd.c
> index f825d50..8e6705a 100644
> --- a/drivers/ide/ide-cd.c
> +++ b/drivers/ide/ide-cd.c

[...]

> @@ -950,14 +947,9 @@ static ide_startstop_t cdrom_newpc_intr(ide_drive_t *drive)
>
> end_request:
> if (blk_pc_request(rq)) {
> - unsigned int dlen = rq->data_len;
> -
> if (dma)
> rq->data_len = 0;

Since rq->data_len is modified here...

> -
> - if (blk_end_request(rq, 0, dlen))
> - BUG();
> -
> + blk_end_request_all(rq, 0);

...this won't fly.

[ IIRC ->data_len modification is needed for SG_IO ]
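
[ Roughly, the problem is ordering: for a packet command request
  blk_rq_bytes() reduces to rq->data_len, so with the above (sketch)

	if (dma)
		rq->data_len = 0;	/* blk_rq_bytes(rq) is now 0... */
	blk_end_request_all(rq, 0);	/* ...completes 0 bytes and trips
					   the BUG_ON() in the helper */
]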

> hwif->rq = NULL;
> } else {
> if (!uptodate)
> diff --git a/drivers/ide/ide-io.c b/drivers/ide/ide-io.c
> index a9a6c20..82f46dd 100644
> --- a/drivers/ide/ide-io.c
> +++ b/drivers/ide/ide-io.c
> @@ -190,9 +190,7 @@ void ide_end_drive_cmd (ide_drive_t *drive, u8 stat, u8 err)
>
> rq->errors = err;
>
> - if (unlikely(blk_end_request(rq, (rq->errors ? -EIO : 0),
> - blk_rq_bytes(rq))))
> - BUG();
> + blk_end_request_all(rq, (rq->errors ? -EIO : 0));

How's about dropping needless parentheses while at it?

> }
> EXPORT_SYMBOL(ide_end_drive_cmd);
>
> diff --git a/drivers/ide/ide-pm.c b/drivers/ide/ide-pm.c
> index 60538d9..d3d2d29 100644
> --- a/drivers/ide/ide-pm.c
> +++ b/drivers/ide/ide-pm.c
> @@ -194,8 +194,7 @@ void ide_complete_pm_request(ide_drive_t *drive, struct request *rq)
>
> drive->hwif->rq = NULL;
>
> - if (blk_end_request(rq, 0, 0))
> - BUG();
> + blk_end_request_all(rq, 0);

0 => ide_rq_bytes() _not_ blk_rq_bytes()

Please convert ide_complete_pm_request() to use blk_rq_bytes() in
the separate pre-patch first.

More generic comment follows -> this patch is guaranteed to clash with
at least linux-next/pata-2.6 tree so why not introduce block layer helpers
now, then push all driver updates through respective driver maintainers
and deal with end_request() later (after all driver updates are in-tree)?

Thanks,
Bart

2009-03-14 02:08:48

by Tejun Heo

[permalink] [raw]
Subject: Re: [GIT PATCH] block: cleanup patches

Tejun Heo wrote:
> Hello,
>
> This patchset is available in the following git tree.
>
> git://git.kernel.org/pub/scm/linux/kernel/git/tj/misc.git block-cleanup
>
> This patchset contains the following 14 cleanup patches.

Jens, once you're okay with the rest of the changes, I'll remove IDE
related stuff and repost the patchset.

Thanks.

--
tejun

2009-03-14 02:08:21

by Tejun Heo

[permalink] [raw]
Subject: Re: [PATCH 11/14] block: implement and use [__]blk_end_request_all()

Hello, Bartlomiej. :-)

Bartlomiej Zolnierkiewicz wrote:
>> end_request:
>> if (blk_pc_request(rq)) {
>> - unsigned int dlen = rq->data_len;
>> -
>> if (dma)
>> rq->data_len = 0;
>
> Since rq->data_len is modified here...
>
>> -
>> - if (blk_end_request(rq, 0, dlen))
>> - BUG();
>> -
>> + blk_end_request_all(rq, 0);
>
> ...this won't fly.
>
> [ IIRC ->data_len modification is needed for SG_IO ]

Thanks for catching this. I'll take a further look into it.

>> @@ -190,9 +190,7 @@ void ide_end_drive_cmd (ide_drive_t *drive, u8 stat, u8 err)
>>
>> rq->errors = err;
>>
>> - if (unlikely(blk_end_request(rq, (rq->errors ? -EIO : 0),
>> - blk_rq_bytes(rq))))
>> - BUG();
>> + blk_end_request_all(rq, (rq->errors ? -EIO : 0));
>
> How's about dropping needless parentheses while at it?

Sure.

>> diff --git a/drivers/ide/ide-pm.c b/drivers/ide/ide-pm.c
>> index 60538d9..d3d2d29 100644
>> --- a/drivers/ide/ide-pm.c
>> +++ b/drivers/ide/ide-pm.c
>> @@ -194,8 +194,7 @@ void ide_complete_pm_request(ide_drive_t *drive, struct request *rq)
>>
>> drive->hwif->rq = NULL;
>>
>> - if (blk_end_request(rq, 0, 0))
>> - BUG();
>> + blk_end_request_all(rq, 0);
>
> 0 => ide_rq_bytes() _not_ blk_rq_bytes()

Can you elaborate a bit? Isn't the request supposed to always
completely finish here? If the request isn't zero-length, completion
with 0 byte length doesn't make any sense.

> Please convert ide_complete_pm_request() to use blk_rq_bytes() in
> the separate pre-patch first.

Alright, will do.

> More generic comment follows -> this patch is guaranteed to clash
> with at least linux-next/pata-2.6 tree so why not introduce block
> layer helpers now, then push all driver updates through respective
> driver maintainers and deal with end_request() later (after all
> driver updates are in-tree)?

Most of the lld changes being trivial, I was hoping to push things
through blk tree, but IDE seems to be the most intertwined with the
block layer and it's likely to see quite some amount of not-so-trivial
changes to subtle paths. How about pushing !IDE parts into blk tree
and pulling blk into pata-2.6, making IDE related changes there, and
pulling back into blk tree so that further progress can be made?

Thanks.

--
tejun

2009-03-14 02:10:27

by Tejun Heo

[permalink] [raw]
Subject: Re: [PATCH 11/14] block: implement and use [__]blk_end_request_all()

Tejun Heo wrote:
...
>> with at least linux-next/pata-2.6 tree so why not introduce block

Bart, can you please tell me the git vector for the above tree?

Thanks.

--
tejun

Subject: Re: [PATCH 11/14] block: implement and use [__]blk_end_request_all()

On Saturday 14 March 2009, Tejun Heo wrote:
> Hello, Bartlomiej. :-)

Hi!

> Bartlomiej Zolnierkiewicz wrote:

[...]

> >> diff --git a/drivers/ide/ide-pm.c b/drivers/ide/ide-pm.c
> >> index 60538d9..d3d2d29 100644
> >> --- a/drivers/ide/ide-pm.c
> >> +++ b/drivers/ide/ide-pm.c
> >> @@ -194,8 +194,7 @@ void ide_complete_pm_request(ide_drive_t *drive, struct request *rq)
> >>
> >> drive->hwif->rq = NULL;
> >>
> >> - if (blk_end_request(rq, 0, 0))
> >> - BUG();
> >> + blk_end_request_all(rq, 0);
> >
> > 0 => ide_rq_bytes() _not_ blk_rq_bytes()
>
> Can you elaborate a bit? Isn't the request supposed to always
> completely finish here? If the request isn't zero-length, completion
> with 0 byte length doesn't make any sense.

Arghh, just ignore me on this one -- my brain must have already started
switching into power-saving mode...

[ This is blk_end_request() not ide_end_request() call so "0 == 0". ]
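
[ I.e., for the data-less PM request the two calls agree (sketch):

	blk_end_request(rq, 0, 0);			/* old explicit 0 */
	blk_end_request(rq, 0, blk_rq_bytes(rq));	/* what _all() does */

  since blk_rq_bytes(rq) == 0 here. ]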

> > Please convert ide_complete_pm_request() to use blk_rq_bytes() in
> > the separate pre-patch first.
>
> Alright, will do.

No need to; please just add a comment about the 0 => blk_rq_bytes()
conversion so people reviewing the patch will know that it is an
intended change.

> > More generic comment follows -> this patch is guaranteed to clash
> > with at least linux-next/pata-2.6 tree so why not introduce block
> > layer helpers now, then push all driver updates through respective
> > driver maintainers and deal with end_request() later (after all
> > driver updates are in-tree)?
>
> Most of the lld changes being trivial, I was hoping to push things
> through blk tree, but IDE seems to be the most intertwined with the
> block layer and it's likely to see quite some amount of not-so-trivial
> changes to subtle paths. How about pushing !IDE parts into blk tree
> and pulling blk into pata-2.6, making IDE related changes there, and
> pulling back into blk tree so that further progress can be made?

There is a "tiny" problem with this -- pata-2.6 is a quilt tree based on
Linus' tree and it is not going to change for now (for various reasons).

Thanks,
Bart

2009-03-14 19:56:18

by James Bottomley

[permalink] [raw]
Subject: Re: [PATCH 11/14] block: implement and use [__]blk_end_request_all()

On Sat, 2009-03-14 at 20:23 +0100, Bartlomiej Zolnierkiewicz wrote:
> > > More generic comment follows -> this patch is guaranteed to clash
> > > with at least linux-next/pata-2.6 tree so why not introduce block
> > > layer helpers now, then push all driver updates through respective
> > > driver maintainers and deal with end_request() later (after all
> > > driver updates are in-tree)?
> >
> > Most of the lld changes being trivial, I was hoping to push things
> > through blk tree, but IDE seems to be the most intertwined with the
> > block layer and it's likely to see quite some amount of not-so-trivial
> > changes to subtle paths. How about pushing !IDE parts into blk tree
> > and pulling blk into pata-2.6, making IDE related changes there, and
> > pulling back into blk tree so that further progress can be made?
>
> There is a "tiny" problem with this -- pata-2.6 is a quilt tree based on
> Linus' tree and it is not going to change for now (for various reasons).

Actually this one's easily solvable if you base the quilt on the block
tree (just specify it to linux-next in the BASE directive and it will do
the right thing).

What I'd do is actually run two quilts: one based on vanilla and one
based on block and only add block dependent patches to the latter. This
is like running a postmerge git tree (you can only send a pull request
for it after block goes in).

James

Subject: Re: [PATCH 11/14] block: implement and use [__]blk_end_request_all()

On Saturday 14 March 2009, James Bottomley wrote:
> On Sat, 2009-03-14 at 20:23 +0100, Bartlomiej Zolnierkiewicz wrote:
> > > > More generic comment follows -> this patch is guaranteed to clash
> > > > with at least linux-next/pata-2.6 tree so why not introduce block
> > > > layer helpers now, then push all driver updates through respective
> > > > driver maintainers and deal with end_request() later (after all
> > > > driver updates are in-tree)?
> > >
> > > Most of the lld changes being trivial, I was hoping to push things
> > > through blk tree, but IDE seems to be the most intertwined with the
> > > block layer and it's likely to see quite some amount of not-so-trivial
> > > changes to subtle paths. How about pushing !IDE parts into blk tree
> > > and pulling blk into pata-2.6, making IDE related changes there, and
> > > pulling back into blk tree so that further progress can be made?
> >
> > There is a "tiny" problem with this -- pata-2.6 is a quilt tree based on
> > Linus' tree and it is not going to change for now (for various reasons).
>
> Actually this one's easily solvable if you base the quilt on the block
> tree (just specify it to linux-next in the BASE directive and it will do
> the right thing).
>
> What I'd do is actually run two quilts: one based on vanilla and one
> based on block and only add block dependent patches to the latter. This
> is like running a postmerge git tree (you can only send a pull request
> for it after block goes in).

Thanks for the hint, but it sounds like a major pain once you hit
changes touching the same code areas that the block patches do...

Besides, this is guaranteed to increase the workload on my side, so it
won't happen simply because of -ENOTIME.

Thanks,
Bart

2009-03-15 16:45:37

by Jens Axboe

[permalink] [raw]
Subject: Re: [GIT PATCH] block: cleanup patches

On Sat, Mar 14 2009, Tejun Heo wrote:
> Tejun Heo wrote:
> > Hello,
> >
> > This patchset is available in the following git tree.
> >
> > git://git.kernel.org/pub/scm/linux/kernel/git/tj/misc.git block-cleanup
> >
> > This patchset contains the following 14 cleanup patches.
>
> Jens, once you're okay with the rest of the changes, I'll remove IDE
> related stuff and repost the patchset.

I only gave it a quick glance; it looks mostly OK. Though I don't like
putting elv_ prefixed stuff in blk-core. I agree it belongs there,
though; it's really block layer functionality, not io scheduler
interfacing. So rename it blk_ and put compatibility names in the
header file instead.
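
As a rough sketch of what such a compatibility name could look like
(assuming a hypothetical elv_dequeue_request() -> blk_dequeue_request()
rename; this is not code from the patchset):

/* blk-core.c: the implementation moves here under the blk_ prefix */
void blk_dequeue_request(struct request *rq)
{
	list_del_init(&rq->queuelist);
	/* ... rest of the old elv_ implementation ... */
}
EXPORT_SYMBOL(blk_dequeue_request);

/* include/linux/blkdev.h */
extern void blk_dequeue_request(struct request *rq);

/* compatibility name so existing callers keep compiling unchanged */
static inline void elv_dequeue_request(struct request_queue *q,
				       struct request *rq)
{
	blk_dequeue_request(rq);
}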

I'll give it a more thorough review and integration test tomorrow.

--
Jens Axboe

2009-03-15 16:48:51

by Jens Axboe

[permalink] [raw]
Subject: Re: [PATCH 11/14] block: implement and use [__]blk_end_request_all()

On Sat, Mar 14 2009, Bartlomiej Zolnierkiewicz wrote:
> On Saturday 14 March 2009, James Bottomley wrote:
> > On Sat, 2009-03-14 at 20:23 +0100, Bartlomiej Zolnierkiewicz wrote:
> > > > > More generic comment follows -> this patch is guaranteed to clash
> > > > > with at least linux-next/pata-2.6 tree so why not introduce block
> > > > > layer helpers now, then push all driver updates through respective
> > > > > driver maintainers and deal with end_request() later (after all
> > > > > driver updates are in-tree)?
> > > >
> > > > Most of the lld changes being trivial, I was hoping to push things
> > > > through blk tree, but IDE seems to be the most intertwined with the
> > > > block layer and it's likely to see quite some amount of not-so-trivial
> > > > changes to subtle paths. How about pushing !IDE parts into blk tree
> > > > and pulling blk into pata-2.6, making IDE related changes there and
> > > > pulling back into blk tree so that further progress can be made?
> > >
> > > There is a "tiny" problem with this -- pata-2.6 is a quilt tree based on
> > > Linus' tree and it is not going to change for now (for various reasons).
> >
> > Actually this one's easily solvable if you base the quilt on the block
> > tree (just specify it to linux-next in the BASE directive and it will do
> > the right thing).
> >
> > What I'd do is actually run two quilts: one based on vanilla and one
> > based on block and only add block dependent patches to the latter. This
> > is like running a postmerge git tree (you can only send a pull request
> > for it after block goes in).
>
> Thanks for the hint but it sounds like a major pain once you hit some
> changes touching the same code areas that block patches do...
>
> Besides, this is guaranteed to increase the workload on my side, so it
> won't happen simply because of -ENOTIME.

When things collide, it is more work for everyone. But such is life for
middle/core layer changes. Rebasing _really_ should not be a lot of
work. And you are going to have to do it sooner or later, either upfront
or after your patches stop applying because the block changes went
upstream.

The only sane way to handle conflicts like this is from the bottom and
up.

You could try a more helpful approach, Bart.

--
Jens Axboe

Subject: Re: [PATCH 11/14] block: implement and use [__]blk_end_request_all()

On Sunday 15 March 2009, Jens Axboe wrote:
> On Sat, Mar 14 2009, Bartlomiej Zolnierkiewicz wrote:
> > On Saturday 14 March 2009, James Bottomley wrote:
> > > On Sat, 2009-03-14 at 20:23 +0100, Bartlomiej Zolnierkiewicz wrote:
> > > > > > More generic comment follows -> this patch is guaranteed to clash
> > > > > > with at least linux-next/pata-2.6 tree so why not introduce block
> > > > > > layer helpers now, then push all driver updates through respective
> > > > > > driver maintainers and deal with end_request() later (after all
> > > > > > driver updates are in-tree)?
> > > > >
> > > > > Most of the lld changes being trivial, I was hoping to push things
> > > > > through blk tree, but IDE seems to be the most intertwined with the
> > > > > block layer and it's likely to see quite some amount of not-so-trivial
> > > > > changes to subtle paths. How about pushing !IDE parts into blk tree
> > > > > and pulling blk into pata-2.6, making IDE related changes there and
> > > > > pulling back into blk tree so that further progress can be made?
> > > >
> > > > There is a "tiny" problem with this -- pata-2.6 is a quilt tree based on
> > > > Linus' tree and it is not going to change for now (for various reasons).
> > >
> > > Actually this one's easily solvable if you base the quilt on the block
> > > tree (just specify it to linux-next in the BASE directive and it will do
> > > the right thing).
> > >
> > > What I'd do is actually run two quilts: one based on vanilla and one
> > > based on block and only add block dependent patches to the latter. This
> > > is like running a postmerge git tree (you can only send a pull request
> > > for it after block goes in).
> >
> > Thanks for the hint but it sounds like a major pain once you hit some
> > changes touching the same code areas that block patches do...
> >
> > Besides, this is guaranteed to increase the workload on my side, so it
> > won't happen simply because of -ENOTIME.
>
> When things collide, it is more work for everyone. But such is life for
> middle/core layer changes. Rebasing _really_ should not be a lot of
> work. And you are going to have to do it sooner or later, either upfront
> or after your patches stop applying because the block changes went
> upstream.

The task of running the secondary tree is not merely rebasing of patches
(which I already do on a daily basis) as it also involves extra coordination,
testing, updates etc.

Really, no more IDE workload on my side is possible, and this is a fact,
not something to be discussed (unless someone is willing to help with IDE
maintenance tasks or sponsor my kernel work).

> The only sane way to handle conflicts like this is from the bottom and
> up.
>
> You could try a more helpful approach, Bart.

Well, see my initial reply. I proposed the middle-point approach, which would
spread the extra effort across all parties involved and should also result in
better review/testing of changes...

Thanks,
Bart

2009-03-15 18:40:04

by Jens Axboe

[permalink] [raw]
Subject: Re: [PATCH 11/14] block: implement and use [__]blk_end_request_all()

On Sun, Mar 15 2009, Bartlomiej Zolnierkiewicz wrote:
> On Sunday 15 March 2009, Jens Axboe wrote:
> > On Sat, Mar 14 2009, Bartlomiej Zolnierkiewicz wrote:
> > > On Saturday 14 March 2009, James Bottomley wrote:
> > > > On Sat, 2009-03-14 at 20:23 +0100, Bartlomiej Zolnierkiewicz wrote:
> > > > > > > More generic comment follows -> this patch is guaranteed to clash
> > > > > > > with at least linux-next/pata-2.6 tree so why not introduce block
> > > > > > > layer helpers now, then push all driver updates through respective
> > > > > > > driver maintainers and deal with end_request() later (after all
> > > > > > > driver updates are in-tree)?
> > > > > >
> > > > > > Most of the lld changes being trivial, I was hoping to push things
> > > > > > through blk tree, but IDE seems to be the most intertwined with the
> > > > > > block layer and it's likely to see quite some amount of not-so-trivial
> > > > > > changes to subtle paths. How about pushing !IDE parts into blk tree
> > > > > > and pulling blk into pata-2.6, making IDE related changes there and
> > > > > > pulling back into blk tree so that further progress can be made?
> > > > >
> > > > > There is a "tiny" problem with this -- pata-2.6 is a quilt tree based on
> > > > > Linus' tree and it is not going to change for now (for various reasons).
> > > >
> > > > Actually this one's easily solvable if you base the quilt on the block
> > > > tree (just specify it to linux-next in the BASE directive and it will do
> > > > the right thing).
> > > >
> > > > What I'd do is actually run two quilts: one based on vanilla and one
> > > > based on block and only add block dependent patches to the latter. This
> > > > is like running a postmerge git tree (you can only send a pull request
> > > > for it after block goes in).
> > >
> > > Thanks for the hint but it sounds like a major pain once you hit some
> > > changes touching the same code areas that block patches do...
> > >
> > > Besides, this is guaranteed to increase the workload on my side, so it
> > > won't happen simply because of -ENOTIME.
> >
> > When things collide, it is more work for everyone. But such is life for
> > middle/core layer changes. Rebasing _really_ should not be a lot of
> > work. And you are going to have to do it sooner or later, either upfront
> > or after your patches stop applying because the block changes went
> > upstream.
>
> The task of running the secondary tree is not merely rebasing of patches
> (which I already do on a daily basis) as it also involves extra coordination,
> testing, updates etc.

Coordination with whom? If people develop off your pata tree, then there
should be no difference.

> Really, no more IDE workload on my side is possible, and this is a fact,
> not something to be discussed (unless someone is willing to help with IDE
> maintenance tasks or sponsor my kernel work).

Rate of ide/ changes is pretty high, so I'd say you are doing quite well
on that account.

> > The only sane way to handle conflicts like this is from the bottom and
> > up.
> >
> > You could try a more helpful approach, Bart.
>
> Well, see my initial reply. I proposed the middle-point approach,
> which would spread the extra effort across all parties involved and
> should also result in better review/testing of changes...

That approach makes sense for more involved changes. Honestly, all it
would do in this case is slow things down and create more work for Tejun
or me. Potentially a lot, at least a lot more than the little extra
effort it would be to rebase the pata tree.

Bart, we use this approach all the time with the SCSI branch and it has
worked fine. It's not going to change for the ide tree. It's
contradictory to fanning out work at the ends.

--
Jens Axboe

Subject: Re: [PATCH 11/14] block: implement and use [__]blk_end_request_all()

On Sunday 15 March 2009, Jens Axboe wrote:
> On Sun, Mar 15 2009, Bartlomiej Zolnierkiewicz wrote:
> > On Sunday 15 March 2009, Jens Axboe wrote:
> > > On Sat, Mar 14 2009, Bartlomiej Zolnierkiewicz wrote:
> > > > On Saturday 14 March 2009, James Bottomley wrote:
> > > > > On Sat, 2009-03-14 at 20:23 +0100, Bartlomiej Zolnierkiewicz wrote:
> > > > > > > > More generic comment follows -> this patch is guaranteed to clash
> > > > > > > > with at least linux-next/pata-2.6 tree so why not introduce block
> > > > > > > > layer helpers now, then push all driver updates through respective
> > > > > > > > driver maintainers and deal with end_request() later (after all
> > > > > > > > driver updates are in-tree)?
> > > > > > >
> > > > > > > Most of the lld changes being trivial, I was hoping to push things
> > > > > > > through blk tree, but IDE seems to be the most intertwined with the
> > > > > > > block layer and it's likely to see quite some amount of not-so-trivial
> > > > > > > changes to subtle paths. How about pushing !IDE parts into blk tree
> > > > > > > and pulling blk into pata-2.6, making IDE related changes there and
> > > > > > > pulling back into blk tree so that further progress can be made?
> > > > > >
> > > > > > There is a "tiny" problem with this -- pata-2.6 is a quilt tree based on
> > > > > > Linus' tree and it is not going to change for now (for various reasons).
> > > > >
> > > > > Actually this one's easily solvable if you base the quilt on the block
> > > > > tree (just specify it to linux-next in the BASE directive and it will do
> > > > > the right thing).
> > > > >
> > > > > What I'd do is actually run two quilts: one based on vanilla and one
> > > > > based on block and only add block dependent patches to the latter. This
> > > > > is like running a postmerge git tree (you can only send a pull request
> > > > > for it after block goes in).
> > > >
> > > > Thanks for the hint but it sounds like a major pain once you hit some
> > > > changes touching the same code areas that block patches do...
> > > >
> > > > Besides, this is guaranteed to increase the workload on my side, so it
> > > > won't happen simply because of -ENOTIME.
> > >
> > > When things collide, it is more work for everyone. But such is life for
> > > middle/core layer changes. Rebasing _really_ should not be a lot of
> > > work. And you are going to have to do it sooner or later, either upfront
> > > or after your patches stop applying because the block changes went
> > > upstream.
> >
> > The task of running the secondary tree is not merely rebasing of patches
> > (which I already do on a daily basis) as it also involves extra coordination,
> > testing, updates etc.
>
> Coordination with whom? If people develop off your pata tree, then there
> should be no difference.

Coordination between trees.

Moreover, people often develop against linux-next (which is perfectly fine with
the current development model), and after such a change their patches could end
up also depending on block (more work for me to sort out).

> > Really, no more IDE workload on my side is possible, and this is a fact,
> > not something to be discussed (unless someone is willing to help with IDE
> > maintenance tasks or sponsor my kernel work).
>
> Rate of ide/ changes is pretty high, so I'd say you are doing quite well
> on that account.

The historical rate of change has very little to do with the fact that I'm
currently very time constrained.

> > > The only sane way to handle conflicts like this is from the bottom and
> > > up.
> > >
> > > You could try a more helpful approach, Bart.
> >
> > Well, see my initial reply. I proposed the middle-point approach,
> > which would spread the extra effort across all parties involved and
> > should also result in better review/testing of changes...
>
> That approach makes sense for more involved changes. Honestly, all it
> would do in this case is slow things down and create more work for Tejun
> or me. Potentially a lot, at least a lot more than the little extra
> effort it would be to rebase the pata tree.

It could be "the little extra effort" after things settle down but not for
the transition period so asking me to do it now is just plain wrong. Sorry
but we are week or so before merge window and I have to prepare for it!

> Bart, we use this approach all the time with the SCSI branch and it has
> worked fine. It's not going to change for the ide tree. It's
> contradictory to fanning out work at the ends.

I can look into separate trees (or maybe just having one tree based on the
block tree, as discussed before) after 2.6.30-rc1/2.

Thanks,
Bart

2009-03-15 20:48:19

by Jens Axboe

[permalink] [raw]
Subject: Re: [PATCH 11/14] block: implement and use [__]blk_end_request_all()

On Sun, Mar 15 2009, Bartlomiej Zolnierkiewicz wrote:
> > > > > Thanks for the hint but it sounds like a major pain once you hit some
> > > > > changes touching the same code areas that block patches do...
> > > > >
> > > > > Besides, this is guaranteed to increase the workload on my side, so it
> > > > > won't happen simply because of -ENOTIME.
> > > >
> > > > When things collide, it is more work for everyone. But such is life for
> > > > middle/core layer changes. Rebasing _really_ should not be a lot of
> > > > work. And you are going to have to do it sooner or later, either upfront
> > > > or after your patches stop applying because the block changes went
> > > > upstream.
> > >
> > > The task of running the secondary tree is not merely rebasing of patches
> > > (which I already do on a daily basis) as it also involves extra coordination,
> > > testing, updates etc.
> >
> > Coordination with whom? If people develop off your pata tree, then there
> > should be no difference.
>
> Coordination between trees.
>
> Moreover, people often develop against linux-next (which is perfectly
> fine with the current development model), and after such a change their
> patches could end up also depending on block (more work for me to sort
> out).

And the difference being? The block tree is in -next in the first place.
This changeset is not there yet, since I haven't had time to test it.
But the tested stuff is usually there for each iteration.

> > > Really, no more IDE workload on my side is possible, and this is a fact,
> > > not something to be discussed (unless someone is willing to help with IDE
> > > maintenance tasks or sponsor my kernel work).
> >
> > Rate of ide/ changes is pretty high, so I'd say you are doing quite well
> > on that account.
>
> The historical rate of change has very little to do with the fact
> that I'm currently very time constrained.

You didn't mention this being a current problem, but point taken.

> > > > The only sane way to handle conflicts like this is from the bottom and
> > > > up.
> > > >
> > > > You could try a more helpful approach, Bart.
> > >
> > > Well, see my initial reply. I proposed the middle-point approach,
> > > which would spread the extra effort across all parties involved and
> > > should also result in better review/testing of changes...
> >
> > That approach makes sense for more involved changes. Honestly, all it
> > would do in this case is slow things down and create more work for Tejun
> > or me. Potentially a lot, at least a lot more than the little extra
> > effort it would be to rebase the pata tree.
>
> It could be "the little extra effort" after things settle down but not
> for the transition period so asking me to do it now is just plain
> wrong. Sorry but we are week or so before merge window and I have to
> prepare for it!

I'm not _asking_ you to do anything; I'm suggesting ways we can improve
this process. The alternative is basically to sort it out post merge.
And even that is probably going to take LESS time than this email
discussion has already consumed, given the amount of conflicting
material we are dealing with for this merge. Just a reminder, this is
the IDE related diff for this particular series of changes:

drivers/ide/ide-cd.c | 16 -
drivers/ide/ide-disk.c | 1
drivers/ide/ide-ioctls.c | 1
drivers/ide/ide-park.c | 7

A conservative estimate would put that at 1 minute of merge work. Can we
please just drop this waste-of-time discussion on the current merge? The
important bit is how we handle this in the future, if we have larger
overlapping changes.

> > Bart, we use this approach all the time with the SCSI branch and it has
> > worked fine. It's not going to change for the ide tree. It's
> > contradictory to fanning out work at the ends.
>
> I can look into separate trees (or maybe just having one tree based on the
> block tree, as discussed before) after 2.6.30-rc1/2.

Proceed at your own leisure.

--
Jens Axboe

Subject: Re: [PATCH 11/14] block: implement and use [__]blk_end_request_all()

On Sunday 15 March 2009, Jens Axboe wrote:
> On Sun, Mar 15 2009, Bartlomiej Zolnierkiewicz wrote:
> > > > > > Thanks for the hint but it sounds like a major pain once you hit some
> > > > > > changes touching the same code areas that block patches do...
> > > > > >
> > > > > > Besides, this is guaranteed to increase the workload on my side, so it
> > > > > > won't happen simply because of -ENOTIME.
> > > > >
> > > > > When things collide, it is more work for everyone. But such is life for
> > > > > middle/core layer changes. Rebasing _really_ should not be a lot of
> > > > > work. And you are going to have to do it sooner or later, either upfront
> > > > > or after your patches stop applying because the block changes went
> > > > > upstream.
> > > >
> > > > The task of running the secondary tree is not merely rebasing of patches
> > > > (which I already do on a daily basis) as it also involves extra coordination,
> > > > testing, updates etc.
> > >
> > > Coordination with whom? If people develop off your pata tree, then there
> > > should be no difference.
> >
> > Coordination between trees.
> >
> > Moreover, people often develop against linux-next (which is perfectly
> > fine with the current development model), and after such a change their
> > patches could end up also depending on block (more work for me to sort
> > out).
>
> And the difference being? The block tree is in -next in the first place.
> This changeset is not there yet, since I haven't had time to test it.
> But the tested stuff is usually there for each iteration.

The pata tree is based on Linus' tree, _not_ on linux-next, and this is
very handy when it comes to preparing pull requests.

It could be that I worry needlessly, but with ~170 patches currently in
the tree, a lack of time and the merge window around the corner, it is
no wonder that I'm reluctant to try any experiments. However, I completely
agree that we should look into ways of improving the process in the
longer term.

Thanks,
Bart

2009-03-16 01:16:13

by Tejun Heo

[permalink] [raw]
Subject: Re: [GIT PATCH] block: cleanup patches

Jens Axboe wrote:
> On Sat, Mar 14 2009, Tejun Heo wrote:
>> Tejun Heo wrote:
>>> Hello,
>>>
>>> This patchset is available in the following git tree.
>>>
>>> git://git.kernel.org/pub/scm/linux/kernel/git/tj/misc.git block-cleanup
>>>
>>> This patchset contains the following 14 cleanup patches.
>> Jens, once you're okay with the rest of the changes, I'll remove IDE
>> related stuff and repost the patchset.
>
> I only gave it a quick glance; it looks mostly OK. Though I don't like
> putting elv_ prefixed stuff in blk-core. I agree it belongs there,
> though; it's really block layer functionality, not io scheduler
> interfacing. So rename it blk_ and put compatibility names in the
> header file instead.

Yeap, that's exactly what's planned (the peek/fetch patches :-).

> I'll give it a more thorough review and integration test tomorrow.

Thanks.

--
tejun

2009-03-16 01:40:05

by Tejun Heo

[permalink] [raw]
Subject: Re: [PATCH 11/14] block: implement and use [__]blk_end_request_all()

Hello, Bart, Jens.

Bartlomiej Zolnierkiewicz wrote:
> The pata tree is based on Linus' tree, _not_ on linux-next, and this is
> very handy when it comes to preparing pull requests.
>
> It could be that I worry needlessly, but with ~170 patches currently in
> the tree, a lack of time and the merge window around the corner, it is
> no wonder that I'm reluctant to try any experiments. However, I
> completely agree that we should look into ways of improving the process
> in the longer term.

A quilt tree is fine for leaf tree changes, but problems occur when other
changes need to be based off the tree, as is the case here. Is it
possible for you to provide mid-synchronization git commits? It means
starting up a new quilt tree after each synchronization point (as rc1
merge points would), but it could be an acceptable mid-point for both
trees if the number of sync points isn't too high.

Thanks.

--
tejun

2009-03-16 07:22:23

by Jens Axboe

[permalink] [raw]
Subject: Re: [GIT PATCH] block: cleanup patches

On Mon, Mar 16 2009, Tejun Heo wrote:
> Jens Axboe wrote:
> > On Sat, Mar 14 2009, Tejun Heo wrote:
> >> Tejun Heo wrote:
> >>> Hello,
> >>>
> >>> This patchset is available in the following git tree.
> >>>
> >>> git://git.kernel.org/pub/scm/linux/kernel/git/tj/misc.git block-cleanup
> >>>
> >>> This patchset contains the following 14 cleanup patches.
> >> Jens, once you're okay with the rest of the changes, I'll remove IDE
> >> related stuff and repost the patchset.
> >
> > I only gave it a quick glance; it looks mostly OK. Though I don't like
> > putting elv_ prefixed stuff in blk-core. I agree it belongs there,
> > though; it's really block layer functionality, not io scheduler
> > interfacing. So rename it blk_ and put compatibility names in the
> > header file instead.
>
> Yeap, that's exactly what's planned (the peek/fetch patches :-).

I figured that is where it's going :-)
How far along are you with that? We are getting really close to the
merge window, so it basically has to be ready _now_ to hit 2.6.30.

--
Jens Axboe

2009-03-16 07:53:41

by Tejun Heo

[permalink] [raw]
Subject: Re: [GIT PATCH] block: cleanup patches

Jens Axboe wrote:
> On Mon, Mar 16 2009, Tejun Heo wrote:
>> Jens Axboe wrote:
>>> On Sat, Mar 14 2009, Tejun Heo wrote:
>>>> Tejun Heo wrote:
>>>>> Hello,
>>>>>
>>>>> This patchset is available in the following git tree.
>>>>>
>>>>> git://git.kernel.org/pub/scm/linux/kernel/git/tj/misc.git block-cleanup
>>>>>
>>>>> This patchset contains the following 14 cleanup patches.
>>>> Jens, once you're okay with the rest of the changes, I'll remove IDE
>>>> related stuff and repost the patchset.
>>> I only gave it a quick glance; it looks mostly OK. Though I don't like
>>> putting elv_ prefixed stuff in blk-core. I agree it belongs there,
>>> though; it's really block layer functionality, not io scheduler
>>> interfacing. So rename it blk_ and put compatibility names in the
>>> header file instead.
>> Yeap, that's exactly what's planned (the peek/fetch patches :-).
>
> I figured that is where it's going :-)
> How far along are you with that? We are getting really close to the
> merge window, so it basically has to be ready _now_ to hit 2.6.30.

The peek/fetch patchset is almost ready; it just needs to be
refreshed for for-2.6.30. I'm currently trying to clean up rq->data,
->buffer and data length handling and was thinking about putting the
peek/fetch patchset on top of those, which I don't think will be ready
soon enough for the 2.6.30 merge window. Given the nature of these
patches, I think we can wait for the .31 window?
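
To sketch the shape this is heading toward: peek would look at the head
request without dequeueing it, and fetch would peek and dequeue in one
step, so a typical request_fn loop would reduce to something like the
following (the names blk_peek_request()/blk_fetch_request() and the
driver function are assumptions here, not settled API):

static void example_request_fn(struct request_queue *q)
{
	struct request *rq;

	/* fetch = peek + dequeue; returns NULL once the queue is empty */
	while ((rq = blk_fetch_request(q)) != NULL) {
		/* drive the hardware; complete immediately for the sketch */
		__blk_end_request_all(rq, 0);
	}
}

(request_fn is entered with the queue lock held, hence the __ variant
of the patch 11 helper.)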

Thanks.

--
tejun

2009-03-16 07:58:10

by Jens Axboe

[permalink] [raw]
Subject: Re: [GIT PATCH] block: cleanup patches

On Mon, Mar 16 2009, Tejun Heo wrote:
> Jens Axboe wrote:
> > On Mon, Mar 16 2009, Tejun Heo wrote:
> >> Jens Axboe wrote:
> >>> On Sat, Mar 14 2009, Tejun Heo wrote:
> >>>> Tejun Heo wrote:
> >>>>> Hello,
> >>>>>
> >>>>> This patchset is available in the following git tree.
> >>>>>
> >>>>> git://git.kernel.org/pub/scm/linux/kernel/git/tj/misc.git block-cleanup
> >>>>>
> >>>>> This patchset contains the following 14 cleanup patches.
> >>>> Jens, once you're okay with the rest of the changes, I'll remove IDE
> >>>> related stuff and repost the patchset.
> >>> I only gave it a quick glance; it looks mostly OK. Though I don't like
> >>> putting elv_ prefixed stuff in blk-core. I agree it belongs there,
> >>> though; it's really block layer functionality, not io scheduler
> >>> interfacing. So rename it blk_ and put compatibility names in the
> >>> header file instead.
> >> Yeap, that's exactly what's planned (the peek/fetch patches :-).
> >
> > I figured that is where it's going :-)
> > How far along are you with that? We are getting really close to the
> > merge window, so it basically has to be ready _now_ to hit 2.6.30.
>
> The peek/fetch patchset is almost ready; it just needs to be
> refreshed for for-2.6.30. I'm currently trying to clean up rq->data,
> ->buffer and data length handling and was thinking about putting the
> peek/fetch patchset on top of those, which I don't think will be ready
> soon enough for the 2.6.30 merge window. Given the nature of these
> patches, I think we can wait for the .31 window?

Yeah, let's hold those off for 2.6.31. We definitely want plenty of time
to review and test such patches.

--
Jens Axboe

2009-03-16 09:14:00

by Boaz Harrosh

[permalink] [raw]
Subject: Re: [PATCH 09/14] block: clean up request completion API

Tejun Heo wrote:
> Impact: cleanup, rq->*nr_sectors always updated after req completion
>
> Request completion has gone through several changes and became a bit
> messy over the time. Clean it up.
>
> 1. end_that_request_data() is a thin wrapper around
> __end_that_request_first() which checks whether bio is NULL
> before doing anything and handles bidi completion.
> blk_update_request() is a thin wrapper around
> end_that_request_data() which clears nr_sectors on the last
> iteration but doesn't use the bidi completion.
>
> Clean it up by moving the initial bio NULL check and nr_sectors
> clearing on the last iteration into __end_that_request_first() and
> renaming it to blk_update_request(), which makes blk_end_io() the
> only user of end_that_request_data(). Collapse
> end_that_request_data() into blk_end_io().
>
> 2. There are four visible completion variants - blk_end_request(),
> __blk_end_request(), blk_end_bidi_request() and end_request().
> blk_end_request() and blk_end_bidi_request() use blk_end_io()
> as the backend, but __blk_end_request() and end_request() use a
> separate implementation in __blk_end_request() due to different
> locking rules.
>
> Make blk_end_io() handle both cases so that all four public
> completion functions are thin wrappers around it. Rename
> blk_end_io() to __blk_end_io(), export it, and inline all public
> completion functions.
>
> 3. As the whole request issue/completion path is about to be
> modified and audited, it's a good chance to convert the completion
> functions to return bool, which better indicates the intended
> meaning of the return values.
>
> 4. The function name end_that_request_last() is from the days when it
> was a public interface and is slightly confusing. Give it a proper
> internal name - finish_request().
>
> The only visible behavior change is from #1: nr_sectors counts are
> cleared after the final iteration no matter which function is used to
> complete the request. I couldn't find any place where the code
> assumes those nr_sectors counters contain the values for the last
> segment, and the change makes the API much more consistent, as the
> end result is now the same whether a request is completed using
> [__]blk_end_request() alone or in combination with
> blk_update_request().
>
> Signed-off-by: Tejun Heo <[email protected]>

Reviewed-by: Boaz Harrosh <[email protected]>

Amen! This patch brings light to my life ;) Thanks.

I have run with these patches and they work as expected, perfectly.

I have a small request below, if it's OK?

> ---
> block/blk-core.c | 215 ++++++++++++------------------------------------
> include/linux/blkdev.h | 114 +++++++++++++++++++++++---
> 2 files changed, 154 insertions(+), 175 deletions(-)
>
> diff --git a/block/blk-core.c b/block/blk-core.c
> index 9595c4f..b1781dd 100644
> --- a/block/blk-core.c
> +++ b/block/blk-core.c
> @@ -1798,25 +1798,35 @@ void elv_dequeue_request(struct request_queue *q, struct request *rq)
> }
>
> /**
> - * __end_that_request_first - end I/O on a request
> - * @req: the request being processed
> + * blk_update_request - Special helper function for request stacking drivers
> + * @rq: the request being processed
> * @error: %0 for success, < %0 for error
> - * @nr_bytes: number of bytes to complete
> + * @nr_bytes: number of bytes to complete @rq
> *
> * Description:
> - * Ends I/O on a number of bytes attached to @req, and sets it up
> - * for the next range of segments (if any) in the cluster.
> + * Ends I/O on a number of bytes attached to @rq, but doesn't complete
> + * the request structure even if @rq doesn't have leftover.
> + * If @rq has leftover, sets it up for the next range of segments.
> + *
> + * This special helper function is only for request stacking drivers
> + * (e.g. request-based dm) so that they can handle partial completion.
> + * Actual device drivers should use blk_end_request instead.
> + *
> + * Passing the result of blk_rq_bytes() as @nr_bytes guarantees
> + * %false return from this function.
> *
> * Return:
> - * %0 - we are done with this request, call end_that_request_last()
> - * %1 - still buffers pending for this request
> + * %false - this request doesn't have any more data
> + * %true - this request has more data
> **/
> -static int __end_that_request_first(struct request *req, int error,
> - int nr_bytes)
> +bool blk_update_request(struct request *req, int error, unsigned int nr_bytes)
> {
> int total_bytes, bio_nbytes, next_idx = 0;
> struct bio *bio;
>
> + if (!req->bio)
> + return false;
> +
> trace_block_rq_complete(req->q, req);
>
> /*
> @@ -1889,8 +1899,16 @@ static int __end_that_request_first(struct request *req, int error,
> /*
> * completely done
> */
> - if (!req->bio)
> - return 0;
> + if (!req->bio) {
> + /*
> + * Reset counters so that the request stacking driver
> + * can find how many bytes remain in the request
> + * later.
> + */
> + req->nr_sectors = req->hard_nr_sectors = 0;
> + req->current_nr_sectors = req->hard_cur_sectors = 0;
> + return false;
> + }
>
> /*
> * if the request wasn't completed, update state
> @@ -1904,29 +1922,14 @@ static int __end_that_request_first(struct request *req, int error,
>
> blk_recalc_rq_sectors(req, total_bytes >> 9);
> blk_recalc_rq_segments(req);
> - return 1;
> -}
> -
> -static int end_that_request_data(struct request *rq, int error,
> - unsigned int nr_bytes, unsigned int bidi_bytes)
> -{
> - if (rq->bio) {
> - if (__end_that_request_first(rq, error, nr_bytes))
> - return 1;
> -
> - /* Bidi request must be completed as a whole */
> - if (blk_bidi_rq(rq) &&
> - __end_that_request_first(rq->next_rq, error, bidi_bytes))
> - return 1;
> - }
> -
> - return 0;
> + return true;
> }
> +EXPORT_SYMBOL_GPL(blk_update_request);
>
> /*
> * queue lock must be held
> */
> -static void end_that_request_last(struct request *req, int error)
> +static void finish_request(struct request *req, int error)
> {
> if (blk_rq_tagged(req))
> blk_queue_end_tag(req->q, req);
> @@ -1952,161 +1955,47 @@ static void end_that_request_last(struct request *req, int error)
> }
>
> /**
> - * blk_end_io - Generic end_io function to complete a request.
> + * __blk_end_io - Generic end_io function to complete a request.
> * @rq: the request being processed
> * @error: %0 for success, < %0 for error
> * @nr_bytes: number of bytes to complete @rq
> * @bidi_bytes: number of bytes to complete @rq->next_rq
> + * @locked: whether rq->q->queue_lock is held on entry
> *
> * Description:
> * Ends I/O on a number of bytes attached to @rq and @rq->next_rq.
> * If @rq has leftover, sets it up for the next range of segments.
> *
> * Return:
> - * %0 - we are done with this request
> - * %1 - this request is not freed yet, it still has pending buffers.
> + * %false - we are done with this request
> + * %true - this request is not freed yet, it still has pending buffers.
> **/
> -static int blk_end_io(struct request *rq, int error, unsigned int nr_bytes,
> - unsigned int bidi_bytes)
> +bool __blk_end_io(struct request *rq, int error, unsigned int nr_bytes,
> + unsigned int bidi_bytes, bool locked)
> {
> struct request_queue *q = rq->q;
> unsigned long flags = 0UL;
>
> - if (end_that_request_data(rq, error, nr_bytes, bidi_bytes))
> - return 1;
> -
> - add_disk_randomness(rq->rq_disk);
> -
> - spin_lock_irqsave(q->queue_lock, flags);
> - end_that_request_last(rq, error);
> - spin_unlock_irqrestore(q->queue_lock, flags);
> -
> - return 0;
> -}
> + if (blk_update_request(rq, error, nr_bytes))
> + return true;
>
> -/**
> - * blk_end_request - Helper function for drivers to complete the request.
> - * @rq: the request being processed
> - * @error: %0 for success, < %0 for error
> - * @nr_bytes: number of bytes to complete
> - *
> - * Description:
> - * Ends I/O on a number of bytes attached to @rq.
> - * If @rq has leftover, sets it up for the next range of segments.
> - *
> - * Return:
> - * %0 - we are done with this request
> - * %1 - still buffers pending for this request
> - **/
> -int blk_end_request(struct request *rq, int error, unsigned int nr_bytes)
> -{
> - return blk_end_io(rq, error, nr_bytes, 0);
> -}
> -EXPORT_SYMBOL_GPL(blk_end_request);
> -
> -/**
> - * __blk_end_request - Helper function for drivers to complete the request.
> - * @rq: the request being processed
> - * @error: %0 for success, < %0 for error
> - * @nr_bytes: number of bytes to complete
> - *
> - * Description:
> - * Must be called with queue lock held unlike blk_end_request().
> - *
> - * Return:
> - * %0 - we are done with this request
> - * %1 - still buffers pending for this request
> - **/
> -int __blk_end_request(struct request *rq, int error, unsigned int nr_bytes)
> -{
> - if (rq->bio && __end_that_request_first(rq, error, nr_bytes))
> - return 1;
> + /* Bidi request must be completed as a whole */
> + if (unlikely(blk_bidi_rq(rq)) &&
> + blk_update_request(rq->next_rq, error, bidi_bytes))
> + return true;
>
> add_disk_randomness(rq->rq_disk);
>
> - end_that_request_last(rq, error);
> -
> - return 0;
> -}
> -EXPORT_SYMBOL_GPL(__blk_end_request);
> -
> -/**
> - * blk_end_bidi_request - Helper function for drivers to complete bidi request.
> - * @rq: the bidi request being processed
> - * @error: %0 for success, < %0 for error
> - * @nr_bytes: number of bytes to complete @rq
> - * @bidi_bytes: number of bytes to complete @rq->next_rq
> - *
> - * Description:
> - * Ends I/O on a number of bytes attached to @rq and @rq->next_rq.
> - *
> - * Return:
> - * %0 - we are done with this request
> - * %1 - still buffers pending for this request
> - **/
> -int blk_end_bidi_request(struct request *rq, int error, unsigned int nr_bytes,
> - unsigned int bidi_bytes)
> -{
> - return blk_end_io(rq, error, nr_bytes, bidi_bytes);
> -}
> -EXPORT_SYMBOL_GPL(blk_end_bidi_request);
> -
> -/**
> - * end_request - end I/O on the current segment of the request
> - * @req: the request being processed
> - * @uptodate: error value or %0/%1 uptodate flag
> - *
> - * Description:
> - * Ends I/O on the current segment of a request. If that is the only
> - * remaining segment, the request is also completed and freed.
> - *
> - * This is a remnant of how older block drivers handled I/O completions.
> - * Modern drivers typically end I/O on the full request in one go, unless
> - * they have a residual value to account for. For that case this function
> - * isn't really useful, unless the residual just happens to be the
> - * full current segment. In other words, don't use this function in new
> - * code. Use blk_end_request() or __blk_end_request() to end a request.
> - **/
> -void end_request(struct request *req, int uptodate)
> -{
> - int error = 0;
> -
> - if (uptodate <= 0)
> - error = uptodate ? uptodate : -EIO;
> -
> - __blk_end_request(req, error, req->hard_cur_sectors << 9);
> -}
> -EXPORT_SYMBOL(end_request);
> + if (!locked) {
> + spin_lock_irqsave(q->queue_lock, flags);
> + finish_request(rq, error);
> + spin_unlock_irqrestore(q->queue_lock, flags);
> + } else
> + finish_request(rq, error);
>
> -/**
> - * blk_update_request - Special helper function for request stacking drivers
> - * @rq: the request being processed
> - * @error: %0 for success, < %0 for error
> - * @nr_bytes: number of bytes to complete @rq
> - *
> - * Description:
> - * Ends I/O on a number of bytes attached to @rq, but doesn't complete
> - * the request structure even if @rq doesn't have leftover.
> - * If @rq has leftover, sets it up for the next range of segments.
> - *
> - * This special helper function is only for request stacking drivers
> - * (e.g. request-based dm) so that they can handle partial completion.
> - * Actual device drivers should use blk_end_request instead.
> - */
> -void blk_update_request(struct request *rq, int error, unsigned int nr_bytes)
> -{
> - if (!end_that_request_data(rq, error, nr_bytes, 0)) {
> - /*
> - * These members are not updated in end_that_request_data()
> - * when all bios are completed.
> - * Update them so that the request stacking driver can find
> - * how many bytes remain in the request later.
> - */
> - rq->nr_sectors = rq->hard_nr_sectors = 0;
> - rq->current_nr_sectors = rq->hard_cur_sectors = 0;
> - }
> + return false;
> }
> -EXPORT_SYMBOL_GPL(blk_update_request);
> +EXPORT_SYMBOL_GPL(__blk_end_io);
>
> void blk_rq_bio_prep(struct request_queue *q, struct request *rq,
> struct bio *bio)
> diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
> index e8175c8..cb2f9ae 100644
> --- a/include/linux/blkdev.h
> +++ b/include/linux/blkdev.h
> @@ -818,27 +818,117 @@ extern unsigned int blk_rq_bytes(struct request *rq);
> extern unsigned int blk_rq_cur_bytes(struct request *rq);
>
> /*
> - * blk_end_request() and friends.
> - * __blk_end_request() and end_request() must be called with
> - * the request queue spinlock acquired.
> + * Request completion related functions.
> + *
> + * blk_update_request() completes given number of bytes and updates
> + * the request without completing it.
> + *
> + * blk_end_request() and friends. __blk_end_request() and
> + * end_request() must be called with the request queue spinlock
> + * acquired.
> *
> * Several drivers define their own end_request and call
> * blk_end_request() for parts of the original function.
> * This prevents code duplication in drivers.
> */
> -extern int blk_end_request(struct request *rq, int error,
> - unsigned int nr_bytes);
> -extern int __blk_end_request(struct request *rq, int error,
> - unsigned int nr_bytes);
> -extern int blk_end_bidi_request(struct request *rq, int error,
> - unsigned int nr_bytes, unsigned int bidi_bytes);
> -extern void end_request(struct request *, int);
> +extern bool blk_update_request(struct request *rq, int error,
> + unsigned int nr_bytes);
> +
> +/* internal function, subject to change, don't ever use directly */
> +extern bool __blk_end_io(struct request *rq, int error,
> + unsigned int nr_bytes, unsigned int bidi_bytes,
> + bool locked);
> +
> +/**
> + * blk_end_request - Helper function for drivers to complete the request.
> + * @rq: the request being processed
> + * @error: %0 for success, < %0 for error
> + * @nr_bytes: number of bytes to complete
> + *
> + * Description:
> + * Ends I/O on a number of bytes attached to @rq.
> + * If @rq has leftover, sets it up for the next range of segments.
> + *
> + * Return:
> + * %false - we are done with this request
> + * %true - still buffers pending for this request
> + **/
> +static inline bool blk_end_request(struct request *rq, int error,
> + unsigned int nr_bytes)
> +{
> + return __blk_end_io(rq, error, nr_bytes, 0, false);
> +}
> +
> +/**
> + * __blk_end_request - Helper function for drivers to complete the request.
> + * @rq: the request being processed
> + * @error: %0 for success, < %0 for error
> + * @nr_bytes: number of bytes to complete
> + *
> + * Description:
> + * Must be called with queue lock held unlike blk_end_request().
> + *
> + * Return:
> + * %false - we are done with this request
> + * %true - still buffers pending for this request
> + **/
> +static inline bool __blk_end_request(struct request *rq, int error,
> + unsigned int nr_bytes)
> +{
> + return __blk_end_io(rq, error, nr_bytes, 0, true);
> +}
> +
> +/**
> + * blk_end_bidi_request - Helper function for drivers to complete bidi request.
> + * @rq: the bidi request being processed
> + * @error: %0 for success, < %0 for error
> + * @nr_bytes: number of bytes to complete @rq
> + * @bidi_bytes: number of bytes to complete @rq->next_rq
> + *
> + * Description:
> + * Ends I/O on a number of bytes attached to @rq and @rq->next_rq.

+ * Drivers that support bidi can safely call this function for any type
+ * of request, bidi or uni. In the latter case @bidi_bytes is just
+ * ignored.

Please add a comment like the above; the subject has come up a few times
before, where programmers thought they needed to do an if() and call
either this or the plain blk_end_request().

(I have such a documentation change in my queue.)
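
For illustration, a hypothetical driver completion path built on that
guarantee (mydrv_end_request() is an invented name, not code from the
patch):

static void mydrv_end_request(struct request *rq, int error)
{
	unsigned int bidi_bytes = blk_bidi_rq(rq) ?
				  blk_rq_bytes(rq->next_rq) : 0;

	/*
	 * One call covers both cases: the backend only touches
	 * rq->next_rq when blk_bidi_rq(rq) is true, so @bidi_bytes
	 * is simply ignored for uni requests.
	 */
	blk_end_bidi_request(rq, error, blk_rq_bytes(rq), bidi_bytes);
}

No if()/else between blk_end_request() and blk_end_bidi_request() is
needed.
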
> + *
> + * Return:
> + * %false - we are done with this request
> + * %true - still buffers pending for this request
> + **/
> +static inline bool blk_end_bidi_request(struct request *rq, int error,
> + unsigned int nr_bytes,
> + unsigned int bidi_bytes)
> +{
> + return __blk_end_io(rq, error, nr_bytes, bidi_bytes, false);
> +}
> +
> +/**
> + * end_request - end I/O on the current segment of the request
> + * @rq: the request being processed
> + * @uptodate: error value or %0/%1 uptodate flag
> + *
> + * Description:
> + * Ends I/O on the current segment of a request. If that is the only
> + * remaining segment, the request is also completed and freed.
> + *
> + * This is a remnant of how older block drivers handled I/O completions.
> + * Modern drivers typically end I/O on the full request in one go, unless
> + * they have a residual value to account for. For that case this function
> + * isn't really useful, unless the residual just happens to be the
> + * full current segment. In other words, don't use this function in new
> + * code. Use blk_end_request() or __blk_end_request() to end a request.
> + **/
> +static inline void end_request(struct request *rq, int uptodate)
> +{
> + int error = 0;
> +
> + if (uptodate <= 0)
> + error = uptodate ? uptodate : -EIO;
> +
> + __blk_end_io(rq, error, rq->hard_cur_sectors << 9, 0, true);
> +}
> +
> extern void blk_complete_request(struct request *);
> extern void __blk_complete_request(struct request *);
> extern void blk_abort_request(struct request *);
> extern void blk_abort_queue(struct request_queue *);
> -extern void blk_update_request(struct request *rq, int error,
> - unsigned int nr_bytes);
>
> /*
> * Access functions for manipulating queue properties

2009-03-16 09:56:53

by Tejun Heo

[permalink] [raw]
Subject: Re: [PATCH 09/14] block: clean up request completion API

Boaz Harrosh wrote:
> I have a small request below, if it's OK?

Sure.

>
> + * Drivers that support bidi can safely call this function for any type
> + * of request, bidi or uni. In the latter case @bidi_bytes is just
> + * ignored.
>
> Please add a comment like the above; the subject has come up a few times
> before, where programmers thought they needed to do an if() and call
> either this or the plain blk_end_request().

Comment added as suggested and patch description updated accordingly.
Git tree has been rebased. New commit ID is
5b437c316dcdb30598a73b03f529df89e880ac15.

Thanks.

--
tejun