2010-08-27 17:17:38

by Tejun Heo

Subject: [PATCHSET 2.6.36-rc2] block, dm: finish REQ_FLUSH/FUA conversion

Hello,

This patchset converts dm to REQ_FLUSH/FUA, which was the last
remaining conversion. I've put the patches on top of the previous
patches for easier review. The series can be trivially reordered so
that the order is more logical. Jens, please let me know if you want
it reordered.

The dm conversion is _lightly_ tested. Please proceed with caution.
In particular, I haven't tested dm implementation at all. So, it
probably is best to hold off merging these bits until dm people can
verify the conversion is correct.

0001-block-make-__blk_rq_prep_clone-copy-most-command-fla.patch
0002-dm-implement-REQ_FLUSH-FUA-support.patch
0003-dm-relax-ordering-of-bio-based-flush-implementation.patch
0004-block-remove-the-WRITE_BARRIER-flag.patch

Differences from the previous attempt[1] are,

* I was wrong when I wrote that flush retry logic was dropped. What
got dropped was -EOPNOTSUPP handling. The retry behavior used by
multipath remains the same. As blindly retrying flushes could be
dangerous, a FIXME comment is added.

* bio-based dm now also advertises REQ_FLUSH | REQ_FUA capability, as
the block layer now filters out REQ_FLUSH if it's not supported by
the queue.

* 0001-block-make-__blk_rq_prep_clone-copy-most-command-fla.patch
added to update __blk_rq_prep_clone() to copy most REQ_* flags,
including FLUSH and FUA, and request-based dm now advertises
REQ_FLUSH | REQ_FUA support.

* dm_request_fn() now explicitly plugs other requests while a flush
request is in progress. This is to avoid starving the flush
sequence, which uses device draining to check whether each step is
complete (see the sketch after this list).

* 0003-dm-relax-ordering-of-bio-based-flush-implementation.patch
added to drop needless ordering around flush handling.
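
For reference, the plugging mentioned above boils down to a single
check at the top of dm_request_fn()'s dispatch loop (the actual hunk
is in the dm conversion patch below); roughly:

	while (!blk_queue_plugged(q) && !blk_queue_stopped(q)) {
		/* hold back everything else while a flush sequence is in flight */
		if (md->flush_request)
			goto plug_and_out;

		rq = blk_peek_request(q);
		...
	}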

This patchset is on top of the "block, fs: replace HARDBARRIER with
FLUSH/FUA" patchset[2] and is available in the following git tree

git://git.kernel.org/pub/scm/linux/kernel/git/tj/misc.git flush-fua

and contains the following changes.

block/blk-core.c | 4
drivers/md/dm-crypt.c | 2
drivers/md/dm-io.c | 20 --
drivers/md/dm-log.c | 2
drivers/md/dm-raid1.c | 8 -
drivers/md/dm-region-hash.c | 16 +-
drivers/md/dm-snap-persistent.c | 2
drivers/md/dm-snap.c | 6
drivers/md/dm-stripe.c | 2
drivers/md/dm.c | 277 +++++++++++++++-------------------------
include/linux/blk_types.h | 1
include/linux/fs.h | 3
12 files changed, 131 insertions(+), 212 deletions(-)

Thanks.

--
tejun

[1] http://thread.gmane.org/gmane.linux.raid/29100/focus=29104
[2] http://thread.gmane.org/gmane.linux.kernel/1022363


2010-08-27 17:17:26

by Tejun Heo

Subject: [PATCH 4/4] block: remove the WRITE_BARRIER flag

From: Christoph Hellwig <[email protected]>

It's unused now.

Signed-off-by: Christoph Hellwig <[email protected]>
Signed-off-by: Tejun Heo <[email protected]>
---
include/linux/fs.h | 3 ---
1 files changed, 0 insertions(+), 3 deletions(-)

diff --git a/include/linux/fs.h b/include/linux/fs.h
index 32703a9..6b0f6e9 100644
--- a/include/linux/fs.h
+++ b/include/linux/fs.h
@@ -135,7 +135,6 @@ struct inodes_stat_t {
* immediately after submission. The write equivalent
* of READ_SYNC.
* WRITE_ODIRECT_PLUG Special case write for O_DIRECT only.
- * WRITE_BARRIER DEPRECATED. Always fails. Use FLUSH/FUA instead.
* WRITE_FLUSH Like WRITE_SYNC but with preceding cache flush.
* WRITE_FUA Like WRITE_SYNC but data is guaranteed to be on
* non-volatile media on completion.
@@ -157,8 +156,6 @@ struct inodes_stat_t {
#define WRITE_SYNC (WRITE | REQ_SYNC | REQ_NOIDLE | REQ_UNPLUG)
#define WRITE_ODIRECT_PLUG (WRITE | REQ_SYNC)
#define WRITE_META (WRITE | REQ_META)
-#define WRITE_BARRIER (WRITE | REQ_SYNC | REQ_NOIDLE | REQ_UNPLUG | \
- REQ_HARDBARRIER)
#define WRITE_FLUSH (WRITE | REQ_SYNC | REQ_NOIDLE | REQ_UNPLUG | \
REQ_FLUSH)
#define WRITE_FUA (WRITE | REQ_SYNC | REQ_NOIDLE | REQ_UNPLUG | \
--
1.7.1

2010-08-27 17:17:33

by Tejun Heo

Subject: [PATCH 1/4] block: make __blk_rq_prep_clone() copy most command flags

Currently __blk_rq_prep_clone() copies only REQ_WRITE and REQ_DISCARD.
There's no reason to omit other command flags and REQ_FUA needs to be
copied to implement FUA support in request-based dm.

REQ_COMMON_MASK which specifies flags to be copied from bio to request
already identifies all the command flags. Define REQ_CLONE_MASK to be
the same as REQ_COMMON_MASK for clarity and make __blk_rq_prep_clone()
copy all flags in the mask.

Signed-off-by: Tejun Heo <[email protected]>
---
block/blk-core.c | 4 +---
include/linux/blk_types.h | 1 +
2 files changed, 2 insertions(+), 3 deletions(-)

diff --git a/block/blk-core.c b/block/blk-core.c
index 495bdc4..2a5b192 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -2505,9 +2505,7 @@ EXPORT_SYMBOL_GPL(blk_rq_unprep_clone);
static void __blk_rq_prep_clone(struct request *dst, struct request *src)
{
dst->cpu = src->cpu;
- dst->cmd_flags = (rq_data_dir(src) | REQ_NOMERGE);
- if (src->cmd_flags & REQ_DISCARD)
- dst->cmd_flags |= REQ_DISCARD;
+ dst->cmd_flags = (src->cmd_flags & REQ_CLONE_MASK) | REQ_NOMERGE;
dst->cmd_type = src->cmd_type;
dst->__sector = blk_rq_pos(src);
dst->__data_len = blk_rq_bytes(src);
diff --git a/include/linux/blk_types.h b/include/linux/blk_types.h
index 1797994..36edadf 100644
--- a/include/linux/blk_types.h
+++ b/include/linux/blk_types.h
@@ -168,6 +168,7 @@ enum rq_flag_bits {
#define REQ_COMMON_MASK \
(REQ_WRITE | REQ_FAILFAST_MASK | REQ_HARDBARRIER | REQ_SYNC | \
REQ_META | REQ_DISCARD | REQ_NOIDLE | REQ_FLUSH | REQ_FUA)
+#define REQ_CLONE_MASK REQ_COMMON_MASK

#define REQ_UNPLUG (1 << __REQ_UNPLUG)
#define REQ_RAHEAD (1 << __REQ_RAHEAD)
--
1.7.1

2010-08-27 17:17:36

by Tejun Heo

Subject: [PATCH 3/4] dm: relax ordering of bio-based flush implementation

Unlike REQ_HARDBARRIER, REQ_FLUSH/FUA doesn't mandate any ordering
against other bio's. This patch relaxes ordering around flushes.

* A flush bio is no longer deferred to the workqueue directly. It's
processed like other bio's but __split_and_process_bio() uses
md->flush_bio as the clone source. md->flush_bio is initialized to
an empty flush during md initialization and shared for all flushes.

* When dec_pending() detects that a flush has completed, it checks
whether the original bio has data. If so, the bio is queued to the
deferred list with REQ_FLUSH cleared; otherwise, it's completed (see
the sketch at the end of this description).

* As flush sequencing is handled in the usual issue/completion path,
dm_wq_work() no longer needs to handle flushes differently. Now its
only responsibility is re-issuing deferred bio's the same way as
_dm_request() would. The REQ_FLUSH handling logic, including
process_flush(), is dropped.

* There's no reason for queue_io() and dm_wq_work() to write-lock
md->io_lock. queue_io() now only uses md->deferred_lock, and
dm_wq_work() read-locks md->io_lock.

* bio's no longer need to be queued on the deferred list while a flush
is in progress, making DMF_QUEUE_IO_TO_THREAD unnecessary. Drop it.

This avoids stalling the device during flushes and simplifies the
implementation.
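
Stripped of accounting and the requeue case, the completion side of
this reduces to the following (full hunk below); REQ_FUA, if set,
simply rides along to the targets on the second pass:

	/* in dec_pending(), once all clones of the original bio completed */
	if (!(bio->bi_rw & REQ_FLUSH) || !bio->bi_size) {
		/* plain bio, or an empty flush: we're done */
		bio_endio(bio, io_error);
	} else {
		/* preflush done for a flush with data: reissue without REQ_FLUSH */
		bio->bi_rw &= ~REQ_FLUSH;
		queue_io(md, bio);	/* picked up again by dm_wq_work() */
	}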

Signed-off-by: Tejun Heo <[email protected]>
---
drivers/md/dm.c | 152 ++++++++++++++++--------------------------------------
1 files changed, 45 insertions(+), 107 deletions(-)

diff --git a/drivers/md/dm.c b/drivers/md/dm.c
index 52f4033..308c35b 100644
--- a/drivers/md/dm.c
+++ b/drivers/md/dm.c
@@ -110,7 +110,6 @@ EXPORT_SYMBOL_GPL(dm_get_rq_mapinfo);
#define DMF_FREEING 3
#define DMF_DELETING 4
#define DMF_NOFLUSH_SUSPENDING 5
-#define DMF_QUEUE_IO_TO_THREAD 6

/*
* Work processed by per-device workqueue.
@@ -528,16 +527,10 @@ static void end_io_acct(struct dm_io *io)
*/
static void queue_io(struct mapped_device *md, struct bio *bio)
{
- down_write(&md->io_lock);
-
spin_lock_irq(&md->deferred_lock);
bio_list_add(&md->deferred, bio);
spin_unlock_irq(&md->deferred_lock);
-
- if (!test_and_set_bit(DMF_QUEUE_IO_TO_THREAD, &md->flags))
- queue_work(md->wq, &md->work);
-
- up_write(&md->io_lock);
+ queue_work(md->wq, &md->work);
}

/*
@@ -625,11 +618,9 @@ static void dec_pending(struct dm_io *io, int error)
* Target requested pushing back the I/O.
*/
spin_lock_irqsave(&md->deferred_lock, flags);
- if (__noflush_suspending(md)) {
- if (!(io->bio->bi_rw & REQ_FLUSH))
- bio_list_add_head(&md->deferred,
- io->bio);
- } else
+ if (__noflush_suspending(md))
+ bio_list_add_head(&md->deferred, io->bio);
+ else
/* noflush suspend was interrupted. */
io->error = -EIO;
spin_unlock_irqrestore(&md->deferred_lock, flags);
@@ -637,26 +628,22 @@ static void dec_pending(struct dm_io *io, int error)

io_error = io->error;
bio = io->bio;
+ end_io_acct(io);
+ free_io(md, io);

- if (bio->bi_rw & REQ_FLUSH) {
+ if (io_error == DM_ENDIO_REQUEUE)
+ return;
+
+ if (!(bio->bi_rw & REQ_FLUSH) || !bio->bi_size) {
+ trace_block_bio_complete(md->queue, bio);
+ bio_endio(bio, io_error);
+ } else {
/*
- * There can be just one flush request so we use
- * a per-device variable for error reporting.
- * Note that you can't touch the bio after end_io_acct
+ * Preflush done for flush with data, reissue
+ * without REQ_FLUSH.
*/
- if (!md->flush_error)
- md->flush_error = io_error;
- end_io_acct(io);
- free_io(md, io);
- } else {
- end_io_acct(io);
- free_io(md, io);
-
- if (io_error != DM_ENDIO_REQUEUE) {
- trace_block_bio_complete(md->queue, bio);
-
- bio_endio(bio, io_error);
- }
+ bio->bi_rw &= ~REQ_FLUSH;
+ queue_io(md, bio);
}
}
}
@@ -1369,21 +1356,17 @@ static int __clone_and_map(struct clone_info *ci)
*/
static void __split_and_process_bio(struct mapped_device *md, struct bio *bio)
{
+ bool is_flush = bio->bi_rw & REQ_FLUSH;
struct clone_info ci;
int error = 0;

ci.map = dm_get_live_table(md);
if (unlikely(!ci.map)) {
- if (!(bio->bi_rw & REQ_FLUSH))
- bio_io_error(bio);
- else
- if (!md->flush_error)
- md->flush_error = -EIO;
+ bio_io_error(bio);
return;
}

ci.md = md;
- ci.bio = bio;
ci.io = alloc_io(md);
ci.io->error = 0;
atomic_set(&ci.io->io_count, 1);
@@ -1391,18 +1374,19 @@ static void __split_and_process_bio(struct mapped_device *md, struct bio *bio)
ci.io->md = md;
spin_lock_init(&ci.io->endio_lock);
ci.sector = bio->bi_sector;
- if (!(bio->bi_rw & REQ_FLUSH))
+ ci.idx = bio->bi_idx;
+
+ if (!is_flush) {
+ ci.bio = bio;
ci.sector_count = bio_sectors(bio);
- else {
- /* all FLUSH bio's reaching here should be empty */
- WARN_ON_ONCE(bio_has_data(bio));
+ } else {
+ ci.bio = &ci.md->flush_bio;
ci.sector_count = 1;
}
- ci.idx = bio->bi_idx;

start_io_acct(ci.io);
while (ci.sector_count && !error) {
- if (!(bio->bi_rw & REQ_FLUSH))
+ if (!is_flush)
error = __clone_and_map(&ci);
else
error = __clone_and_map_flush(&ci);
@@ -1490,22 +1474,14 @@ static int _dm_request(struct request_queue *q, struct bio *bio)
part_stat_add(cpu, &dm_disk(md)->part0, sectors[rw], bio_sectors(bio));
part_stat_unlock();

- /*
- * If we're suspended or the thread is processing flushes
- * we have to queue this io for later.
- */
- if (unlikely(test_bit(DMF_QUEUE_IO_TO_THREAD, &md->flags)) ||
- (bio->bi_rw & REQ_FLUSH)) {
+ /* if we're suspended, we have to queue this io for later */
+ if (unlikely(test_bit(DMF_BLOCK_IO_FOR_SUSPEND, &md->flags))) {
up_read(&md->io_lock);

- if (unlikely(test_bit(DMF_BLOCK_IO_FOR_SUSPEND, &md->flags)) &&
- bio_rw(bio) == READA) {
+ if (bio_rw(bio) != READA)
+ queue_io(md, bio);
+ else
bio_io_error(bio);
- return 0;
- }
-
- queue_io(md, bio);
-
return 0;
}

@@ -2018,6 +1994,10 @@ static struct mapped_device *alloc_dev(int minor)
if (!md->bdev)
goto bad_bdev;

+ bio_init(&md->flush_bio);
+ md->flush_bio.bi_bdev = md->bdev;
+ md->flush_bio.bi_rw = WRITE_FLUSH;
+
/* Populate the mapping, nobody knows we exist yet */
spin_lock(&_minor_lock);
old_md = idr_replace(&_minor_idr, md, minor);
@@ -2409,37 +2389,6 @@ static int dm_wait_for_completion(struct mapped_device *md, int interruptible)
return r;
}

-static void process_flush(struct mapped_device *md, struct bio *bio)
-{
- md->flush_error = 0;
-
- /* handle REQ_FLUSH */
- dm_wait_for_completion(md, TASK_UNINTERRUPTIBLE);
-
- bio_init(&md->flush_bio);
- md->flush_bio.bi_bdev = md->bdev;
- md->flush_bio.bi_rw = WRITE_FLUSH;
- __split_and_process_bio(md, &md->flush_bio);
-
- dm_wait_for_completion(md, TASK_UNINTERRUPTIBLE);
-
- /* if it's an empty flush or the preflush failed, we're done */
- if (!bio_has_data(bio) || md->flush_error) {
- if (md->flush_error != DM_ENDIO_REQUEUE)
- bio_endio(bio, md->flush_error);
- else {
- spin_lock_irq(&md->deferred_lock);
- bio_list_add_head(&md->deferred, bio);
- spin_unlock_irq(&md->deferred_lock);
- }
- return;
- }
-
- /* issue data + REQ_FUA */
- bio->bi_rw &= ~REQ_FLUSH;
- __split_and_process_bio(md, bio);
-}
-
/*
* Process the deferred bios
*/
@@ -2449,33 +2398,27 @@ static void dm_wq_work(struct work_struct *work)
work);
struct bio *c;

- down_write(&md->io_lock);
+ down_read(&md->io_lock);

while (!test_bit(DMF_BLOCK_IO_FOR_SUSPEND, &md->flags)) {
spin_lock_irq(&md->deferred_lock);
c = bio_list_pop(&md->deferred);
spin_unlock_irq(&md->deferred_lock);

- if (!c) {
- clear_bit(DMF_QUEUE_IO_TO_THREAD, &md->flags);
+ if (!c)
break;
- }

- up_write(&md->io_lock);
+ up_read(&md->io_lock);

if (dm_request_based(md))
generic_make_request(c);
- else {
- if (c->bi_rw & REQ_FLUSH)
- process_flush(md, c);
- else
- __split_and_process_bio(md, c);
- }
+ else
+ __split_and_process_bio(md, c);

- down_write(&md->io_lock);
+ down_read(&md->io_lock);
}

- up_write(&md->io_lock);
+ up_read(&md->io_lock);
}

static void dm_queue_flush(struct mapped_device *md)
@@ -2674,17 +2617,12 @@ int dm_suspend(struct mapped_device *md, unsigned suspend_flags)
*
* To get all processes out of __split_and_process_bio in dm_request,
* we take the write lock. To prevent any process from reentering
- * __split_and_process_bio from dm_request, we set
- * DMF_QUEUE_IO_TO_THREAD.
- *
- * To quiesce the thread (dm_wq_work), we set DMF_BLOCK_IO_FOR_SUSPEND
- * and call flush_workqueue(md->wq). flush_workqueue will wait until
- * dm_wq_work exits and DMF_BLOCK_IO_FOR_SUSPEND will prevent any
- * further calls to __split_and_process_bio from dm_wq_work.
+ * __split_and_process_bio from dm_request and quiesce the thread
+ * (dm_wq_work), we set DMF_BLOCK_IO_FOR_SUSPEND and call
+ * flush_workqueue(md->wq).
*/
down_write(&md->io_lock);
set_bit(DMF_BLOCK_IO_FOR_SUSPEND, &md->flags);
- set_bit(DMF_QUEUE_IO_TO_THREAD, &md->flags);
up_write(&md->io_lock);

/*
--
1.7.1

2010-08-27 17:17:40

by Tejun Heo

Subject: [PATCH 2/4] dm: implement REQ_FLUSH/FUA support

This patch converts dm to support REQ_FLUSH/FUA instead of now
deprecated REQ_HARDBARRIER.

For common parts,

* Barrier -EOPNOTSUPP handling is dropped from dm-io. This doesn't
modify the usual retry-on-error logic. Note that retrying FLUSHes
which the device failed can be dangerous, as such a failure sometimes
implies that data is already lost. A FIXME comment is added.

* Empty WRITE_BARRIERs are replaced with WRITE_FLUSHes.

* It's now guaranteed that all FLUSH bio's passed on to dm targets are
zero length. bio_empty_barrier() tests are replaced with REQ_FLUSH
tests.

* Dropped unlikely() around REQ_FLUSH tests. Flushes are not unlikely
enough to be marked with unlikely().

For bio-based dm,

* Preflush is handled as before, but postflush is dropped and replaced
with passing down REQ_FUA to member request_queues. This replaces
one array-wide cache flush with member-specific FUA writes.

* __split_and_process_bio() now calls __clone_and_map_flush() directly
for flushes and guarantees all FLUSH bio's going to targets are zero
length.

* -EOPNOTSUPP retry logic dropped.

* Block layer now filters out REQ_FLUSH/FUA bio's if the request_queue
doesn't support cache flushing. Advertise REQ_FLUSH | REQ_FUA
capability.

For request-based dm,

* Request-based dm used dm_wait_for_completion() to sequence barriers,
which depended on the block layer suppressing other requests while a
barrier sequence was in progress. The block layer now keeps issuing
other requests while a flush/FUA is in progress, so implement the
suppression directly in dm by making dm_request_fn() plug the queue
while a flush/FUA is in progress. Other than this, the FLUSH
implementation can be used as-is.

* As __blk_rq_prep_clone() copies REQ_FUA, just advertising FUA
support is enough to pass through REQ_FUA to targets.

* As all flushes are REQ_TYPE_FS, there's no reason to check whether a
REQ_TYPE_BLOCK_PC request is a barrier/flush. A superfluous check is
removed from dm_end_request(), as suggested by Christoph.

Lightly tested linear, stripe, raid1, snap and crypt targets. Please
proceed with caution as I'm not familiar with the code base.

Signed-off-by: Tejun Heo <[email protected]>
Cc: [email protected]
Cc: Christoph Hellwig <[email protected]>
---
drivers/md/dm-crypt.c | 2 +-
drivers/md/dm-io.c | 20 +---
drivers/md/dm-log.c | 2 +-
drivers/md/dm-raid1.c | 8 +-
drivers/md/dm-region-hash.c | 16 ++--
drivers/md/dm-snap-persistent.c | 2 +-
drivers/md/dm-snap.c | 6 +-
drivers/md/dm-stripe.c | 2 +-
drivers/md/dm.c | 201 +++++++++++++++++++--------------------
9 files changed, 122 insertions(+), 137 deletions(-)

diff --git a/drivers/md/dm-crypt.c b/drivers/md/dm-crypt.c
index 368e8e9..d5b0e4c 100644
--- a/drivers/md/dm-crypt.c
+++ b/drivers/md/dm-crypt.c
@@ -1278,7 +1278,7 @@ static int crypt_map(struct dm_target *ti, struct bio *bio,
struct dm_crypt_io *io;
struct crypt_config *cc;

- if (unlikely(bio_empty_barrier(bio))) {
+ if (bio->bi_rw & REQ_FLUSH) {
cc = ti->private;
bio->bi_bdev = cc->dev->bdev;
return DM_MAPIO_REMAPPED;
diff --git a/drivers/md/dm-io.c b/drivers/md/dm-io.c
index 0590c75..136d4f7 100644
--- a/drivers/md/dm-io.c
+++ b/drivers/md/dm-io.c
@@ -31,7 +31,6 @@ struct dm_io_client {
*/
struct io {
unsigned long error_bits;
- unsigned long eopnotsupp_bits;
atomic_t count;
struct task_struct *sleeper;
struct dm_io_client *client;
@@ -130,11 +129,8 @@ static void retrieve_io_and_region_from_bio(struct bio *bio, struct io **io,
*---------------------------------------------------------------*/
static void dec_count(struct io *io, unsigned int region, int error)
{
- if (error) {
+ if (error)
set_bit(region, &io->error_bits);
- if (error == -EOPNOTSUPP)
- set_bit(region, &io->eopnotsupp_bits);
- }

if (atomic_dec_and_test(&io->count)) {
if (io->sleeper)
@@ -310,8 +306,8 @@ static void do_region(int rw, unsigned region, struct dm_io_region *where,
sector_t remaining = where->count;

/*
- * where->count may be zero if rw holds a write barrier and we
- * need to send a zero-sized barrier.
+ * where->count may be zero if rw holds a flush and we need to
+ * send a zero-sized flush.
*/
do {
/*
@@ -364,7 +360,7 @@ static void dispatch_io(int rw, unsigned int num_regions,
*/
for (i = 0; i < num_regions; i++) {
*dp = old_pages;
- if (where[i].count || (rw & REQ_HARDBARRIER))
+ if (where[i].count || (rw & REQ_FLUSH))
do_region(rw, i, where + i, dp, io);
}

@@ -393,9 +389,7 @@ static int sync_io(struct dm_io_client *client, unsigned int num_regions,
return -EIO;
}

-retry:
io->error_bits = 0;
- io->eopnotsupp_bits = 0;
atomic_set(&io->count, 1); /* see dispatch_io() */
io->sleeper = current;
io->client = client;
@@ -412,11 +406,6 @@ retry:
}
set_current_state(TASK_RUNNING);

- if (io->eopnotsupp_bits && (rw & REQ_HARDBARRIER)) {
- rw &= ~REQ_HARDBARRIER;
- goto retry;
- }
-
if (error_bits)
*error_bits = io->error_bits;

@@ -437,7 +426,6 @@ static int async_io(struct dm_io_client *client, unsigned int num_regions,

io = mempool_alloc(client->pool, GFP_NOIO);
io->error_bits = 0;
- io->eopnotsupp_bits = 0;
atomic_set(&io->count, 1); /* see dispatch_io() */
io->sleeper = NULL;
io->client = client;
diff --git a/drivers/md/dm-log.c b/drivers/md/dm-log.c
index 5a08be0..33420e6 100644
--- a/drivers/md/dm-log.c
+++ b/drivers/md/dm-log.c
@@ -300,7 +300,7 @@ static int flush_header(struct log_c *lc)
.count = 0,
};

- lc->io_req.bi_rw = WRITE_BARRIER;
+ lc->io_req.bi_rw = WRITE_FLUSH;

return dm_io(&lc->io_req, 1, &null_location, NULL);
}
diff --git a/drivers/md/dm-raid1.c b/drivers/md/dm-raid1.c
index 7c081bc..19a59b0 100644
--- a/drivers/md/dm-raid1.c
+++ b/drivers/md/dm-raid1.c
@@ -259,7 +259,7 @@ static int mirror_flush(struct dm_target *ti)
struct dm_io_region io[ms->nr_mirrors];
struct mirror *m;
struct dm_io_request io_req = {
- .bi_rw = WRITE_BARRIER,
+ .bi_rw = WRITE_FLUSH,
.mem.type = DM_IO_KMEM,
.mem.ptr.bvec = NULL,
.client = ms->io_client,
@@ -629,7 +629,7 @@ static void do_write(struct mirror_set *ms, struct bio *bio)
struct dm_io_region io[ms->nr_mirrors], *dest = io;
struct mirror *m;
struct dm_io_request io_req = {
- .bi_rw = WRITE | (bio->bi_rw & WRITE_BARRIER),
+ .bi_rw = WRITE | (bio->bi_rw & WRITE_FLUSH_FUA),
.mem.type = DM_IO_BVEC,
.mem.ptr.bvec = bio->bi_io_vec + bio->bi_idx,
.notify.fn = write_callback,
@@ -670,7 +670,7 @@ static void do_writes(struct mirror_set *ms, struct bio_list *writes)
bio_list_init(&requeue);

while ((bio = bio_list_pop(writes))) {
- if (unlikely(bio_empty_barrier(bio))) {
+ if (bio->bi_rw & REQ_FLUSH) {
bio_list_add(&sync, bio);
continue;
}
@@ -1203,7 +1203,7 @@ static int mirror_end_io(struct dm_target *ti, struct bio *bio,
* We need to dec pending if this was a write.
*/
if (rw == WRITE) {
- if (likely(!bio_empty_barrier(bio)))
+ if (!(bio->bi_rw & REQ_FLUSH))
dm_rh_dec(ms->rh, map_context->ll);
return error;
}
diff --git a/drivers/md/dm-region-hash.c b/drivers/md/dm-region-hash.c
index bd5c58b..dad011a 100644
--- a/drivers/md/dm-region-hash.c
+++ b/drivers/md/dm-region-hash.c
@@ -81,9 +81,9 @@ struct dm_region_hash {
struct list_head failed_recovered_regions;

/*
- * If there was a barrier failure no regions can be marked clean.
+ * If there was a flush failure no regions can be marked clean.
*/
- int barrier_failure;
+ int flush_failure;

void *context;
sector_t target_begin;
@@ -217,7 +217,7 @@ struct dm_region_hash *dm_region_hash_create(
INIT_LIST_HEAD(&rh->quiesced_regions);
INIT_LIST_HEAD(&rh->recovered_regions);
INIT_LIST_HEAD(&rh->failed_recovered_regions);
- rh->barrier_failure = 0;
+ rh->flush_failure = 0;

rh->region_pool = mempool_create_kmalloc_pool(MIN_REGIONS,
sizeof(struct dm_region));
@@ -399,8 +399,8 @@ void dm_rh_mark_nosync(struct dm_region_hash *rh, struct bio *bio)
region_t region = dm_rh_bio_to_region(rh, bio);
int recovering = 0;

- if (bio_empty_barrier(bio)) {
- rh->barrier_failure = 1;
+ if (bio->bi_rw & REQ_FLUSH) {
+ rh->flush_failure = 1;
return;
}

@@ -524,7 +524,7 @@ void dm_rh_inc_pending(struct dm_region_hash *rh, struct bio_list *bios)
struct bio *bio;

for (bio = bios->head; bio; bio = bio->bi_next) {
- if (bio_empty_barrier(bio))
+ if (bio->bi_rw & REQ_FLUSH)
continue;
rh_inc(rh, dm_rh_bio_to_region(rh, bio));
}
@@ -555,9 +555,9 @@ void dm_rh_dec(struct dm_region_hash *rh, region_t region)
*/

/* do nothing for DM_RH_NOSYNC */
- if (unlikely(rh->barrier_failure)) {
+ if (unlikely(rh->flush_failure)) {
/*
- * If a write barrier failed some time ago, we
+ * If a write flush failed some time ago, we
* don't know whether or not this write made it
* to the disk, so we must resync the device.
*/
diff --git a/drivers/md/dm-snap-persistent.c b/drivers/md/dm-snap-persistent.c
index cc2bdb8..0b61792 100644
--- a/drivers/md/dm-snap-persistent.c
+++ b/drivers/md/dm-snap-persistent.c
@@ -687,7 +687,7 @@ static void persistent_commit_exception(struct dm_exception_store *store,
/*
* Commit exceptions to disk.
*/
- if (ps->valid && area_io(ps, WRITE_BARRIER))
+ if (ps->valid && area_io(ps, WRITE_FLUSH_FUA))
ps->valid = 0;

/*
diff --git a/drivers/md/dm-snap.c b/drivers/md/dm-snap.c
index 5974d30..eed2101 100644
--- a/drivers/md/dm-snap.c
+++ b/drivers/md/dm-snap.c
@@ -1587,7 +1587,7 @@ static int snapshot_map(struct dm_target *ti, struct bio *bio,
chunk_t chunk;
struct dm_snap_pending_exception *pe = NULL;

- if (unlikely(bio_empty_barrier(bio))) {
+ if (bio->bi_rw & REQ_FLUSH) {
bio->bi_bdev = s->cow->bdev;
return DM_MAPIO_REMAPPED;
}
@@ -1691,7 +1691,7 @@ static int snapshot_merge_map(struct dm_target *ti, struct bio *bio,
int r = DM_MAPIO_REMAPPED;
chunk_t chunk;

- if (unlikely(bio_empty_barrier(bio))) {
+ if (bio->bi_rw & REQ_FLUSH) {
if (!map_context->target_request_nr)
bio->bi_bdev = s->origin->bdev;
else
@@ -2135,7 +2135,7 @@ static int origin_map(struct dm_target *ti, struct bio *bio,
struct dm_dev *dev = ti->private;
bio->bi_bdev = dev->bdev;

- if (unlikely(bio_empty_barrier(bio)))
+ if (bio->bi_rw & REQ_FLUSH)
return DM_MAPIO_REMAPPED;

/* Only tell snapshots if this is a write */
diff --git a/drivers/md/dm-stripe.c b/drivers/md/dm-stripe.c
index c297f6d..f0371b4 100644
--- a/drivers/md/dm-stripe.c
+++ b/drivers/md/dm-stripe.c
@@ -271,7 +271,7 @@ static int stripe_map(struct dm_target *ti, struct bio *bio,
uint32_t stripe;
unsigned target_request_nr;

- if (unlikely(bio_empty_barrier(bio))) {
+ if (bio->bi_rw & REQ_FLUSH) {
target_request_nr = map_context->target_request_nr;
BUG_ON(target_request_nr >= sc->stripes);
bio->bi_bdev = sc->stripe[target_request_nr].dev->bdev;
diff --git a/drivers/md/dm.c b/drivers/md/dm.c
index b1d92be..52f4033 100644
--- a/drivers/md/dm.c
+++ b/drivers/md/dm.c
@@ -144,21 +144,21 @@ struct mapped_device {
spinlock_t deferred_lock;

/*
- * An error from the barrier request currently being processed.
+ * An error from the flush request currently being processed.
*/
- int barrier_error;
+ int flush_error;

/*
- * Protect barrier_error from concurrent endio processing
+ * Protect flush_error from concurrent endio processing
* in request-based dm.
*/
- spinlock_t barrier_error_lock;
+ spinlock_t flush_error_lock;

/*
- * Processing queue (flush/barriers)
+ * Processing queue (flush)
*/
struct workqueue_struct *wq;
- struct work_struct barrier_work;
+ struct work_struct flush_work;

/* A pointer to the currently processing pre/post flush request */
struct request *flush_request;
@@ -200,8 +200,8 @@ struct mapped_device {
/* sysfs handle */
struct kobject kobj;

- /* zero-length barrier that will be cloned and submitted to targets */
- struct bio barrier_bio;
+ /* zero-length flush that will be cloned and submitted to targets */
+ struct bio flush_bio;
};

/*
@@ -512,7 +512,7 @@ static void end_io_acct(struct dm_io *io)

/*
* After this is decremented the bio must not be touched if it is
- * a barrier.
+ * a flush.
*/
dm_disk(md)->part0.in_flight[rw] = pending =
atomic_dec_return(&md->pending[rw]);
@@ -626,7 +626,7 @@ static void dec_pending(struct dm_io *io, int error)
*/
spin_lock_irqsave(&md->deferred_lock, flags);
if (__noflush_suspending(md)) {
- if (!(io->bio->bi_rw & REQ_HARDBARRIER))
+ if (!(io->bio->bi_rw & REQ_FLUSH))
bio_list_add_head(&md->deferred,
io->bio);
} else
@@ -638,20 +638,14 @@ static void dec_pending(struct dm_io *io, int error)
io_error = io->error;
bio = io->bio;

- if (bio->bi_rw & REQ_HARDBARRIER) {
+ if (bio->bi_rw & REQ_FLUSH) {
/*
- * There can be just one barrier request so we use
+ * There can be just one flush request so we use
* a per-device variable for error reporting.
* Note that you can't touch the bio after end_io_acct
- *
- * We ignore -EOPNOTSUPP for empty flush reported by
- * underlying devices. We assume that if the device
- * doesn't support empty barriers, it doesn't need
- * cache flushing commands.
*/
- if (!md->barrier_error &&
- !(bio_empty_barrier(bio) && io_error == -EOPNOTSUPP))
- md->barrier_error = io_error;
+ if (!md->flush_error)
+ md->flush_error = io_error;
end_io_acct(io);
free_io(md, io);
} else {
@@ -755,21 +749,23 @@ static void end_clone_bio(struct bio *clone, int error)
blk_update_request(tio->orig, 0, nr_bytes);
}

-static void store_barrier_error(struct mapped_device *md, int error)
+static void store_flush_error(struct mapped_device *md, int error)
{
unsigned long flags;

- spin_lock_irqsave(&md->barrier_error_lock, flags);
+ spin_lock_irqsave(&md->flush_error_lock, flags);
/*
- * Basically, the first error is taken, but:
- * -EOPNOTSUPP supersedes any I/O error.
- * Requeue request supersedes any I/O error but -EOPNOTSUPP.
+ * Basically, the first error is taken, but requeue request
+ * supersedes any I/O error.
+ *
+ * FIXME: Depending on device and error type, a FLUSH failure
+ * might imply that data is already lost. This needs to be
+ * updated such that hard FLUSH IO failures trump requeue
+ * requests, not the other way around.
*/
- if (!md->barrier_error || error == -EOPNOTSUPP ||
- (md->barrier_error != -EOPNOTSUPP &&
- error == DM_ENDIO_REQUEUE))
- md->barrier_error = error;
- spin_unlock_irqrestore(&md->barrier_error_lock, flags);
+ if (!md->flush_error || error == DM_ENDIO_REQUEUE)
+ md->flush_error = error;
+ spin_unlock_irqrestore(&md->flush_error_lock, flags);
}

/*
@@ -810,12 +806,11 @@ static void dm_end_request(struct request *clone, int error)
{
int rw = rq_data_dir(clone);
int run_queue = 1;
- bool is_barrier = clone->cmd_flags & REQ_HARDBARRIER;
struct dm_rq_target_io *tio = clone->end_io_data;
struct mapped_device *md = tio->md;
struct request *rq = tio->orig;

- if (rq->cmd_type == REQ_TYPE_BLOCK_PC && !is_barrier) {
+ if (rq->cmd_type == REQ_TYPE_BLOCK_PC) {
rq->errors = clone->errors;
rq->resid_len = clone->resid_len;

@@ -830,9 +825,9 @@ static void dm_end_request(struct request *clone, int error)

free_rq_clone(clone);

- if (unlikely(is_barrier)) {
+ if (clone->cmd_flags & REQ_FLUSH) {
if (unlikely(error))
- store_barrier_error(md, error);
+ store_flush_error(md, error);
run_queue = 0;
} else
blk_end_request_all(rq, error);
@@ -862,9 +857,9 @@ void dm_requeue_unmapped_request(struct request *clone)
struct request_queue *q = rq->q;
unsigned long flags;

- if (unlikely(clone->cmd_flags & REQ_HARDBARRIER)) {
+ if (clone->cmd_flags & REQ_FLUSH) {
/*
- * Barrier clones share an original request.
+ * Flush clones share an original request.
* Leave it to dm_end_request(), which handles this special
* case.
*/
@@ -961,14 +956,14 @@ static void dm_complete_request(struct request *clone, int error)
struct dm_rq_target_io *tio = clone->end_io_data;
struct request *rq = tio->orig;

- if (unlikely(clone->cmd_flags & REQ_HARDBARRIER)) {
+ if (clone->cmd_flags & REQ_FLUSH) {
/*
- * Barrier clones share an original request. So can't use
+ * Flush clones share an original request. So can't use
* softirq_done with the original.
* Pass the clone to dm_done() directly in this special case.
* It is safe (even if clone->q->queue_lock is held here)
* because there is no I/O dispatching during the completion
- * of barrier clone.
+ * of flush clone.
*/
dm_done(clone, error, true);
return;
@@ -990,9 +985,9 @@ void dm_kill_unmapped_request(struct request *clone, int error)
struct dm_rq_target_io *tio = clone->end_io_data;
struct request *rq = tio->orig;

- if (unlikely(clone->cmd_flags & REQ_HARDBARRIER)) {
+ if (clone->cmd_flags & REQ_FLUSH) {
/*
- * Barrier clones share an original request.
+ * Flush clones share an original request.
* Leave it to dm_end_request(), which handles this special
* case.
*/
@@ -1119,7 +1114,7 @@ static void dm_bio_destructor(struct bio *bio)
}

/*
- * Creates a little bio that is just does part of a bvec.
+ * Creates a little bio that just does part of a bvec.
*/
static struct bio *split_bvec(struct bio *bio, sector_t sector,
unsigned short idx, unsigned int offset,
@@ -1134,7 +1129,7 @@ static struct bio *split_bvec(struct bio *bio, sector_t sector,

clone->bi_sector = sector;
clone->bi_bdev = bio->bi_bdev;
- clone->bi_rw = bio->bi_rw & ~REQ_HARDBARRIER;
+ clone->bi_rw = bio->bi_rw;
clone->bi_vcnt = 1;
clone->bi_size = to_bytes(len);
clone->bi_io_vec->bv_offset = offset;
@@ -1161,7 +1156,6 @@ static struct bio *clone_bio(struct bio *bio, sector_t sector,

clone = bio_alloc_bioset(GFP_NOIO, bio->bi_max_vecs, bs);
__bio_clone(clone, bio);
- clone->bi_rw &= ~REQ_HARDBARRIER;
clone->bi_destructor = dm_bio_destructor;
clone->bi_sector = sector;
clone->bi_idx = idx;
@@ -1225,7 +1219,7 @@ static void __issue_target_requests(struct clone_info *ci, struct dm_target *ti,
__issue_target_request(ci, ti, request_nr, len);
}

-static int __clone_and_map_empty_barrier(struct clone_info *ci)
+static int __clone_and_map_flush(struct clone_info *ci)
{
unsigned target_nr = 0;
struct dm_target *ti;
@@ -1289,9 +1283,6 @@ static int __clone_and_map(struct clone_info *ci)
sector_t len = 0, max;
struct dm_target_io *tio;

- if (unlikely(bio_empty_barrier(bio)))
- return __clone_and_map_empty_barrier(ci);
-
if (unlikely(bio->bi_rw & REQ_DISCARD))
return __clone_and_map_discard(ci);

@@ -1383,11 +1374,11 @@ static void __split_and_process_bio(struct mapped_device *md, struct bio *bio)

ci.map = dm_get_live_table(md);
if (unlikely(!ci.map)) {
- if (!(bio->bi_rw & REQ_HARDBARRIER))
+ if (!(bio->bi_rw & REQ_FLUSH))
bio_io_error(bio);
else
- if (!md->barrier_error)
- md->barrier_error = -EIO;
+ if (!md->flush_error)
+ md->flush_error = -EIO;
return;
}

@@ -1400,14 +1391,22 @@ static void __split_and_process_bio(struct mapped_device *md, struct bio *bio)
ci.io->md = md;
spin_lock_init(&ci.io->endio_lock);
ci.sector = bio->bi_sector;
- ci.sector_count = bio_sectors(bio);
- if (unlikely(bio_empty_barrier(bio)))
+ if (!(bio->bi_rw & REQ_FLUSH))
+ ci.sector_count = bio_sectors(bio);
+ else {
+ /* all FLUSH bio's reaching here should be empty */
+ WARN_ON_ONCE(bio_has_data(bio));
ci.sector_count = 1;
+ }
ci.idx = bio->bi_idx;

start_io_acct(ci.io);
- while (ci.sector_count && !error)
- error = __clone_and_map(&ci);
+ while (ci.sector_count && !error) {
+ if (!(bio->bi_rw & REQ_FLUSH))
+ error = __clone_and_map(&ci);
+ else
+ error = __clone_and_map_flush(&ci);
+ }

/* drop the extra reference count */
dec_pending(ci.io, error);
@@ -1492,11 +1491,11 @@ static int _dm_request(struct request_queue *q, struct bio *bio)
part_stat_unlock();

/*
- * If we're suspended or the thread is processing barriers
+ * If we're suspended or the thread is processing flushes
* we have to queue this io for later.
*/
if (unlikely(test_bit(DMF_QUEUE_IO_TO_THREAD, &md->flags)) ||
- unlikely(bio->bi_rw & REQ_HARDBARRIER)) {
+ (bio->bi_rw & REQ_FLUSH)) {
up_read(&md->io_lock);

if (unlikely(test_bit(DMF_BLOCK_IO_FOR_SUSPEND, &md->flags)) &&
@@ -1595,7 +1594,7 @@ static int setup_clone(struct request *clone, struct request *rq,
if (dm_rq_is_flush_request(rq)) {
blk_rq_init(NULL, clone);
clone->cmd_type = REQ_TYPE_FS;
- clone->cmd_flags |= (REQ_HARDBARRIER | WRITE);
+ clone->cmd_flags |= WRITE_FLUSH;
} else {
r = blk_rq_prep_clone(clone, rq, tio->md->bs, GFP_ATOMIC,
dm_rq_bio_constructor, tio);
@@ -1735,15 +1734,18 @@ static void dm_request_fn(struct request_queue *q)
* dm_suspend().
*/
while (!blk_queue_plugged(q) && !blk_queue_stopped(q)) {
+ if (md->flush_request)
+ goto plug_and_out;
+
rq = blk_peek_request(q);
if (!rq)
goto plug_and_out;

- if (unlikely(dm_rq_is_flush_request(rq))) {
+ if (dm_rq_is_flush_request(rq)) {
BUG_ON(md->flush_request);
md->flush_request = rq;
blk_start_request(rq);
- queue_work(md->wq, &md->barrier_work);
+ queue_work(md->wq, &md->flush_work);
goto out;
}

@@ -1918,7 +1920,7 @@ out:
static const struct block_device_operations dm_blk_dops;

static void dm_wq_work(struct work_struct *work);
-static void dm_rq_barrier_work(struct work_struct *work);
+static void dm_rq_flush_work(struct work_struct *work);

static void dm_init_md_queue(struct mapped_device *md)
{
@@ -1940,6 +1942,7 @@ static void dm_init_md_queue(struct mapped_device *md)
blk_queue_bounce_limit(md->queue, BLK_BOUNCE_ANY);
md->queue->unplug_fn = dm_unplug_all;
blk_queue_merge_bvec(md->queue, dm_merge_bvec);
+ blk_queue_flush(md->queue, REQ_FLUSH | REQ_FUA);
}

/*
@@ -1972,7 +1975,7 @@ static struct mapped_device *alloc_dev(int minor)
mutex_init(&md->suspend_lock);
mutex_init(&md->type_lock);
spin_lock_init(&md->deferred_lock);
- spin_lock_init(&md->barrier_error_lock);
+ spin_lock_init(&md->flush_error_lock);
rwlock_init(&md->map_lock);
atomic_set(&md->holders, 1);
atomic_set(&md->open_count, 0);
@@ -1995,7 +1998,7 @@ static struct mapped_device *alloc_dev(int minor)
atomic_set(&md->pending[1], 0);
init_waitqueue_head(&md->wait);
INIT_WORK(&md->work, dm_wq_work);
- INIT_WORK(&md->barrier_work, dm_rq_barrier_work);
+ INIT_WORK(&md->flush_work, dm_rq_flush_work);
init_waitqueue_head(&md->eventq);

md->disk->major = _major;
@@ -2245,7 +2248,7 @@ static int dm_init_request_based_queue(struct mapped_device *md)
blk_queue_softirq_done(md->queue, dm_softirq_done);
blk_queue_prep_rq(md->queue, dm_prep_fn);
blk_queue_lld_busy(md->queue, dm_lld_busy);
- blk_queue_flush(md->queue, REQ_FLUSH);
+ blk_queue_flush(md->queue, REQ_FLUSH | REQ_FUA);

elv_register_queue(md->queue);

@@ -2406,41 +2409,35 @@ static int dm_wait_for_completion(struct mapped_device *md, int interruptible)
return r;
}

-static void dm_flush(struct mapped_device *md)
+static void process_flush(struct mapped_device *md, struct bio *bio)
{
- dm_wait_for_completion(md, TASK_UNINTERRUPTIBLE);
-
- bio_init(&md->barrier_bio);
- md->barrier_bio.bi_bdev = md->bdev;
- md->barrier_bio.bi_rw = WRITE_BARRIER;
- __split_and_process_bio(md, &md->barrier_bio);
+ md->flush_error = 0;

+ /* handle REQ_FLUSH */
dm_wait_for_completion(md, TASK_UNINTERRUPTIBLE);
-}

-static void process_barrier(struct mapped_device *md, struct bio *bio)
-{
- md->barrier_error = 0;
+ bio_init(&md->flush_bio);
+ md->flush_bio.bi_bdev = md->bdev;
+ md->flush_bio.bi_rw = WRITE_FLUSH;
+ __split_and_process_bio(md, &md->flush_bio);

- dm_flush(md);
+ dm_wait_for_completion(md, TASK_UNINTERRUPTIBLE);

- if (!bio_empty_barrier(bio)) {
- __split_and_process_bio(md, bio);
- /*
- * If the request isn't supported, don't waste time with
- * the second flush.
- */
- if (md->barrier_error != -EOPNOTSUPP)
- dm_flush(md);
+ /* if it's an empty flush or the preflush failed, we're done */
+ if (!bio_has_data(bio) || md->flush_error) {
+ if (md->flush_error != DM_ENDIO_REQUEUE)
+ bio_endio(bio, md->flush_error);
+ else {
+ spin_lock_irq(&md->deferred_lock);
+ bio_list_add_head(&md->deferred, bio);
+ spin_unlock_irq(&md->deferred_lock);
+ }
+ return;
}

- if (md->barrier_error != DM_ENDIO_REQUEUE)
- bio_endio(bio, md->barrier_error);
- else {
- spin_lock_irq(&md->deferred_lock);
- bio_list_add_head(&md->deferred, bio);
- spin_unlock_irq(&md->deferred_lock);
- }
+ /* issue data + REQ_FUA */
+ bio->bi_rw &= ~REQ_FLUSH;
+ __split_and_process_bio(md, bio);
}

/*
@@ -2469,8 +2466,8 @@ static void dm_wq_work(struct work_struct *work)
if (dm_request_based(md))
generic_make_request(c);
else {
- if (c->bi_rw & REQ_HARDBARRIER)
- process_barrier(md, c);
+ if (c->bi_rw & REQ_FLUSH)
+ process_flush(md, c);
else
__split_and_process_bio(md, c);
}
@@ -2495,8 +2492,8 @@ static void dm_rq_set_target_request_nr(struct request *clone, unsigned request_
tio->info.target_request_nr = request_nr;
}

-/* Issue barrier requests to targets and wait for their completion. */
-static int dm_rq_barrier(struct mapped_device *md)
+/* Issue flush requests to targets and wait for their completion. */
+static int dm_rq_flush(struct mapped_device *md)
{
int i, j;
struct dm_table *map = dm_get_live_table(md);
@@ -2504,7 +2501,7 @@ static int dm_rq_barrier(struct mapped_device *md)
struct dm_target *ti;
struct request *clone;

- md->barrier_error = 0;
+ md->flush_error = 0;

for (i = 0; i < num_targets; i++) {
ti = dm_table_get_target(map, i);
@@ -2519,26 +2516,26 @@ static int dm_rq_barrier(struct mapped_device *md)
dm_wait_for_completion(md, TASK_UNINTERRUPTIBLE);
dm_table_put(map);

- return md->barrier_error;
+ return md->flush_error;
}

-static void dm_rq_barrier_work(struct work_struct *work)
+static void dm_rq_flush_work(struct work_struct *work)
{
int error;
struct mapped_device *md = container_of(work, struct mapped_device,
- barrier_work);
+ flush_work);
struct request_queue *q = md->queue;
struct request *rq;
unsigned long flags;

/*
* Hold the md reference here and leave it at the last part so that
- * the md can't be deleted by device opener when the barrier request
+ * the md can't be deleted by device opener when the flush request
* completes.
*/
dm_get(md);

- error = dm_rq_barrier(md);
+ error = dm_rq_flush(md);

rq = md->flush_request;
md->flush_request = NULL;
@@ -2691,7 +2688,7 @@ int dm_suspend(struct mapped_device *md, unsigned suspend_flags)
up_write(&md->io_lock);

/*
- * Request-based dm uses md->wq for barrier (dm_rq_barrier_work) which
+ * Request-based dm uses md->wq for flush (dm_rq_flush_work) which
* can be kicked until md->queue is stopped. So stop md->queue before
* flushing md->wq.
*/
--
1.7.1

2010-08-27 17:21:21

by Tejun Heo

Subject: Re: [PATCHSET 2.6.36-rc2] block, dm: finish REQ_FLUSH/FUA conversion

On 08/27/2010 07:10 PM, Tejun Heo wrote:
> The dm conversion is _lightly_ tested. Please proceed with caution.
> In particular, I haven't tested dm implementation at all.
                                  ^^
                                  request-based dm

--
tejun

2010-08-27 17:57:33

by Mike Snitzer

Subject: Re: [PATCH 1/4] block: make __blk_rq_prep_clone() copy most command flags

On Fri, Aug 27 2010 at 1:10pm -0400,
Tejun Heo <[email protected]> wrote:

> Currently __blk_rq_prep_clone() copies only REQ_WRITE and REQ_DISCARD.
> There's no reason to omit other command flags and REQ_FUA needs to be
> copied to implement FUA support in request-based dm.
>
> REQ_COMMON_MASK which specifies flags to be copied from bio to request
> already identifies all the command flags. Define REQ_CLONE_MASK to be
> the same as REQ_COMMON_MASK for clarity and make __blk_rq_prep_clone()
> copy all flags in the mask.
>
> Signed-off-by: Tejun Heo <[email protected]>

Looks great.

Acked-by: Mike Snitzer <[email protected]>

2010-08-27 20:24:33

by Mike Snitzer

Subject: Re: [PATCH 2/4] dm: implement REQ_FLUSH/FUA support

On Fri, Aug 27 2010 at 1:10pm -0400,
Tejun Heo <[email protected]> wrote:

> This patch converts dm to support REQ_FLUSH/FUA instead of now
> deprecated REQ_HARDBARRIER.

Thanks for your continued work on this!

> * As __blk_rq_prep_clone() copies REQ_FUA, just advertising FUA
> support is enough to pass through REQ_FUA to targets.

You're doing blk_queue_flush(md->queue, REQ_FLUSH | REQ_FUA); in 2
places:
1) generic dm_init_md_queue -- used for bio-based and request-based
2) request-based specific dm_init_request_based_queue.

Interestingly, we never used the old blk_queue_ordered() method for
bio-based DM, yet it is now using blk_queue_flush().

But how can we blindly assume/advertise REQ_FUA?

Should we be taking more care to check each block device that DM
consumes to see if FUA is supported and only then advertise REQ_FUA?
DM already does this for discard support (see:
dm_table_supports_discards).

> Lightly tested linear, stripe, raid1, snap and crypt targets.

I tested the bio-based code with the LVM2 test suite and all tests
passed.

> Please proceed with caution as I'm not familiar with the code base.

As I shared in an earlier (private) mail, I'm unfortunately having
problems with request-based DM (when all patches in this series are
applied). I'll be working on that more.

BTW, we can eliminate the dm_rq_is_flush_request() wrapper, right? I
think hch mentioned this at some point in one of the various threads.

Mike

2010-08-27 23:27:57

by Tejun Heo

Subject: Re: [PATCH 2/4] dm: implement REQ_FLUSH/FUA support

Hello,

On 08/27/2010 10:24 PM, Mike Snitzer wrote:
>> * As __blk_rq_prep_clone() copies REQ_FUA, just advertising FUA
>> support is enough to pass through REQ_FUA to targets.
>
> You're doing blk_queue_flush(md->queue, REQ_FLUSH | REQ_FUA); in 2
> places:
> 1) generic dm_init_md_queue -- used for bio-based and request-based
> 2) request-based specific dm_init_request_based_queue.

Well, there are two places creating queues.

> Interestingly, we never used the old blk_queue_ordered() method for
> bio-based DM yet it is now using blk_queue_flush().

Yeap, because now __make_request() filters out REQ_FLUSH bio's.

> But how can we blindly assume/advertise REQ_FUA?
>
> Should we be taking more care to check each block device that DM
> consumes to see if FUA is supported and only then advertise REQ_FUA?
> DM already does this for discard support (see:
> dm_table_supports_discards).

Nope, REQ_FUA will be interpreted by queues lower in the stack.
Drivers in the middle just need to pass them through.

>> Lightly tested linear, stripe, raid1, snap and crypt targets.
>
> I tested the bio-based code with the LVM2 test suite and all tests
> passed.
>
>> Please proceed with caution as I'm not familiar with the code base.
>
> As I shared in an earlier (private) mail, I'm unfortunately having
> problems with request-based DM (when all patches in this series are
> applied). I'll be working on that more.

Heh... I probably should set up a simple dm-mpath and test it. I'll
do it this weekend.

> BTW, we can eliminate the dm_rq_is_flush_request() wrapper right? I
> think hch mentioned this at some point in one of the various threads.

Sure, that's a rather silly wrapper at this point.

Thanks.

--
tejun

2010-08-28 00:36:01

by Mike Snitzer

Subject: Re: [PATCH 2/4] dm: implement REQ_FLUSH/FUA support

On Fri, Aug 27 2010 at 7:28pm -0400,
Tejun Heo <[email protected]> wrote:

> Hello,
>
> On 08/27/2010 10:24 PM, Mike Snitzer wrote:
> >> * As __blk_rq_prep_clone() copies REQ_FUA, just advertising FUA
> >> support is enough to pass through REQ_FUA to targets.
> >
> > You're doing blk_queue_flush(md->queue, REQ_FLUSH | REQ_FUA); in 2
> > places:
> > 1) generic dm_init_md_queue -- used for bio-based and request-based
> > 2) request-based specific dm_init_request_based_queue.
>
> Well, there are two places creating queues.

Actually only alloc_dev() allocates the queue. It'll then initialize it
for bio-based use via dm_init_md_queue() -- this initialization is a
subset of that done for a request-based DM device's queue. The
remaining queue initialization for rq-based is done in
dm_init_request_based_queue() -- with the call to
blk_init_allocated_queue().

So the request-based queue is initialized in two stages.

dm_init_md_queue() is common to both bio-based and request-based. So
we probably only need the one blk_queue_flush() in dm_init_md_queue().
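
IOW, something like this should be enough (just a sketch, keeping the
call in the common init path and dropping the duplicate):

	/* in dm_init_md_queue(), common to bio-based and request-based */
	blk_queue_flush(md->queue, REQ_FLUSH | REQ_FUA);

	/* ... and drop the second blk_queue_flush() call from
	 * dm_init_request_based_queue() */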

> > But how can we blindly assume/advertise REQ_FUA?
> >
> > Should we be taking more care to check each block device that DM
> > consumes to see if FUA is supported and only then advertise REQ_FUA?
> > DM already does this for discard support (see:
> > dm_table_supports_discards).
>
> Nope, REQ_FUA will be interpreted by queues lower in the stack.
> Drivers in the middle just need to pass them through.

I thought that was likely the case, thanks for clarifying.

> >> Lightly tested linear, stripe, raid1, snap and crypt targets.
> >
> > I tested the bio-based code with the LVM2 test suite and all tests
> > passed.
> >
> >> Please proceed with caution as I'm not familiar with the code base.
> >
> > As I shared in an earlier (private) mail, I'm unfortunately having
> > problems with request-based DM (when all patches in this series are
> > applied). I'll be working on that more.
>
> Heh... I probably should set up a simple dm-mpath and test it. I'll
> do it this weekend.

OK, like I mentioned earlier in one of these threads, it's easy enough
to use multipath with a single scsi-debug device. That way you can
also create a discard-capable multipath device without physical
hardware, e.g.:

# modprobe scsi_debug dev_size_mb=100 unmap_max_desc=16 unmap_granularity=2048 sector_size=4096
<edit multipath.conf to not blacklist all devices, also enable
'user_friendly_names yes' in defaults section>
# /etc/init.d/multipathd restart
# multipath -ll

From here you can just format the mpath device with ext4 or whatever.
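
For example (assuming user_friendly_names gave you 'mpatha', as in the
filter example below; the mount point is arbitrary):

# mkfs.ext4 /dev/mapper/mpatha
# mount /dev/mapper/mpatha /mnt/test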

But if you want to use LVM ontop of the multipath device you'll need to
allow LVM to treat DM devices as physical volumes, see FAQ #3 here:
http://christophe.varoqui.free.fr/faq.html

It also helps to tweak your 'preferred_names' and 'filter' in lvm.conf,
e.g.:
preferred_names = [ "^/dev/mpath/", "^/dev/mapper/mpath", "^/dev/[hs]d" ]
filter = [ "a|/dev/mapper/mpatha|", "a|/dev/vd.*|", "a|/dev/sdd|", "a|/dev/sde|", "r|.*|" ]

The 'lvmdiskscan' command should show the mpath device.

Now you can use that test script I provided in my earlier mail.

Mike