There are a couple of new block layer commands we are trying to add support
for in the near term:
compare and write
http://www.spinics.net/lists/target-devel/msg07826.html
copy offload/extended copy/xcopy
https://www.redhat.com/archives/dm-devel/2014-July/msg00070.html
The problem is that if we continue to add more commands, we will one day have
to extend the cmd_flags/bi_rw fields again. To prevent that, this patchset
separates the operation (REQ_WRITE, REQ_DISCARD, REQ_WRITE_SAME, etc) from
the flags (REQ_SYNC, REQ_QUIET, etc) in the bio and request structs. At the
end of this set, we will have two fields: bio->bi_op/request->op and
bio->bi_rw/request->cmd_flags.
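As a rough sketch of the end state (a sketch only; the field names are the
ones listed above), a caller stops encoding everything in one bitmap:

	/* today: one bitmap carries both the operation and the flags */
	bio->bi_rw = WRITE | REQ_SYNC;

	/* after this set: the operation and the modifier flags are split */
	bio->bi_op = REQ_OP_WRITE;	/* REQ_OP_* operation */
	bio->bi_rw = REQ_SYNC;		/* rq_flag_bits modifiers only */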
The patches were made against the for-linus branch of Jens's linux-block tree:
https://git.kernel.org/cgit/linux/kernel/git/axboe/linux-block.git/log/?h=for-linus
(last commit a22c4d7e34402ccdf3414f64c50365436eba7b93).
I have done some basic testing of a lot of the drivers and filesystems,
but I wanted to get comments before trying to track down more
hardware/systems for testing.
Known issues:
- REQ_FLUSH is still a flag, but should probably be an operation.
For lower-level drivers like SCSI, where we only ever get a flush, it
makes more sense as an operation. However, upper layers like filesystems
can send down flushes combined with writes, so for them it is more of a
flag (see the WRITE_FLUSH_FUA sketch after this list). I am still working
on this.
- There is a regression with the dm flakey target. It currently
cannot corrupt the operation values.
- The patchset is a little awkward. It touches so much code,
but I wanted to maintain git bisectability, so there is a lot of compat code
left around until the last patches, where everything is cleaned up.
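To illustrate the REQ_FLUSH point above: the fs.h write types that
filesystems use already fold the flush/FUA semantics into a write (these
are the definitions from include/linux/fs.h at the time of this posting):

	#define WRITE_FLUSH		(WRITE | REQ_SYNC | REQ_NOIDLE | REQ_FLUSH)
	#define WRITE_FUA		(WRITE | REQ_SYNC | REQ_NOIDLE | REQ_FUA)
	#define WRITE_FLUSH_FUA	(WRITE | REQ_SYNC | REQ_NOIDLE | \
					 REQ_FLUSH | REQ_FUA)

So from a filesystem's point of view REQ_FLUSH modifies a write, while a
driver that receives an empty flush sees it as the operation itself.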
From: Mike Christie <[email protected]>
This patch adds definitions for request/bio operations which
will be used in the next patches.
For compat reasons, the initial patches keep the REQ_OP values equal to
the matching REQ flags while all the code is converted over the course of
this set. The aliasing is removed in the last patches.
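Because of the aliasing, OR-ing one of the new op values into an old-style
bitmap is a no-op during the conversion. For example (a sketch of why the
compat values keep the old bitmaps intact):

	/*
	 * REQ_OP_WRITE == REQ_WRITE for now, and WRITE_SYNC already
	 * contains REQ_WRITE, so this builds the same bitmap old code
	 * passed to submit_bio().
	 */
	submit_bio(REQ_OP_WRITE | WRITE_SYNC, bio);

Once everything is converted, the REQ_OP values get their own number space
and the aliasing is dropped.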
Signed-off-by: Mike Christie <[email protected]>
---
include/linux/blk_types.h | 7 +++++++
1 file changed, 7 insertions(+)
diff --git a/include/linux/blk_types.h b/include/linux/blk_types.h
index e813013..d7b6009 100644
--- a/include/linux/blk_types.h
+++ b/include/linux/blk_types.h
@@ -244,4 +244,11 @@ enum rq_flag_bits {
#define REQ_MQ_INFLIGHT (1ULL << __REQ_MQ_INFLIGHT)
#define REQ_NO_TIMEOUT (1ULL << __REQ_NO_TIMEOUT)
+enum req_op {
+ REQ_OP_READ,
+ REQ_OP_WRITE = REQ_WRITE,
+ REQ_OP_DISCARD = REQ_DISCARD,
+ REQ_OP_WRITE_SAME = REQ_WRITE_SAME,
+};
+
#endif /* __LINUX_BLK_TYPES_H */
--
1.8.3.1
From: Mike Christie <[email protected]>
This patch prepares the submit_bio_wait callers for the next
patches, which split bi_rw into an operation field and a flags field.
Instead of passing in a bitmap with both the operation and
flags mixed in, the callers now pass them in separately.
Temporary issue: when the fs.h read/write types, like WRITE_SYNC or
WRITE_FUA, are used, we still pass the operation along with the
flags in the flags argument. Once all the code has been converted,
that will be cleaned up. It is left in here for compat and git-bisect
use, and to try to keep the patches smaller.
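Concretely, a conversion in this patch looks like:

	- ret = submit_bio_wait(WRITE_FLUSH, bio);
	+ ret = submit_bio_wait(REQ_OP_WRITE, WRITE_FLUSH, bio);

WRITE_FLUSH still carries REQ_WRITE, so OR-ing the op and flags back
together inside submit_bio_wait() reproduces the old bitmap.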
Signed-off-by: Mike Christie <[email protected]>
---
block/bio.c | 8 ++++----
block/blk-flush.c | 2 +-
drivers/md/bcache/debug.c | 4 ++--
drivers/md/md.c | 2 +-
drivers/md/raid1.c | 2 +-
drivers/md/raid10.c | 2 +-
fs/btrfs/check-integrity.c | 8 ++++----
fs/btrfs/check-integrity.h | 2 +-
fs/btrfs/extent_io.c | 2 +-
fs/btrfs/scrub.c | 6 +++---
fs/ext4/crypto.c | 2 +-
fs/f2fs/segment.c | 4 ++--
fs/hfsplus/hfsplus_fs.h | 2 +-
fs/hfsplus/part_tbl.c | 5 +++--
fs/hfsplus/super.c | 6 ++++--
fs/hfsplus/wrapper.c | 14 ++++++++------
fs/logfs/dev_bdev.c | 2 +-
include/linux/bio.h | 2 +-
kernel/power/swap.c | 30 ++++++++++++++++++------------
19 files changed, 58 insertions(+), 47 deletions(-)
diff --git a/block/bio.c b/block/bio.c
index ad3f276..610c704 100644
--- a/block/bio.c
+++ b/block/bio.c
@@ -859,21 +859,21 @@ static void submit_bio_wait_endio(struct bio *bio)
/**
* submit_bio_wait - submit a bio, and wait until it completes
- * @rw: whether to %READ or %WRITE, or maybe to %READA (read ahead)
+ * @op: REQ_OP_*
+ * @flags: rq_flag_bits
* @bio: The &struct bio which describes the I/O
*
* Simple wrapper around submit_bio(). Returns 0 on success, or the error from
* bio_endio() on failure.
*/
-int submit_bio_wait(int rw, struct bio *bio)
+int submit_bio_wait(int op, int flags, struct bio *bio)
{
struct submit_bio_ret ret;
- rw |= REQ_SYNC;
init_completion(&ret.event);
bio->bi_private = &ret;
bio->bi_end_io = submit_bio_wait_endio;
- submit_bio(rw, bio);
+ submit_bio(op | flags | REQ_SYNC, bio);
wait_for_completion(&ret.event);
return ret.error;
diff --git a/block/blk-flush.c b/block/blk-flush.c
index 9c423e5..f707ba1 100644
--- a/block/blk-flush.c
+++ b/block/blk-flush.c
@@ -485,7 +485,7 @@ int blkdev_issue_flush(struct block_device *bdev, gfp_t gfp_mask,
bio = bio_alloc(gfp_mask, 0);
bio->bi_bdev = bdev;
- ret = submit_bio_wait(WRITE_FLUSH, bio);
+ ret = submit_bio_wait(REQ_OP_WRITE, WRITE_FLUSH, bio);
/*
* The driver must store the error location in ->bi_sector, if
diff --git a/drivers/md/bcache/debug.c b/drivers/md/bcache/debug.c
index 8b1f1d5..001f5f1 100644
--- a/drivers/md/bcache/debug.c
+++ b/drivers/md/bcache/debug.c
@@ -54,7 +54,7 @@ void bch_btree_verify(struct btree *b)
bio->bi_iter.bi_size = KEY_SIZE(&v->key) << 9;
bch_bio_map(bio, sorted);
- submit_bio_wait(REQ_META|READ_SYNC, bio);
+ submit_bio_wait(REQ_OP_READ, READ_SYNC, bio);
bch_bbio_free(bio, b->c);
memcpy(ondisk, sorted, KEY_SIZE(&v->key) << 9);
@@ -117,7 +117,7 @@ void bch_data_verify(struct cached_dev *dc, struct bio *bio)
if (bio_alloc_pages(check, GFP_NOIO))
goto out_put;
- submit_bio_wait(READ_SYNC, check);
+ submit_bio_wait(REQ_OP_READ, READ_SYNC, check);
bio_for_each_segment(bv, bio, iter) {
void *p1 = kmap_atomic(bv.bv_page);
diff --git a/drivers/md/md.c b/drivers/md/md.c
index c702de1..1ca5959 100644
--- a/drivers/md/md.c
+++ b/drivers/md/md.c
@@ -771,7 +771,7 @@ int sync_page_io(struct md_rdev *rdev, sector_t sector, int size,
else
bio->bi_iter.bi_sector = sector + rdev->data_offset;
bio_add_page(bio, page, size, 0);
- submit_bio_wait(rw, bio);
+ submit_bio_wait(rw, 0, bio);
ret = !bio->bi_error;
bio_put(bio);
diff --git a/drivers/md/raid1.c b/drivers/md/raid1.c
index d9d031e..527fdf5 100644
--- a/drivers/md/raid1.c
+++ b/drivers/md/raid1.c
@@ -2195,7 +2195,7 @@ static int narrow_write_error(struct r1bio *r1_bio, int i)
bio_trim(wbio, sector - r1_bio->sector, sectors);
wbio->bi_iter.bi_sector += rdev->data_offset;
wbio->bi_bdev = rdev->bdev;
- if (submit_bio_wait(WRITE, wbio) < 0)
+ if (submit_bio_wait(REQ_OP_WRITE, 0, wbio) < 0)
/* failure! */
ok = rdev_set_badblocks(rdev, sector,
sectors, 0)
diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
index 96f3659..69352a6 100644
--- a/drivers/md/raid10.c
+++ b/drivers/md/raid10.c
@@ -2470,7 +2470,7 @@ static int narrow_write_error(struct r10bio *r10_bio, int i)
choose_data_offset(r10_bio, rdev) +
(sector - r10_bio->sector));
wbio->bi_bdev = rdev->bdev;
- if (submit_bio_wait(WRITE, wbio) < 0)
+ if (submit_bio_wait(REQ_OP_WRITE, 0, wbio) < 0)
/* Failure! */
ok = rdev_set_badblocks(rdev, sector,
sectors, 0)
diff --git a/fs/btrfs/check-integrity.c b/fs/btrfs/check-integrity.c
index 541fbfa..fd50b2f 100644
--- a/fs/btrfs/check-integrity.c
+++ b/fs/btrfs/check-integrity.c
@@ -1695,7 +1695,7 @@ static int btrfsic_read_block(struct btrfsic_state *state,
"btrfsic: error, failed to add a single page!\n");
return -1;
}
- if (submit_bio_wait(READ, bio)) {
+ if (submit_bio_wait(REQ_OP_READ, 0, bio)) {
printk(KERN_INFO
"btrfsic: read error at logical %llu dev %s!\n",
block_ctx->start, block_ctx->dev->name);
@@ -3064,10 +3064,10 @@ void btrfsic_submit_bio(int rw, struct bio *bio)
submit_bio(rw, bio);
}
-int btrfsic_submit_bio_wait(int rw, struct bio *bio)
+int btrfsic_submit_bio_wait(int op, int op_flags, struct bio *bio)
{
- __btrfsic_submit_bio(rw, bio);
- return submit_bio_wait(rw, bio);
+ __btrfsic_submit_bio(op | op_flags, bio);
+ return submit_bio_wait(op, op_flags, bio);
}
int btrfsic_mount(struct btrfs_root *root,
diff --git a/fs/btrfs/check-integrity.h b/fs/btrfs/check-integrity.h
index 13b8566..13b0d54 100644
--- a/fs/btrfs/check-integrity.h
+++ b/fs/btrfs/check-integrity.h
@@ -22,7 +22,7 @@
#ifdef CONFIG_BTRFS_FS_CHECK_INTEGRITY
int btrfsic_submit_bh(int rw, struct buffer_head *bh);
void btrfsic_submit_bio(int rw, struct bio *bio);
-int btrfsic_submit_bio_wait(int rw, struct bio *bio);
+int btrfsic_submit_bio_wait(int op, int op_flags, struct bio *bio);
#else
#define btrfsic_submit_bh submit_bh
#define btrfsic_submit_bio submit_bio
diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
index 3915c94..9a68a83 100644
--- a/fs/btrfs/extent_io.c
+++ b/fs/btrfs/extent_io.c
@@ -2071,7 +2071,7 @@ int repair_io_failure(struct inode *inode, u64 start, u64 length, u64 logical,
bio->bi_bdev = dev->bdev;
bio_add_page(bio, page, length, pg_offset);
- if (btrfsic_submit_bio_wait(WRITE_SYNC, bio)) {
+ if (btrfsic_submit_bio_wait(REQ_OP_WRITE, WRITE_SYNC, bio)) {
/* try to remap that extent elsewhere? */
bio_put(bio);
btrfs_dev_stat_inc_and_print(dev, BTRFS_DEV_STAT_WRITE_ERRS);
diff --git a/fs/btrfs/scrub.c b/fs/btrfs/scrub.c
index a39f5d1..9cb6f9a 100644
--- a/fs/btrfs/scrub.c
+++ b/fs/btrfs/scrub.c
@@ -1510,7 +1510,7 @@ static void scrub_recheck_block(struct btrfs_fs_info *fs_info,
} else {
bio->bi_iter.bi_sector = page->physical >> 9;
- if (btrfsic_submit_bio_wait(READ, bio))
+ if (btrfsic_submit_bio_wait(REQ_OP_READ, 0, bio))
sblock->no_io_error_seen = 0;
}
@@ -1644,7 +1644,7 @@ static int scrub_repair_page_from_good_copy(struct scrub_block *sblock_bad,
return -EIO;
}
- if (btrfsic_submit_bio_wait(WRITE, bio)) {
+ if (btrfsic_submit_bio_wait(REQ_OP_WRITE, 0, bio)) {
btrfs_dev_stat_inc_and_print(page_bad->dev,
BTRFS_DEV_STAT_WRITE_ERRS);
btrfs_dev_replace_stats_inc(
@@ -4397,7 +4397,7 @@ leave_with_eio:
return -EIO;
}
- if (btrfsic_submit_bio_wait(WRITE_SYNC, bio))
+ if (btrfsic_submit_bio_wait(REQ_OP_WRITE, WRITE_SYNC, bio))
goto leave_with_eio;
bio_put(bio);
diff --git a/fs/ext4/crypto.c b/fs/ext4/crypto.c
index 4573155..2782a95 100644
--- a/fs/ext4/crypto.c
+++ b/fs/ext4/crypto.c
@@ -444,7 +444,7 @@ int ext4_encrypted_zeroout(struct inode *inode, struct ext4_extent *ex)
bio_put(bio);
goto errout;
}
- err = submit_bio_wait(WRITE, bio);
+ err = submit_bio_wait(REQ_OP_WRITE, 0, bio);
bio_put(bio);
if (err)
goto errout;
diff --git a/fs/f2fs/segment.c b/fs/f2fs/segment.c
index 78e6d06..7d2972b 100644
--- a/fs/f2fs/segment.c
+++ b/fs/f2fs/segment.c
@@ -340,7 +340,7 @@ repeat:
fcc->dispatch_list = llist_reverse_order(fcc->dispatch_list);
bio->bi_bdev = sbi->sb->s_bdev;
- ret = submit_bio_wait(WRITE_FLUSH, bio);
+ ret = submit_bio_wait(REQ_OP_WRITE, WRITE_FLUSH, bio);
llist_for_each_entry_safe(cmd, next,
fcc->dispatch_list, llnode) {
@@ -372,7 +372,7 @@ int f2fs_issue_flush(struct f2fs_sb_info *sbi)
int ret;
bio->bi_bdev = sbi->sb->s_bdev;
- ret = submit_bio_wait(WRITE_FLUSH, bio);
+ ret = submit_bio_wait(REQ_OP_WRITE, WRITE_FLUSH, bio);
bio_put(bio);
return ret;
}
diff --git a/fs/hfsplus/hfsplus_fs.h b/fs/hfsplus/hfsplus_fs.h
index f91a1fa..7288375 100644
--- a/fs/hfsplus/hfsplus_fs.h
+++ b/fs/hfsplus/hfsplus_fs.h
@@ -525,7 +525,7 @@ int hfsplus_compare_dentry(const struct dentry *parent,
/* wrapper.c */
int hfsplus_submit_bio(struct super_block *sb, sector_t sector, void *buf,
- void **data, int rw);
+ void **data, int op, int flags);
int hfsplus_read_wrapper(struct super_block *sb);
/* time macros */
diff --git a/fs/hfsplus/part_tbl.c b/fs/hfsplus/part_tbl.c
index eb355d8..63164eb 100644
--- a/fs/hfsplus/part_tbl.c
+++ b/fs/hfsplus/part_tbl.c
@@ -112,7 +112,8 @@ static int hfs_parse_new_pmap(struct super_block *sb, void *buf,
if ((u8 *)pm - (u8 *)buf >= buf_size) {
res = hfsplus_submit_bio(sb,
*part_start + HFS_PMAP_BLK + i,
- buf, (void **)&pm, READ);
+ buf, (void **)&pm, REQ_OP_READ,
+ 0);
if (res)
return res;
}
@@ -136,7 +137,7 @@ int hfs_part_find(struct super_block *sb,
return -ENOMEM;
res = hfsplus_submit_bio(sb, *part_start + HFS_PMAP_BLK,
- buf, &data, READ);
+ buf, &data, REQ_OP_READ, 0);
if (res)
goto out;
diff --git a/fs/hfsplus/super.c b/fs/hfsplus/super.c
index 7302d96..f9ea6aa 100644
--- a/fs/hfsplus/super.c
+++ b/fs/hfsplus/super.c
@@ -219,7 +219,8 @@ static int hfsplus_sync_fs(struct super_block *sb, int wait)
error2 = hfsplus_submit_bio(sb,
sbi->part_start + HFSPLUS_VOLHEAD_SECTOR,
- sbi->s_vhdr_buf, NULL, WRITE_SYNC);
+ sbi->s_vhdr_buf, NULL, REQ_OP_WRITE,
+ WRITE_SYNC);
if (!error)
error = error2;
if (!write_backup)
@@ -227,7 +228,8 @@ static int hfsplus_sync_fs(struct super_block *sb, int wait)
error2 = hfsplus_submit_bio(sb,
sbi->part_start + sbi->sect_count - 2,
- sbi->s_backup_vhdr_buf, NULL, WRITE_SYNC);
+ sbi->s_backup_vhdr_buf, NULL, REQ_OP_WRITE,
+ WRITE_SYNC);
if (!error)
error2 = error;
out:
diff --git a/fs/hfsplus/wrapper.c b/fs/hfsplus/wrapper.c
index cc62356..ac5750b 100644
--- a/fs/hfsplus/wrapper.c
+++ b/fs/hfsplus/wrapper.c
@@ -30,7 +30,8 @@ struct hfsplus_wd {
* @sector: block to read or write, for blocks of HFSPLUS_SECTOR_SIZE bytes
* @buf: buffer for I/O
* @data: output pointer for location of requested data
- * @rw: direction of I/O
+ * @op: direction of I/O
+ * @flags: request op flags
*
* The unit of I/O is hfsplus_min_io_size(sb), which may be bigger than
* HFSPLUS_SECTOR_SIZE, and @buf must be sized accordingly. On reads
@@ -44,7 +45,7 @@ struct hfsplus_wd {
* will work correctly.
*/
int hfsplus_submit_bio(struct super_block *sb, sector_t sector,
- void *buf, void **data, int rw)
+ void *buf, void **data, int op, int flags)
{
struct bio *bio;
int ret = 0;
@@ -66,7 +67,7 @@ int hfsplus_submit_bio(struct super_block *sb, sector_t sector,
bio->bi_iter.bi_sector = sector;
bio->bi_bdev = sb->s_bdev;
- if (!(rw & WRITE) && data)
+ if (!(op == WRITE) && data)
*data = (u8 *)buf + offset;
while (io_size > 0) {
@@ -83,7 +84,7 @@ int hfsplus_submit_bio(struct super_block *sb, sector_t sector,
buf = (u8 *)buf + len;
}
- ret = submit_bio_wait(rw, bio);
+ ret = submit_bio_wait(op, flags, bio);
out:
bio_put(bio);
return ret < 0 ? ret : 0;
@@ -181,7 +182,7 @@ int hfsplus_read_wrapper(struct super_block *sb)
reread:
error = hfsplus_submit_bio(sb, part_start + HFSPLUS_VOLHEAD_SECTOR,
sbi->s_vhdr_buf, (void **)&sbi->s_vhdr,
- READ);
+ REQ_OP_READ, 0);
if (error)
goto out_free_backup_vhdr;
@@ -213,7 +214,8 @@ reread:
error = hfsplus_submit_bio(sb, part_start + part_size - 2,
sbi->s_backup_vhdr_buf,
- (void **)&sbi->s_backup_vhdr, READ);
+ (void **)&sbi->s_backup_vhdr, REQ_OP_READ,
+ 0);
if (error)
goto out_free_backup_vhdr;
diff --git a/fs/logfs/dev_bdev.c b/fs/logfs/dev_bdev.c
index a7fdbd8..149c480 100644
--- a/fs/logfs/dev_bdev.c
+++ b/fs/logfs/dev_bdev.c
@@ -30,7 +30,7 @@ static int sync_request(struct page *page, struct block_device *bdev, int rw)
bio.bi_iter.bi_sector = page->index * (PAGE_SIZE >> 9);
bio.bi_iter.bi_size = PAGE_SIZE;
- return submit_bio_wait(rw, &bio);
+ return submit_bio_wait(rw, 0, &bio);
}
static int bdev_readpage(void *_sb, struct page *page)
diff --git a/include/linux/bio.h b/include/linux/bio.h
index b9b6e04..7796e0b 100644
--- a/include/linux/bio.h
+++ b/include/linux/bio.h
@@ -441,7 +441,7 @@ static inline void bio_io_error(struct bio *bio)
struct request_queue;
extern int bio_phys_segments(struct request_queue *, struct bio *);
-extern int submit_bio_wait(int rw, struct bio *bio);
+extern int submit_bio_wait(int op, int flags, struct bio *bio);
extern void bio_advance(struct bio *, unsigned);
extern void bio_init(struct bio *);
diff --git a/kernel/power/swap.c b/kernel/power/swap.c
index b2066fb..aa4ca997 100644
--- a/kernel/power/swap.c
+++ b/kernel/power/swap.c
@@ -250,7 +250,7 @@ static void hib_end_io(struct bio *bio)
bio_put(bio);
}
-static int hib_submit_io(int rw, pgoff_t page_off, void *addr,
+static int hib_submit_io(int rw, int flags, pgoff_t page_off, void *addr,
struct hib_bio_batch *hb)
{
struct page *page = virt_to_page(addr);
@@ -274,7 +274,7 @@ static int hib_submit_io(int rw, pgoff_t page_off, void *addr,
atomic_inc(&hb->count);
submit_bio(rw, bio);
} else {
- error = submit_bio_wait(rw, bio);
+ error = submit_bio_wait(rw, flags, bio);
bio_put(bio);
}
@@ -295,7 +295,8 @@ static int mark_swapfiles(struct swap_map_handle *handle, unsigned int flags)
{
int error;
- hib_submit_io(READ_SYNC, swsusp_resume_block, swsusp_header, NULL);
+ hib_submit_io(REQ_OP_READ, READ_SYNC, swsusp_resume_block,
+ swsusp_header, NULL);
if (!memcmp("SWAP-SPACE",swsusp_header->sig, 10) ||
!memcmp("SWAPSPACE2",swsusp_header->sig, 10)) {
memcpy(swsusp_header->orig_sig,swsusp_header->sig, 10);
@@ -304,8 +305,8 @@ static int mark_swapfiles(struct swap_map_handle *handle, unsigned int flags)
swsusp_header->flags = flags;
if (flags & SF_CRC32_MODE)
swsusp_header->crc32 = handle->crc32;
- error = hib_submit_io(WRITE_SYNC, swsusp_resume_block,
- swsusp_header, NULL);
+ error = hib_submit_io(REQ_OP_WRITE, WRITE_SYNC,
+ swsusp_resume_block, swsusp_header, NULL);
} else {
printk(KERN_ERR "PM: Swap header not found!\n");
error = -ENODEV;
@@ -378,7 +379,7 @@ static int write_page(void *buf, sector_t offset, struct hib_bio_batch *hb)
} else {
src = buf;
}
- return hib_submit_io(WRITE_SYNC, offset, src, hb);
+ return hib_submit_io(REQ_OP_WRITE, WRITE_SYNC, offset, src, hb);
}
static void release_swap_writer(struct swap_map_handle *handle)
@@ -981,7 +982,8 @@ static int get_swap_reader(struct swap_map_handle *handle,
return -ENOMEM;
}
- error = hib_submit_io(READ_SYNC, offset, tmp->map, NULL);
+ error = hib_submit_io(REQ_OP_READ, READ_SYNC, offset,
+ tmp->map, NULL);
if (error) {
release_swap_reader(handle);
return error;
@@ -1005,7 +1007,7 @@ static int swap_read_page(struct swap_map_handle *handle, void *buf,
offset = handle->cur->entries[handle->k];
if (!offset)
return -EFAULT;
- error = hib_submit_io(READ_SYNC, offset, buf, hb);
+ error = hib_submit_io(REQ_OP_READ, READ_SYNC, offset, buf, hb);
if (error)
return error;
if (++handle->k >= MAP_PAGE_ENTRIES) {
@@ -1507,7 +1509,8 @@ int swsusp_check(void)
if (!IS_ERR(hib_resume_bdev)) {
set_blocksize(hib_resume_bdev, PAGE_SIZE);
clear_page(swsusp_header);
- error = hib_submit_io(READ_SYNC, swsusp_resume_block,
+ error = hib_submit_io(REQ_OP_READ, READ_SYNC,
+ swsusp_resume_block,
swsusp_header, NULL);
if (error)
goto put;
@@ -1515,7 +1518,8 @@ int swsusp_check(void)
if (!memcmp(HIBERNATE_SIG, swsusp_header->sig, 10)) {
memcpy(swsusp_header->sig, swsusp_header->orig_sig, 10);
/* Reset swap signature now */
- error = hib_submit_io(WRITE_SYNC, swsusp_resume_block,
+ error = hib_submit_io(REQ_OP_WRITE, WRITE_SYNC,
+ swsusp_resume_block,
swsusp_header, NULL);
} else {
error = -EINVAL;
@@ -1559,10 +1563,12 @@ int swsusp_unmark(void)
{
int error;
- hib_submit_io(READ_SYNC, swsusp_resume_block, swsusp_header, NULL);
+ hib_submit_io(REQ_OP_READ, READ_SYNC, swsusp_resume_block,
+ swsusp_header, NULL);
if (!memcmp(HIBERNATE_SIG,swsusp_header->sig, 10)) {
memcpy(swsusp_header->sig,swsusp_header->orig_sig, 10);
- error = hib_submit_io(WRITE_SYNC, swsusp_resume_block,
+ error = hib_submit_io(REQ_OP_WRITE, WRITE_SYNC,
+ swsusp_resume_block,
swsusp_header, NULL);
} else {
printk(KERN_ERR "PM: Cannot find swsusp signature!\n");
--
1.8.3.1
From: Mike Christie <[email protected]>
Instead of passing around a bitmap of ops and flags, the
next patches separate it into an op field and a flags field.
This patch prepares the dio code and the dio->submit_bio users
for the split.
Note that a later patch will fix up the submit_bio() call
along with the other users of that function.
Signed-off-by: Mike Christie <[email protected]>
---
fs/btrfs/inode.c | 9 ++++-----
fs/direct-io.c | 34 +++++++++++++++++++++-------------
include/linux/fs.h | 4 ++--
3 files changed, 27 insertions(+), 20 deletions(-)
diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
index 611b66d..0ad8bab 100644
--- a/fs/btrfs/inode.c
+++ b/fs/btrfs/inode.c
@@ -8196,14 +8196,13 @@ out_err:
return 0;
}
-static void btrfs_submit_direct(int rw, struct bio *dio_bio,
+static void btrfs_submit_direct(int op, int op_flags, struct bio *dio_bio,
struct inode *inode, loff_t file_offset)
{
struct btrfs_dio_private *dip = NULL;
struct bio *io_bio = NULL;
struct btrfs_io_bio *btrfs_bio;
int skip_sum;
- int write = rw & REQ_WRITE;
int ret = 0;
skip_sum = BTRFS_I(inode)->flags & BTRFS_INODE_NODATASUM;
@@ -8232,14 +8231,14 @@ static void btrfs_submit_direct(int rw, struct bio *dio_bio,
btrfs_bio = btrfs_io_bio(io_bio);
btrfs_bio->logical = file_offset;
- if (write) {
+ if (op == REQ_OP_WRITE) {
io_bio->bi_end_io = btrfs_endio_direct_write;
} else {
io_bio->bi_end_io = btrfs_endio_direct_read;
dip->subio_endio = btrfs_subio_endio_read;
}
- ret = btrfs_submit_direct_hook(rw, dip, skip_sum);
+ ret = btrfs_submit_direct_hook(op | op_flags, dip, skip_sum);
if (!ret)
return;
@@ -8267,7 +8266,7 @@ free_ordered:
dip = NULL;
io_bio = NULL;
} else {
- if (write) {
+ if (op == REQ_OP_WRITE) {
struct btrfs_ordered_extent *ordered;
ordered = btrfs_lookup_ordered_extent(inode,
diff --git a/fs/direct-io.c b/fs/direct-io.c
index 1125629..5e1b1a0 100644
--- a/fs/direct-io.c
+++ b/fs/direct-io.c
@@ -108,7 +108,8 @@ struct dio_submit {
/* dio_state communicated between submission path and end_io */
struct dio {
int flags; /* doesn't change */
- int rw;
+ int op;
+ int op_flags;
struct inode *inode;
loff_t i_size; /* i_size when submitted */
dio_iodone_t *end_io; /* IO completion function */
@@ -160,7 +161,7 @@ static inline int dio_refill_pages(struct dio *dio, struct dio_submit *sdio)
ret = iov_iter_get_pages(sdio->iter, dio->pages, LONG_MAX, DIO_PAGES,
&sdio->from);
- if (ret < 0 && sdio->blocks_available && (dio->rw & WRITE)) {
+ if (ret < 0 && sdio->blocks_available && (dio->op == REQ_OP_WRITE)) {
struct page *page = ZERO_PAGE(0);
/*
* A memory fault, but the filesystem has some outstanding
@@ -239,7 +240,8 @@ static ssize_t dio_complete(struct dio *dio, loff_t offset, ssize_t ret,
transferred = dio->result;
/* Check for short read case */
- if ((dio->rw == READ) && ((offset + transferred) > dio->i_size))
+ if ((dio->op == REQ_OP_READ) &&
+ ((offset + transferred) > dio->i_size))
transferred = dio->i_size - offset;
}
@@ -257,7 +259,7 @@ static ssize_t dio_complete(struct dio *dio, loff_t offset, ssize_t ret,
inode_dio_end(dio->inode);
if (is_async) {
- if (dio->rw & WRITE) {
+ if (dio->op == REQ_OP_WRITE) {
int err;
err = generic_write_sync(dio->iocb->ki_filp, offset,
@@ -393,14 +395,14 @@ static inline void dio_bio_submit(struct dio *dio, struct dio_submit *sdio)
dio->refcount++;
spin_unlock_irqrestore(&dio->bio_lock, flags);
- if (dio->is_async && dio->rw == READ)
+ if (dio->is_async && dio->op == REQ_OP_READ)
bio_set_pages_dirty(bio);
if (sdio->submit_io)
- sdio->submit_io(dio->rw, bio, dio->inode,
+ sdio->submit_io(dio->op, dio->op_flags, bio, dio->inode,
sdio->logical_offset_in_bio);
else
- submit_bio(dio->rw, bio);
+ submit_bio(dio->op | dio->op_flags, bio);
sdio->bio = NULL;
sdio->boundary = 0;
@@ -464,14 +466,14 @@ static int dio_bio_complete(struct dio *dio, struct bio *bio)
if (bio->bi_error)
dio->io_error = -EIO;
- if (dio->is_async && dio->rw == READ) {
+ if (dio->is_async && dio->op == REQ_OP_READ) {
bio_check_pages_dirty(bio); /* transfers ownership */
err = bio->bi_error;
} else {
bio_for_each_segment_all(bvec, bio, i) {
struct page *page = bvec->bv_page;
- if (dio->rw == READ && !PageCompound(page))
+ if (dio->op == REQ_OP_READ && !PageCompound(page))
set_page_dirty_lock(page);
page_cache_release(page);
}
@@ -623,7 +625,7 @@ static int get_more_blocks(struct dio *dio, struct dio_submit *sdio,
* which may decide to handle it or also return an unmapped
* buffer head.
*/
- create = dio->rw & WRITE;
+ create = dio->op == REQ_OP_WRITE;
if (dio->flags & DIO_SKIP_HOLES) {
if (sdio->block_in_file < (i_size_read(dio->inode) >>
sdio->blkbits))
@@ -773,7 +775,7 @@ submit_page_section(struct dio *dio, struct dio_submit *sdio, struct page *page,
{
int ret = 0;
- if (dio->rw & WRITE) {
+ if (dio->op == REQ_OP_WRITE) {
/*
* Read accounting is performed in submit_bio()
*/
@@ -973,7 +975,7 @@ do_holes:
loff_t i_size_aligned;
/* AKPM: eargh, -ENOTBLK is a hack */
- if (dio->rw & WRITE) {
+ if (dio->op == REQ_OP_WRITE) {
page_cache_release(page);
return -ENOTBLK;
}
@@ -1176,7 +1178,13 @@ do_blockdev_direct_IO(struct kiocb *iocb, struct inode *inode,
dio->is_async = true;
dio->inode = inode;
- dio->rw = iov_iter_rw(iter) == WRITE ? WRITE_ODIRECT : READ;
+ if (iov_iter_rw(iter) == WRITE) {
+ dio->op = REQ_OP_WRITE;
+ dio->op_flags = WRITE_ODIRECT;
+ } else {
+ dio->op = REQ_OP_READ;
+ dio->op_flags = 0;
+ }
/*
* For AIO O_(D)SYNC writes we need to defer completions to a workqueue
diff --git a/include/linux/fs.h b/include/linux/fs.h
index 72d8a84..601b842 100644
--- a/include/linux/fs.h
+++ b/include/linux/fs.h
@@ -2678,8 +2678,8 @@ extern int generic_file_open(struct inode * inode, struct file * filp);
extern int nonseekable_open(struct inode * inode, struct file * filp);
#ifdef CONFIG_BLOCK
-typedef void (dio_submit_t)(int rw, struct bio *bio, struct inode *inode,
- loff_t file_offset);
+typedef void (dio_submit_t)(int op, int op_flags, struct bio *bio,
+ struct inode *inode, loff_t file_offset);
enum {
/* need locking between buffered and direct access */
--
1.8.3.1
From: Mike Christie <[email protected]>
The next patches will prepare the submit_bio users
for the split. There were a lot more users than
there were for submit_bio_wait, so when the conversion
was not a one-liner, I broke it out into its own
patch.
This patch prepares blkdev_issue_discard.
There is some compat code left, which will be dropped
in later patches in the series:
1. REQ_WRITE is still being set. This is because a lot
of code assumes it will be set for discards, flushes and
write sames (see the bio_data_dir() note below).
2. submit_bio is still taking a bitmap. This is to keep
the series git bisectable.
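To illustrate point 1: code throughout the stack derives the data
direction from the REQ_WRITE bit, so clearing it for discards at this
stage would confuse those checks. For example (from include/linux/bio.h
at the time of this posting; REQ_WRITE is bit 0):

	#define bio_data_dir(bio)	((bio)->bi_rw & 1)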
Signed-off-by: Mike Christie <[email protected]>
---
block/blk-lib.c | 7 ++++---
1 file changed, 4 insertions(+), 3 deletions(-)
diff --git a/block/blk-lib.c b/block/blk-lib.c
index 9ebf653..0861c7a 100644
--- a/block/blk-lib.c
+++ b/block/blk-lib.c
@@ -42,7 +42,8 @@ int blkdev_issue_discard(struct block_device *bdev, sector_t sector,
{
DECLARE_COMPLETION_ONSTACK(wait);
struct request_queue *q = bdev_get_queue(bdev);
- int type = REQ_WRITE | REQ_DISCARD;
+ int op = REQ_OP_DISCARD;
+ int op_flags = REQ_WRITE;
unsigned int granularity;
int alignment;
struct bio_batch bb;
@@ -63,7 +64,7 @@ int blkdev_issue_discard(struct block_device *bdev, sector_t sector,
if (flags & BLKDEV_DISCARD_SECURE) {
if (!blk_queue_secdiscard(q))
return -EOPNOTSUPP;
- type |= REQ_SECURE;
+ op_flags |= REQ_SECURE;
}
atomic_set(&bb.done, 1);
@@ -108,7 +109,7 @@ int blkdev_issue_discard(struct block_device *bdev, sector_t sector,
sector = end_sect;
atomic_inc(&bb.done);
- submit_bio(type, bio);
+ submit_bio(op | op_flags, bio);
/*
* We can loop for a long time in here, if someone does
--
1.8.3.1
From: Mike Christie <[email protected]>
This patch prepares drbd's submit_bio use for the next
patches, which split bi_rw into an operation field and a flags field.
Instead of passing in a bitmap with both the operation and
flags mixed in, the callers will now pass them in separately.
This patch modifies the code related to the submit_bio calls
so the flags and operation are separated. When this is done
for all code, one of the later patches in the series will
modify the actual submit_bio call, so the patches are bisectable.
Signed-off-by: Mike Christie <[email protected]>
---
drivers/block/drbd/drbd_actlog.c | 30 ++++++++++++++++--------------
drivers/block/drbd/drbd_bitmap.c | 4 ++--
drivers/block/drbd/drbd_int.h | 2 +-
drivers/block/drbd/drbd_main.c | 5 +++--
4 files changed, 22 insertions(+), 19 deletions(-)
diff --git a/drivers/block/drbd/drbd_actlog.c b/drivers/block/drbd/drbd_actlog.c
index b3868e7..c290e8b 100644
--- a/drivers/block/drbd/drbd_actlog.c
+++ b/drivers/block/drbd/drbd_actlog.c
@@ -137,19 +137,19 @@ void wait_until_done_or_force_detached(struct drbd_device *device, struct drbd_b
static int _drbd_md_sync_page_io(struct drbd_device *device,
struct drbd_backing_dev *bdev,
- sector_t sector, int rw)
+ sector_t sector, int op)
{
struct bio *bio;
/* we do all our meta data IO in aligned 4k blocks. */
const int size = 4096;
- int err;
+ int err, op_flags = 0;
device->md_io.done = 0;
device->md_io.error = -ENODEV;
- if ((rw & WRITE) && !test_bit(MD_NO_FUA, &device->flags))
- rw |= REQ_FUA | REQ_FLUSH;
- rw |= REQ_SYNC | REQ_NOIDLE;
+ if ((op == REQ_OP_WRITE) && !test_bit(MD_NO_FUA, &device->flags))
+ op_flags |= REQ_FUA | REQ_FLUSH;
+ op_flags |= REQ_SYNC | REQ_NOIDLE;
bio = bio_alloc_drbd(GFP_NOIO);
bio->bi_bdev = bdev->md_bdev;
@@ -159,9 +159,9 @@ static int _drbd_md_sync_page_io(struct drbd_device *device,
goto out;
bio->bi_private = device;
bio->bi_end_io = drbd_md_endio;
- bio->bi_rw = rw;
+ bio->bi_rw = op | op_flags;
- if (!(rw & WRITE) && device->state.disk == D_DISKLESS && device->ldev == NULL)
+ if (op != REQ_OP_WRITE && device->state.disk == D_DISKLESS && device->ldev == NULL)
/* special case, drbd_md_read() during drbd_adm_attach(): no get_ldev */
;
else if (!get_ldev_if_state(device, D_ATTACHING)) {
@@ -174,10 +174,10 @@ static int _drbd_md_sync_page_io(struct drbd_device *device,
bio_get(bio); /* one bio_put() is in the completion handler */
atomic_inc(&device->md_io.in_use); /* drbd_md_put_buffer() is in the completion handler */
device->md_io.submit_jif = jiffies;
- if (drbd_insert_fault(device, (rw & WRITE) ? DRBD_FAULT_MD_WR : DRBD_FAULT_MD_RD))
+ if (drbd_insert_fault(device, (op == REQ_OP_WRITE) ? DRBD_FAULT_MD_WR : DRBD_FAULT_MD_RD))
bio_io_error(bio);
else
- submit_bio(rw, bio);
+ submit_bio(op | op_flags, bio);
wait_until_done_or_force_detached(device, bdev, &device->md_io.done);
if (!bio->bi_error)
err = device->md_io.error;
@@ -188,7 +188,7 @@ static int _drbd_md_sync_page_io(struct drbd_device *device,
}
int drbd_md_sync_page_io(struct drbd_device *device, struct drbd_backing_dev *bdev,
- sector_t sector, int rw)
+ sector_t sector, int op)
{
int err;
D_ASSERT(device, atomic_read(&device->md_io.in_use) == 1);
@@ -197,19 +197,21 @@ int drbd_md_sync_page_io(struct drbd_device *device, struct drbd_backing_dev *bd
dynamic_drbd_dbg(device, "meta_data io: %s [%d]:%s(,%llus,%s) %pS\n",
current->comm, current->pid, __func__,
- (unsigned long long)sector, (rw & WRITE) ? "WRITE" : "READ",
+ (unsigned long long)sector, (op == REQ_OP_WRITE) ? "WRITE" : "READ",
(void*)_RET_IP_ );
if (sector < drbd_md_first_sector(bdev) ||
sector + 7 > drbd_md_last_sector(bdev))
drbd_alert(device, "%s [%d]:%s(,%llus,%s) out of range md access!\n",
current->comm, current->pid, __func__,
- (unsigned long long)sector, (rw & WRITE) ? "WRITE" : "READ");
+ (unsigned long long)sector,
+ (op == REQ_OP_WRITE) ? "WRITE" : "READ");
- err = _drbd_md_sync_page_io(device, bdev, sector, rw);
+ err = _drbd_md_sync_page_io(device, bdev, sector, op);
if (err) {
drbd_err(device, "drbd_md_sync_page_io(,%llus,%s) failed with error %d\n",
- (unsigned long long)sector, (rw & WRITE) ? "WRITE" : "READ", err);
+ (unsigned long long)sector,
+ (op == REQ_OP_WRITE) ? "WRITE" : "READ", err);
}
return err;
}
diff --git a/drivers/block/drbd/drbd_bitmap.c b/drivers/block/drbd/drbd_bitmap.c
index e5e0f19..7ea1502 100644
--- a/drivers/block/drbd/drbd_bitmap.c
+++ b/drivers/block/drbd/drbd_bitmap.c
@@ -988,7 +988,7 @@ static void bm_page_io_async(struct drbd_bm_aio_ctx *ctx, int page_nr) __must_ho
struct drbd_bitmap *b = device->bitmap;
struct page *page;
unsigned int len;
- unsigned int rw = (ctx->flags & BM_AIO_READ) ? READ : WRITE;
+ unsigned int rw = (ctx->flags & BM_AIO_READ) ? REQ_OP_READ : REQ_OP_WRITE;
sector_t on_disk_sector =
device->ldev->md.md_offset + device->ldev->md.bm_offset;
@@ -1020,7 +1020,7 @@ static void bm_page_io_async(struct drbd_bm_aio_ctx *ctx, int page_nr) __must_ho
bio->bi_private = ctx;
bio->bi_end_io = drbd_bm_endio;
- if (drbd_insert_fault(device, (rw & WRITE) ? DRBD_FAULT_MD_WR : DRBD_FAULT_MD_RD)) {
+ if (drbd_insert_fault(device, (rw == REQ_OP_WRITE) ? DRBD_FAULT_MD_WR : DRBD_FAULT_MD_RD)) {
bio->bi_rw |= rw;
bio_io_error(bio);
} else {
diff --git a/drivers/block/drbd/drbd_int.h b/drivers/block/drbd/drbd_int.h
index 015c6e9..2934481 100644
--- a/drivers/block/drbd/drbd_int.h
+++ b/drivers/block/drbd/drbd_int.h
@@ -1494,7 +1494,7 @@ extern int drbd_resync_finished(struct drbd_device *device);
extern void *drbd_md_get_buffer(struct drbd_device *device, const char *intent);
extern void drbd_md_put_buffer(struct drbd_device *device);
extern int drbd_md_sync_page_io(struct drbd_device *device,
- struct drbd_backing_dev *bdev, sector_t sector, int rw);
+ struct drbd_backing_dev *bdev, sector_t sector, int op);
extern void drbd_ov_out_of_sync_found(struct drbd_device *, sector_t, int);
extern void wait_until_done_or_force_detached(struct drbd_device *device,
struct drbd_backing_dev *bdev, unsigned int *done);
diff --git a/drivers/block/drbd/drbd_main.c b/drivers/block/drbd/drbd_main.c
index 74d97f4..9eb8039 100644
--- a/drivers/block/drbd/drbd_main.c
+++ b/drivers/block/drbd/drbd_main.c
@@ -3080,7 +3080,7 @@ void drbd_md_write(struct drbd_device *device, void *b)
D_ASSERT(device, drbd_md_ss(device->ldev) == device->ldev->md.md_offset);
sector = device->ldev->md.md_offset;
- if (drbd_md_sync_page_io(device, device->ldev, sector, WRITE)) {
+ if (drbd_md_sync_page_io(device, device->ldev, sector, REQ_OP_WRITE)) {
/* this was a try anyways ... */
drbd_err(device, "meta data update failed!\n");
drbd_chk_io_error(device, 1, DRBD_META_IO_ERROR);
@@ -3278,7 +3278,8 @@ int drbd_md_read(struct drbd_device *device, struct drbd_backing_dev *bdev)
bdev->md.meta_dev_idx = bdev->disk_conf->meta_dev_idx;
bdev->md.md_offset = drbd_md_ss(bdev);
- if (drbd_md_sync_page_io(device, bdev, bdev->md.md_offset, READ)) {
+ if (drbd_md_sync_page_io(device, bdev, bdev->md.md_offset,
+ REQ_OP_READ)) {
/* NOTE: can't do normal error processing here as this is
called BEFORE disk is attached */
drbd_err(device, "Error while reading metadata.\n");
--
1.8.3.1
From: Mike Christie <[email protected]>
This patch prepares xen blkback's submit_bio use for the next
patches, which split bi_rw into an operation field and a flags field.
Instead of passing in a bitmap with both the operation and
flags mixed in, the callers will now pass them in separately.
This patch modifies the code related to the submit_bio calls
so the flags and operation are separated. When this is done
for all code, one of the later patches in the series will
modify the actual submit_bio call, so the patches are bisectable.
Signed-off-by: Mike Christie <[email protected]>
---
drivers/block/xen-blkback/blkback.c | 23 +++++++++++++----------
1 file changed, 13 insertions(+), 10 deletions(-)
diff --git a/drivers/block/xen-blkback/blkback.c b/drivers/block/xen-blkback/blkback.c
index 6a685ae..bfffab3 100644
--- a/drivers/block/xen-blkback/blkback.c
+++ b/drivers/block/xen-blkback/blkback.c
@@ -488,7 +488,7 @@ static int xen_vbd_translate(struct phys_req *req, struct xen_blkif *blkif,
struct xen_vbd *vbd = &blkif->vbd;
int rc = -EACCES;
- if ((operation != READ) && vbd->readonly)
+ if ((operation != REQ_OP_READ) && vbd->readonly)
goto out;
if (likely(req->nr_sects)) {
@@ -990,7 +990,7 @@ static int dispatch_discard_io(struct xen_blkif *blkif,
preq.sector_number = req->u.discard.sector_number;
preq.nr_sects = req->u.discard.nr_sectors;
- err = xen_vbd_translate(&preq, blkif, WRITE);
+ err = xen_vbd_translate(&preq, blkif, REQ_OP_WRITE);
if (err) {
pr_warn("access denied: DISCARD [%llu->%llu] on dev=%04x\n",
preq.sector_number,
@@ -1203,6 +1203,7 @@ static int dispatch_rw_block_io(struct xen_blkif *blkif,
struct bio **biolist = pending_req->biolist;
int i, nbio = 0;
int operation;
+ int operation_flags = 0;
struct blk_plug plug;
bool drain = false;
struct grant_page **pages = pending_req->segments;
@@ -1220,17 +1221,19 @@ static int dispatch_rw_block_io(struct xen_blkif *blkif,
switch (req_operation) {
case BLKIF_OP_READ:
blkif->st_rd_req++;
- operation = READ;
+ operation = REQ_OP_READ;
break;
case BLKIF_OP_WRITE:
blkif->st_wr_req++;
- operation = WRITE_ODIRECT;
+ operation = REQ_OP_WRITE;
+ operation_flags = WRITE_ODIRECT;
break;
case BLKIF_OP_WRITE_BARRIER:
drain = true;
case BLKIF_OP_FLUSH_DISKCACHE:
blkif->st_f_req++;
- operation = WRITE_FLUSH;
+ operation = REQ_OP_WRITE;
+ operation_flags = WRITE_FLUSH;
break;
default:
operation = 0; /* make gcc happy */
@@ -1242,7 +1245,7 @@ static int dispatch_rw_block_io(struct xen_blkif *blkif,
nseg = req->operation == BLKIF_OP_INDIRECT ?
req->u.indirect.nr_segments : req->u.rw.nr_segments;
- if (unlikely(nseg == 0 && operation != WRITE_FLUSH) ||
+ if (unlikely(nseg == 0 && operation_flags != WRITE_FLUSH) ||
unlikely((req->operation != BLKIF_OP_INDIRECT) &&
(nseg > BLKIF_MAX_SEGMENTS_PER_REQUEST)) ||
unlikely((req->operation == BLKIF_OP_INDIRECT) &&
@@ -1349,7 +1352,7 @@ static int dispatch_rw_block_io(struct xen_blkif *blkif,
/* This will be hit if the operation was a flush or discard. */
if (!bio) {
- BUG_ON(operation != WRITE_FLUSH);
+ BUG_ON(operation_flags != WRITE_FLUSH);
bio = bio_alloc(GFP_KERNEL, 0);
if (unlikely(bio == NULL))
@@ -1365,14 +1368,14 @@ static int dispatch_rw_block_io(struct xen_blkif *blkif,
blk_start_plug(&plug);
for (i = 0; i < nbio; i++)
- submit_bio(operation, biolist[i]);
+ submit_bio(operation | operation_flags, biolist[i]);
/* Let the I/Os go.. */
blk_finish_plug(&plug);
- if (operation == READ)
+ if (operation == REQ_OP_READ)
blkif->st_rd_sect += preq.nr_sects;
- else if (operation & WRITE)
+ else if (operation == REQ_OP_WRITE)
blkif->st_wr_sect += preq.nr_sects;
return 0;
--
1.8.3.1
From: Mike Christie <[email protected]>
This patch prepares dm's submit_bio use for the next
patches, which split bi_rw into an operation field and a flags field.
Instead of passing in a bitmap with both the operation and
flags mixed in, the callers will now pass them in separately.
This patch modifies the code related to the submit_bio calls
so the flags and operation are separated. When this is done
for all code, one of the later patches in the series will
modify the actual submit_bio call, so the patches are bisectable.
Signed-off-by: Mike Christie <[email protected]>
---
drivers/md/dm-bufio.c | 6 +++--
drivers/md/dm-io.c | 56 ++++++++++++++++++++++-------------------
drivers/md/dm-kcopyd.c | 3 ++-
drivers/md/dm-log.c | 5 ++--
drivers/md/dm-raid1.c | 11 +++++---
drivers/md/dm-snap-persistent.c | 24 ++++++++++--------
drivers/md/dm-thin.c | 6 ++---
include/linux/dm-io.h | 3 ++-
8 files changed, 64 insertions(+), 50 deletions(-)
diff --git a/drivers/md/dm-bufio.c b/drivers/md/dm-bufio.c
index 83cc52e..9d5ef0c 100644
--- a/drivers/md/dm-bufio.c
+++ b/drivers/md/dm-bufio.c
@@ -554,7 +554,8 @@ static void use_dmio(struct dm_buffer *b, int rw, sector_t block,
{
int r;
struct dm_io_request io_req = {
- .bi_rw = rw,
+ .bi_op = rw,
+ .bi_op_flags = 0,
.notify.fn = dmio_complete,
.notify.context = b,
.client = b->c->dm_io,
@@ -1302,7 +1303,8 @@ EXPORT_SYMBOL_GPL(dm_bufio_write_dirty_buffers);
int dm_bufio_issue_flush(struct dm_bufio_client *c)
{
struct dm_io_request io_req = {
- .bi_rw = WRITE_FLUSH,
+ .bi_op = REQ_OP_WRITE,
+ .bi_op_flags = WRITE_FLUSH,
.mem.type = DM_IO_KMEM,
.mem.ptr.addr = NULL,
.client = c->dm_io,
diff --git a/drivers/md/dm-io.c b/drivers/md/dm-io.c
index 6f8e83b2..6479096 100644
--- a/drivers/md/dm-io.c
+++ b/drivers/md/dm-io.c
@@ -279,8 +279,9 @@ static void km_dp_init(struct dpages *dp, void *data)
/*-----------------------------------------------------------------
* IO routines that accept a list of pages.
*---------------------------------------------------------------*/
-static void do_region(int rw, unsigned region, struct dm_io_region *where,
- struct dpages *dp, struct io *io)
+static void do_region(int op, int op_flags, unsigned region,
+ struct dm_io_region *where, struct dpages *dp,
+ struct io *io)
{
struct bio *bio;
struct page *page;
@@ -296,24 +297,25 @@ static void do_region(int rw, unsigned region, struct dm_io_region *where,
/*
* Reject unsupported discard and write same requests.
*/
- if (rw & REQ_DISCARD)
+ if (op == REQ_DISCARD)
special_cmd_max_sectors = q->limits.max_discard_sectors;
- else if (rw & REQ_WRITE_SAME)
+ else if (op == REQ_WRITE_SAME)
special_cmd_max_sectors = q->limits.max_write_same_sectors;
- if ((rw & (REQ_DISCARD | REQ_WRITE_SAME)) && special_cmd_max_sectors == 0) {
+ if ((op == REQ_DISCARD || op == REQ_WRITE_SAME) &&
+ special_cmd_max_sectors == 0) {
dec_count(io, region, -EOPNOTSUPP);
return;
}
/*
- * where->count may be zero if rw holds a flush and we need to
+ * where->count may be zero if op holds a flush and we need to
* send a zero-sized flush.
*/
do {
/*
* Allocate a suitably sized-bio.
*/
- if ((rw & REQ_DISCARD) || (rw & REQ_WRITE_SAME))
+ if ((op == REQ_DISCARD) || (op == REQ_WRITE_SAME))
num_bvecs = 1;
else
num_bvecs = min_t(int, BIO_MAX_PAGES,
@@ -325,11 +327,11 @@ static void do_region(int rw, unsigned region, struct dm_io_region *where,
bio->bi_end_io = endio;
store_io_and_region_in_bio(bio, io, region);
- if (rw & REQ_DISCARD) {
+ if (op == REQ_DISCARD) {
num_sectors = min_t(sector_t, special_cmd_max_sectors, remaining);
bio->bi_iter.bi_size = num_sectors << SECTOR_SHIFT;
remaining -= num_sectors;
- } else if (rw & REQ_WRITE_SAME) {
+ } else if (op == REQ_WRITE_SAME) {
/*
* WRITE SAME only uses a single page.
*/
@@ -356,11 +358,11 @@ static void do_region(int rw, unsigned region, struct dm_io_region *where,
}
atomic_inc(&io->count);
- submit_bio(rw, bio);
+ submit_bio(op | op_flags, bio);
} while (remaining);
}
-static void dispatch_io(int rw, unsigned int num_regions,
+static void dispatch_io(int op, int op_flags, unsigned int num_regions,
struct dm_io_region *where, struct dpages *dp,
struct io *io, int sync)
{
@@ -370,7 +372,7 @@ static void dispatch_io(int rw, unsigned int num_regions,
BUG_ON(num_regions > DM_IO_MAX_REGIONS);
if (sync)
- rw |= REQ_SYNC;
+ op_flags |= REQ_SYNC;
/*
* For multiple regions we need to be careful to rewind
@@ -378,8 +380,8 @@ static void dispatch_io(int rw, unsigned int num_regions,
*/
for (i = 0; i < num_regions; i++) {
*dp = old_pages;
- if (where[i].count || (rw & REQ_FLUSH))
- do_region(rw, i, where + i, dp, io);
+ if (where[i].count || (op_flags & REQ_FLUSH))
+ do_region(op, op_flags, i, where + i, dp, io);
}
/*
@@ -403,13 +405,13 @@ static void sync_io_complete(unsigned long error, void *context)
}
static int sync_io(struct dm_io_client *client, unsigned int num_regions,
- struct dm_io_region *where, int rw, struct dpages *dp,
- unsigned long *error_bits)
+ struct dm_io_region *where, int op, int op_flags,
+ struct dpages *dp, unsigned long *error_bits)
{
struct io *io;
struct sync_io sio;
- if (num_regions > 1 && (rw & RW_MASK) != WRITE) {
+ if (num_regions > 1 && op != WRITE) {
WARN_ON(1);
return -EIO;
}
@@ -426,7 +428,7 @@ static int sync_io(struct dm_io_client *client, unsigned int num_regions,
io->vma_invalidate_address = dp->vma_invalidate_address;
io->vma_invalidate_size = dp->vma_invalidate_size;
- dispatch_io(rw, num_regions, where, dp, io, 1);
+ dispatch_io(op, op_flags, num_regions, where, dp, io, 1);
wait_for_completion_io(&sio.wait);
@@ -437,12 +439,12 @@ static int sync_io(struct dm_io_client *client, unsigned int num_regions,
}
static int async_io(struct dm_io_client *client, unsigned int num_regions,
- struct dm_io_region *where, int rw, struct dpages *dp,
- io_notify_fn fn, void *context)
+ struct dm_io_region *where, int op, int op_flags,
+ struct dpages *dp, io_notify_fn fn, void *context)
{
struct io *io;
- if (num_regions > 1 && (rw & RW_MASK) != WRITE) {
+ if (num_regions > 1 && op != WRITE) {
WARN_ON(1);
fn(1, context);
return -EIO;
@@ -458,7 +460,7 @@ static int async_io(struct dm_io_client *client, unsigned int num_regions,
io->vma_invalidate_address = dp->vma_invalidate_address;
io->vma_invalidate_size = dp->vma_invalidate_size;
- dispatch_io(rw, num_regions, where, dp, io, 0);
+ dispatch_io(op, op_flags, num_regions, where, dp, io, 0);
return 0;
}
@@ -481,7 +483,7 @@ static int dp_init(struct dm_io_request *io_req, struct dpages *dp,
case DM_IO_VMA:
flush_kernel_vmap_range(io_req->mem.ptr.vma, size);
- if ((io_req->bi_rw & RW_MASK) == READ) {
+ if (io_req->bi_op == READ) {
dp->vma_invalidate_address = io_req->mem.ptr.vma;
dp->vma_invalidate_size = size;
}
@@ -519,10 +521,12 @@ int dm_io(struct dm_io_request *io_req, unsigned num_regions,
if (!io_req->notify.fn)
return sync_io(io_req->client, num_regions, where,
- io_req->bi_rw, &dp, sync_error_bits);
+ io_req->bi_op, io_req->bi_op_flags, &dp,
+ sync_error_bits);
- return async_io(io_req->client, num_regions, where, io_req->bi_rw,
- &dp, io_req->notify.fn, io_req->notify.context);
+ return async_io(io_req->client, num_regions, where, io_req->bi_op,
+ io_req->bi_op_flags, &dp, io_req->notify.fn,
+ io_req->notify.context);
}
EXPORT_SYMBOL(dm_io);
diff --git a/drivers/md/dm-kcopyd.c b/drivers/md/dm-kcopyd.c
index 3a7cade..02dccbe 100644
--- a/drivers/md/dm-kcopyd.c
+++ b/drivers/md/dm-kcopyd.c
@@ -496,7 +496,8 @@ static int run_io_job(struct kcopyd_job *job)
{
int r;
struct dm_io_request io_req = {
- .bi_rw = job->rw,
+ .bi_op = job->rw,
+ .bi_op_flags = 0,
.mem.type = DM_IO_PAGE_LIST,
.mem.ptr.pl = job->pages,
.mem.offset = 0,
diff --git a/drivers/md/dm-log.c b/drivers/md/dm-log.c
index 627d191..4ca2d1d 100644
--- a/drivers/md/dm-log.c
+++ b/drivers/md/dm-log.c
@@ -293,7 +293,7 @@ static void header_from_disk(struct log_header_core *core, struct log_header_dis
static int rw_header(struct log_c *lc, int rw)
{
- lc->io_req.bi_rw = rw;
+ lc->io_req.bi_op = rw;
return dm_io(&lc->io_req, 1, &lc->header_location, NULL);
}
@@ -306,7 +306,8 @@ static int flush_header(struct log_c *lc)
.count = 0,
};
- lc->io_req.bi_rw = WRITE_FLUSH;
+ lc->io_req.bi_op = REQ_OP_WRITE;
+ lc->io_req.bi_op_flags = WRITE_FLUSH;
return dm_io(&lc->io_req, 1, &null_location, NULL);
}
diff --git a/drivers/md/dm-raid1.c b/drivers/md/dm-raid1.c
index f2a363a..ec48487 100644
--- a/drivers/md/dm-raid1.c
+++ b/drivers/md/dm-raid1.c
@@ -260,7 +260,8 @@ static int mirror_flush(struct dm_target *ti)
struct dm_io_region io[ms->nr_mirrors];
struct mirror *m;
struct dm_io_request io_req = {
- .bi_rw = WRITE_FLUSH,
+ .bi_op = REQ_OP_WRITE,
+ .bi_op_flags = WRITE_FLUSH,
.mem.type = DM_IO_KMEM,
.mem.ptr.addr = NULL,
.client = ms->io_client,
@@ -541,7 +542,8 @@ static void read_async_bio(struct mirror *m, struct bio *bio)
{
struct dm_io_region io;
struct dm_io_request io_req = {
- .bi_rw = READ,
+ .bi_op = REQ_OP_READ,
+ .bi_op_flags = 0,
.mem.type = DM_IO_BIO,
.mem.ptr.bio = bio,
.notify.fn = read_callback,
@@ -654,7 +656,8 @@ static void do_write(struct mirror_set *ms, struct bio *bio)
struct dm_io_region io[ms->nr_mirrors], *dest = io;
struct mirror *m;
struct dm_io_request io_req = {
- .bi_rw = WRITE | (bio->bi_rw & WRITE_FLUSH_FUA),
+ .bi_op = REQ_OP_WRITE,
+ .bi_op_flags = bio->bi_rw & WRITE_FLUSH_FUA,
.mem.type = DM_IO_BIO,
.mem.ptr.bio = bio,
.notify.fn = write_callback,
@@ -663,7 +666,7 @@ static void do_write(struct mirror_set *ms, struct bio *bio)
};
if (bio->bi_rw & REQ_DISCARD) {
- io_req.bi_rw |= REQ_DISCARD;
+ io_req.bi_op_flags |= REQ_DISCARD;
io_req.mem.type = DM_IO_KMEM;
io_req.mem.ptr.addr = NULL;
}
diff --git a/drivers/md/dm-snap-persistent.c b/drivers/md/dm-snap-persistent.c
index 117a05e..1032fcb 100644
--- a/drivers/md/dm-snap-persistent.c
+++ b/drivers/md/dm-snap-persistent.c
@@ -226,8 +226,8 @@ static void do_metadata(struct work_struct *work)
/*
* Read or write a chunk aligned and sized block of data from a device.
*/
-static int chunk_io(struct pstore *ps, void *area, chunk_t chunk, int rw,
- int metadata)
+static int chunk_io(struct pstore *ps, void *area, chunk_t chunk, int op,
+ int op_flags, int metadata)
{
struct dm_io_region where = {
.bdev = dm_snap_cow(ps->store->snap)->bdev,
@@ -235,7 +235,8 @@ static int chunk_io(struct pstore *ps, void *area, chunk_t chunk, int rw,
.count = ps->store->chunk_size,
};
struct dm_io_request io_req = {
- .bi_rw = rw,
+ .bi_op = op,
+ .bi_op_flags = op_flags,
.mem.type = DM_IO_VMA,
.mem.ptr.vma = area,
.client = ps->io_client,
@@ -281,14 +282,14 @@ static void skip_metadata(struct pstore *ps)
* Read or write a metadata area. Remembering to skip the first
* chunk which holds the header.
*/
-static int area_io(struct pstore *ps, int rw)
+static int area_io(struct pstore *ps, int op, int op_flags)
{
int r;
chunk_t chunk;
chunk = area_location(ps, ps->current_area);
- r = chunk_io(ps, ps->area, chunk, rw, 0);
+ r = chunk_io(ps, ps->area, chunk, op, op_flags, 0);
if (r)
return r;
@@ -302,7 +303,8 @@ static void zero_memory_area(struct pstore *ps)
static int zero_disk_area(struct pstore *ps, chunk_t area)
{
- return chunk_io(ps, ps->zero_area, area_location(ps, area), WRITE, 0);
+ return chunk_io(ps, ps->zero_area, area_location(ps, area), WRITE, 0,
+ 0);
}
static int read_header(struct pstore *ps, int *new_snapshot)
@@ -334,7 +336,7 @@ static int read_header(struct pstore *ps, int *new_snapshot)
if (r)
return r;
- r = chunk_io(ps, ps->header_area, 0, READ, 1);
+ r = chunk_io(ps, ps->header_area, 0, READ, 0, 1);
if (r)
goto bad;
@@ -395,7 +397,7 @@ static int write_header(struct pstore *ps)
dh->version = cpu_to_le32(ps->version);
dh->chunk_size = cpu_to_le32(ps->store->chunk_size);
- return chunk_io(ps, ps->header_area, 0, WRITE, 1);
+ return chunk_io(ps, ps->header_area, 0, WRITE, 0, 1);
}
/*
@@ -736,7 +738,7 @@ static void persistent_commit_exception(struct dm_exception_store *store,
/*
* Commit exceptions to disk.
*/
- if (ps->valid && area_io(ps, WRITE_FLUSH_FUA))
+ if (ps->valid && area_io(ps, REQ_OP_WRITE, WRITE_FLUSH_FUA))
ps->valid = 0;
/*
@@ -776,7 +778,7 @@ static int persistent_prepare_merge(struct dm_exception_store *store,
return 0;
ps->current_area--;
- r = area_io(ps, READ);
+ r = area_io(ps, REQ_OP_READ, 0);
if (r < 0)
return r;
ps->current_committed = ps->exceptions_per_area;
@@ -813,7 +815,7 @@ static int persistent_commit_merge(struct dm_exception_store *store,
for (i = 0; i < nr_merged; i++)
clear_exception(ps, ps->current_committed - 1 - i);
- r = area_io(ps, WRITE_FLUSH_FUA);
+ r = area_io(ps, REQ_OP_WRITE, WRITE_FLUSH_FUA);
if (r < 0)
return r;
diff --git a/drivers/md/dm-thin.c b/drivers/md/dm-thin.c
index 3897b90..1500582 100644
--- a/drivers/md/dm-thin.c
+++ b/drivers/md/dm-thin.c
@@ -338,7 +338,7 @@ static int __blkdev_issue_discard_async(struct block_device *bdev, sector_t sect
struct bio *parent_bio)
{
struct request_queue *q = bdev_get_queue(bdev);
- int type = REQ_WRITE | REQ_DISCARD;
+ int op_flags = 0;
struct bio *bio;
if (!q || !nr_sects)
@@ -350,7 +350,7 @@ static int __blkdev_issue_discard_async(struct block_device *bdev, sector_t sect
if (flags & BLKDEV_DISCARD_SECURE) {
if (!blk_queue_secdiscard(q))
return -EOPNOTSUPP;
- type |= REQ_SECURE;
+ op_flags |= REQ_SECURE;
}
/*
@@ -366,7 +366,7 @@ static int __blkdev_issue_discard_async(struct block_device *bdev, sector_t sect
bio->bi_bdev = bdev;
bio->bi_iter.bi_size = nr_sects << 9;
- submit_bio(type, bio);
+ submit_bio(REQ_OP_DISCARD | op_flags, bio);
return 0;
}
diff --git a/include/linux/dm-io.h b/include/linux/dm-io.h
index a68cbe5..b91b023 100644
--- a/include/linux/dm-io.h
+++ b/include/linux/dm-io.h
@@ -57,7 +57,8 @@ struct dm_io_notify {
*/
struct dm_io_client;
struct dm_io_request {
- int bi_rw; /* READ|WRITE - not READA */
+ int bi_op; /* REQ_OP */
+ int bi_op_flags; /* rq_flag_bits */
struct dm_io_memory mem; /* Memory to use for io */
struct dm_io_notify notify; /* Synchronous if notify.fn is NULL */
struct dm_io_client *client; /* Client memory handler */
--
1.8.3.1
From: Mike Christie <[email protected]>
This patch prepares lio's submit_bio use for the next
patches, which split bi_rw into an operation field and a flags field.
Instead of passing in a bitmap with both the operation and
flags mixed in, the callers will now pass them in separately.
This patch modifies the code related to the submit_bio calls
so the flags and operation are separated. When this is done
for all code, one of the later patches in the series will
modify the actual submit_bio call, so the patches are bisectable.
Signed-off-by: Mike Christie <[email protected]>
---
drivers/target/target_core_iblock.c | 32 ++++++++++++++++++--------------
1 file changed, 18 insertions(+), 14 deletions(-)
diff --git a/drivers/target/target_core_iblock.c b/drivers/target/target_core_iblock.c
index 0f19e11..25f75ab 100644
--- a/drivers/target/target_core_iblock.c
+++ b/drivers/target/target_core_iblock.c
@@ -354,14 +354,14 @@ iblock_get_bio(struct se_cmd *cmd, sector_t lba, u32 sg_num)
return bio;
}
-static void iblock_submit_bios(struct bio_list *list, int rw)
+static void iblock_submit_bios(struct bio_list *list, int op, int op_flags)
{
struct blk_plug plug;
struct bio *bio;
blk_start_plug(&plug);
while ((bio = bio_list_pop(list)))
- submit_bio(rw, bio);
+ submit_bio(op | op_flags, bio);
blk_finish_plug(&plug);
}
@@ -480,7 +480,7 @@ iblock_execute_write_same(struct se_cmd *cmd)
sectors -= 1;
}
- iblock_submit_bios(&list, WRITE);
+ iblock_submit_bios(&list, REQ_OP_WRITE, 0);
return 0;
fail_put_bios:
@@ -653,7 +653,8 @@ iblock_execute_rw(struct se_cmd *cmd, struct scatterlist *sgl, u32 sgl_nents,
u32 sg_num = sgl_nents;
sector_t block_lba;
unsigned bio_cnt;
- int rw = 0;
+ int op_flags = 0;
+ int op = 0;
int i;
if (data_direction == DMA_TO_DEVICE) {
@@ -664,17 +665,20 @@ iblock_execute_rw(struct se_cmd *cmd, struct scatterlist *sgl, u32 sgl_nents,
* is not enabled, or if initiator set the Force Unit Access bit.
*/
if (q->flush_flags & REQ_FUA) {
- if (cmd->se_cmd_flags & SCF_FUA)
- rw = WRITE_FUA;
- else if (!(q->flush_flags & REQ_FLUSH))
- rw = WRITE_FUA;
- else
- rw = WRITE;
+ if (cmd->se_cmd_flags & SCF_FUA) {
+ op = REQ_OP_WRITE;
+ op_flags = WRITE_FUA;
+ } else if (!(q->flush_flags & REQ_FLUSH)) {
+ op = REQ_OP_WRITE;
+ op_flags = WRITE_FUA;
+ } else {
+ op = REQ_OP_WRITE;
+ }
} else {
- rw = WRITE;
+ op = REQ_OP_WRITE;
}
} else {
- rw = READ;
+ op = REQ_OP_READ;
}
/*
@@ -726,7 +730,7 @@ iblock_execute_rw(struct se_cmd *cmd, struct scatterlist *sgl, u32 sgl_nents,
while (bio_add_page(bio, sg_page(sg), sg->length, sg->offset)
!= sg->length) {
if (bio_cnt >= IBLOCK_MAX_BIO_PER_TASK) {
- iblock_submit_bios(&list, rw);
+ iblock_submit_bios(&list, op, op_flags);
bio_cnt = 0;
}
@@ -750,7 +754,7 @@ iblock_execute_rw(struct se_cmd *cmd, struct scatterlist *sgl, u32 sgl_nents,
goto fail_put_bios;
}
- iblock_submit_bios(&list, rw);
+ iblock_submit_bios(&list, op, op_flags);
iblock_complete_cmd(cmd);
return 0;
--
1.8.3.1
From: Mike Christie <[email protected]>
This patch prepares btrfs's submit_bio use for the next
patches, which split bi_rw into an operation field and a flags field.
Instead of passing in a bitmap with both the operation and
flags mixed in, the callers will now pass them in separately.
This patch modifies the code related to the submit_bio calls
so the flags and operation are separated. When this is done
for all code, one of the later patches in the series will
modify the actual submit_bio call, so the patches are bisectable.
Signed-off-by: Mike Christie <[email protected]>
---
fs/btrfs/check-integrity.c | 6 +--
fs/btrfs/check-integrity.h | 2 +-
fs/btrfs/compression.c | 8 ++--
fs/btrfs/ctree.h | 3 +-
fs/btrfs/disk-io.c | 48 +++++++++++----------
fs/btrfs/disk-io.h | 2 +-
fs/btrfs/extent-tree.c | 2 +-
fs/btrfs/extent_io.c | 103 ++++++++++++++++++++++++---------------------
fs/btrfs/extent_io.h | 7 +--
fs/btrfs/inode.c | 65 ++++++++++++++--------------
fs/btrfs/scrub.c | 4 +-
fs/btrfs/volumes.c | 88 ++++++++++++++++++++------------------
fs/btrfs/volumes.h | 4 +-
13 files changed, 177 insertions(+), 165 deletions(-)
diff --git a/fs/btrfs/check-integrity.c b/fs/btrfs/check-integrity.c
index fd50b2f..515a92e 100644
--- a/fs/btrfs/check-integrity.c
+++ b/fs/btrfs/check-integrity.c
@@ -3058,10 +3058,10 @@ leave:
mutex_unlock(&btrfsic_mutex);
}
-void btrfsic_submit_bio(int rw, struct bio *bio)
+void btrfsic_submit_bio(int op, int op_flags, struct bio *bio)
{
- __btrfsic_submit_bio(rw, bio);
- submit_bio(rw, bio);
+ __btrfsic_submit_bio(op | op_flags, bio);
+ submit_bio(op | op_flags, bio);
}
int btrfsic_submit_bio_wait(int op, int op_flags, struct bio *bio)
diff --git a/fs/btrfs/check-integrity.h b/fs/btrfs/check-integrity.h
index 13b0d54..a8edc424 100644
--- a/fs/btrfs/check-integrity.h
+++ b/fs/btrfs/check-integrity.h
@@ -21,7 +21,7 @@
#ifdef CONFIG_BTRFS_FS_CHECK_INTEGRITY
int btrfsic_submit_bh(int rw, struct buffer_head *bh);
-void btrfsic_submit_bio(int rw, struct bio *bio);
+void btrfsic_submit_bio(int op, int op_flags, struct bio *bio);
int btrfsic_submit_bio_wait(int op, int op_flags, struct bio *bio);
#else
#define btrfsic_submit_bh submit_bh
diff --git a/fs/btrfs/compression.c b/fs/btrfs/compression.c
index 57ee8ca..a7b245d 100644
--- a/fs/btrfs/compression.c
+++ b/fs/btrfs/compression.c
@@ -401,7 +401,7 @@ int btrfs_submit_compressed_write(struct inode *inode, u64 start,
BUG_ON(ret); /* -ENOMEM */
}
- ret = btrfs_map_bio(root, WRITE, bio, 0, 1);
+ ret = btrfs_map_bio(root, REQ_OP_WRITE, 0, bio, 0, 1);
BUG_ON(ret); /* -ENOMEM */
bio_put(bio);
@@ -431,7 +431,7 @@ int btrfs_submit_compressed_write(struct inode *inode, u64 start,
BUG_ON(ret); /* -ENOMEM */
}
- ret = btrfs_map_bio(root, WRITE, bio, 0, 1);
+ ret = btrfs_map_bio(root, REQ_OP_WRITE, 0, bio, 0, 1);
BUG_ON(ret); /* -ENOMEM */
bio_put(bio);
@@ -692,7 +692,7 @@ int btrfs_submit_compressed_read(struct inode *inode, struct bio *bio,
sums += DIV_ROUND_UP(comp_bio->bi_iter.bi_size,
root->sectorsize);
- ret = btrfs_map_bio(root, READ, comp_bio,
+ ret = btrfs_map_bio(root, REQ_OP_READ, 0, comp_bio,
mirror_num, 0);
if (ret) {
bio->bi_error = ret;
@@ -722,7 +722,7 @@ int btrfs_submit_compressed_read(struct inode *inode, struct bio *bio,
BUG_ON(ret); /* -ENOMEM */
}
- ret = btrfs_map_bio(root, READ, comp_bio, mirror_num, 0);
+ ret = btrfs_map_bio(root, REQ_OP_READ, 0, comp_bio, mirror_num, 0);
if (ret) {
bio->bi_error = ret;
bio_endio(comp_bio);
diff --git a/fs/btrfs/ctree.h b/fs/btrfs/ctree.h
index 938efe3..e4489dd1 100644
--- a/fs/btrfs/ctree.h
+++ b/fs/btrfs/ctree.h
@@ -3910,8 +3910,7 @@ int btrfs_create_subvol_root(struct btrfs_trans_handle *trans,
struct btrfs_root *parent_root,
u64 new_dirid);
int btrfs_merge_bio_hook(int rw, struct page *page, unsigned long offset,
- size_t size, struct bio *bio,
- unsigned long bio_flags);
+ size_t size, struct bio *bio, unsigned long bio_flags);
int btrfs_page_mkwrite(struct vm_area_struct *vma, struct vm_fault *vmf);
int btrfs_readpage(struct file *file, struct page *page);
void btrfs_evict_inode(struct inode *inode);
diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
index 1e60d00..6c17d5d 100644
--- a/fs/btrfs/disk-io.c
+++ b/fs/btrfs/disk-io.c
@@ -118,7 +118,8 @@ struct async_submit_bio {
struct list_head list;
extent_submit_bio_hook_t *submit_bio_start;
extent_submit_bio_hook_t *submit_bio_done;
- int rw;
+ int op;
+ int op_flags;
int mirror_num;
unsigned long bio_flags;
/*
@@ -783,9 +784,9 @@ static void run_one_async_start(struct btrfs_work *work)
int ret;
async = container_of(work, struct async_submit_bio, work);
- ret = async->submit_bio_start(async->inode, async->rw, async->bio,
- async->mirror_num, async->bio_flags,
- async->bio_offset);
+ ret = async->submit_bio_start(async->inode, async->op, async->op_flags,
+ async->bio, async->mirror_num,
+ async->bio_flags, async->bio_offset);
if (ret)
async->error = ret;
}
@@ -813,8 +814,8 @@ static void run_one_async_done(struct btrfs_work *work)
return;
}
- async->submit_bio_done(async->inode, async->rw, async->bio,
- async->mirror_num, async->bio_flags,
+ async->submit_bio_done(async->inode, async->op, async->op_flags,
+ async->bio, async->mirror_num, async->bio_flags,
async->bio_offset);
}
@@ -827,7 +828,7 @@ static void run_one_async_free(struct btrfs_work *work)
}
int btrfs_wq_submit_bio(struct btrfs_fs_info *fs_info, struct inode *inode,
- int rw, struct bio *bio, int mirror_num,
+ int op, int op_flags, struct bio *bio, int mirror_num,
unsigned long bio_flags,
u64 bio_offset,
extent_submit_bio_hook_t *submit_bio_start,
@@ -840,7 +841,8 @@ int btrfs_wq_submit_bio(struct btrfs_fs_info *fs_info, struct inode *inode,
return -ENOMEM;
async->inode = inode;
- async->rw = rw;
+ async->op = op;
+ async->op_flags = op_flags;
async->bio = bio;
async->mirror_num = mirror_num;
async->submit_bio_start = submit_bio_start;
@@ -856,7 +858,7 @@ int btrfs_wq_submit_bio(struct btrfs_fs_info *fs_info, struct inode *inode,
atomic_inc(&fs_info->nr_async_submits);
- if (rw & REQ_SYNC)
+ if (op_flags & REQ_SYNC)
btrfs_set_work_high_priority(&async->work);
btrfs_queue_work(fs_info->workers, &async->work);
@@ -886,7 +888,7 @@ static int btree_csum_one_bio(struct bio *bio)
return ret;
}
-static int __btree_submit_bio_start(struct inode *inode, int rw,
+static int __btree_submit_bio_start(struct inode *inode, int op, int op_flags,
struct bio *bio, int mirror_num,
unsigned long bio_flags,
u64 bio_offset)
@@ -898,9 +900,9 @@ static int __btree_submit_bio_start(struct inode *inode, int rw,
return btree_csum_one_bio(bio);
}
-static int __btree_submit_bio_done(struct inode *inode, int rw, struct bio *bio,
- int mirror_num, unsigned long bio_flags,
- u64 bio_offset)
+static int __btree_submit_bio_done(struct inode *inode, int op, int op_flags,
+ struct bio *bio, int mirror_num,
+ unsigned long bio_flags, u64 bio_offset)
{
int ret;
@@ -908,7 +910,7 @@ static int __btree_submit_bio_done(struct inode *inode, int rw, struct bio *bio,
* when we're called for a write, we're already in the async
* submission context. Just jump into btrfs_map_bio
*/
- ret = btrfs_map_bio(BTRFS_I(inode)->root, rw, bio, mirror_num, 1);
+ ret = btrfs_map_bio(BTRFS_I(inode)->root, op, op_flags, bio, mirror_num, 1);
if (ret) {
bio->bi_error = ret;
bio_endio(bio);
@@ -927,14 +929,14 @@ static int check_async_write(struct inode *inode, unsigned long bio_flags)
return 1;
}
-static int btree_submit_bio_hook(struct inode *inode, int rw, struct bio *bio,
- int mirror_num, unsigned long bio_flags,
- u64 bio_offset)
+static int btree_submit_bio_hook(struct inode *inode, int op, int op_flags,
+ struct bio *bio, int mirror_num,
+ unsigned long bio_flags, u64 bio_offset)
{
int async = check_async_write(inode, bio_flags);
int ret;
- if (!(rw & REQ_WRITE)) {
+ if (op != REQ_OP_WRITE) {
/*
* called for a read, do the setup so that checksum validation
* can happen in the async kernel threads
@@ -943,13 +945,13 @@ static int btree_submit_bio_hook(struct inode *inode, int rw, struct bio *bio,
bio, BTRFS_WQ_ENDIO_METADATA);
if (ret)
goto out_w_error;
- ret = btrfs_map_bio(BTRFS_I(inode)->root, rw, bio,
+ ret = btrfs_map_bio(BTRFS_I(inode)->root, op, op_flags, bio,
mirror_num, 0);
} else if (!async) {
ret = btree_csum_one_bio(bio);
if (ret)
goto out_w_error;
- ret = btrfs_map_bio(BTRFS_I(inode)->root, rw, bio,
+ ret = btrfs_map_bio(BTRFS_I(inode)->root, op, op_flags, bio,
mirror_num, 0);
} else {
/*
@@ -957,8 +959,8 @@ static int btree_submit_bio_hook(struct inode *inode, int rw, struct bio *bio,
* checksumming can happen in parallel across all CPUs
*/
ret = btrfs_wq_submit_bio(BTRFS_I(inode)->root->fs_info,
- inode, rw, bio, mirror_num, 0,
- bio_offset,
+ inode, op, op_flags, bio, mirror_num,
+ 0, bio_offset,
__btree_submit_bio_start,
__btree_submit_bio_done);
}
@@ -3392,7 +3394,7 @@ static int write_dev_flush(struct btrfs_device *device, int wait)
device->flush_bio = bio;
bio_get(bio);
- btrfsic_submit_bio(WRITE_FLUSH, bio);
+ btrfsic_submit_bio(REQ_OP_WRITE, WRITE_FLUSH, bio);
return 0;
}
diff --git a/fs/btrfs/disk-io.h b/fs/btrfs/disk-io.h
index bdfb479..caa47a3 100644
--- a/fs/btrfs/disk-io.h
+++ b/fs/btrfs/disk-io.h
@@ -121,7 +121,7 @@ void btrfs_csum_final(u32 crc, char *result);
int btrfs_bio_wq_end_io(struct btrfs_fs_info *info, struct bio *bio,
enum btrfs_wq_endio_type metadata);
int btrfs_wq_submit_bio(struct btrfs_fs_info *fs_info, struct inode *inode,
- int rw, struct bio *bio, int mirror_num,
+ int op, int op_flags, struct bio *bio, int mirror_num,
unsigned long bio_flags, u64 bio_offset,
extent_submit_bio_hook_t *submit_bio_start,
extent_submit_bio_hook_t *submit_bio_done);
diff --git a/fs/btrfs/extent-tree.c b/fs/btrfs/extent-tree.c
index 601d7d4..d28d4b2 100644
--- a/fs/btrfs/extent-tree.c
+++ b/fs/btrfs/extent-tree.c
@@ -1964,7 +1964,7 @@ int btrfs_discard_extent(struct btrfs_root *root, u64 bytenr,
/* Tell the block device(s) that the sectors can be discarded */
- ret = btrfs_map_block(root->fs_info, REQ_DISCARD,
+ ret = btrfs_map_block(root->fs_info, REQ_OP_DISCARD,
bytenr, &num_bytes, &bbio, 0);
/* Error condition is -ENOMEM */
if (!ret) {
diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
index 9a68a83..0dc9ec6 100644
--- a/fs/btrfs/extent_io.c
+++ b/fs/btrfs/extent_io.c
@@ -2407,7 +2407,7 @@ static int bio_readpage_error(struct bio *failed_bio, u64 phy_offset,
struct inode *inode = page->mapping->host;
struct extent_io_tree *tree = &BTRFS_I(inode)->io_tree;
struct bio *bio;
- int read_mode;
+ int read_mode_flags = READ_SYNC;
int ret;
BUG_ON(failed_bio->bi_rw & REQ_WRITE);
@@ -2423,9 +2423,7 @@ static int bio_readpage_error(struct bio *failed_bio, u64 phy_offset,
}
if (failed_bio->bi_vcnt > 1)
- read_mode = READ_SYNC | REQ_FAILFAST_DEV;
- else
- read_mode = READ_SYNC;
+ read_mode_flags |= REQ_FAILFAST_DEV;
phy_offset >>= inode->i_sb->s_blocksize_bits;
bio = btrfs_create_repair_bio(inode, failed_bio, failrec, page,
@@ -2438,10 +2436,10 @@ static int bio_readpage_error(struct bio *failed_bio, u64 phy_offset,
}
pr_debug("Repair Read Error: submitting new read[%#x] to this_mirror=%d, in_validation=%d\n",
- read_mode, failrec->this_mirror, failrec->in_validation);
+ read_mode_flags, failrec->this_mirror, failrec->in_validation);
- ret = tree->ops->submit_bio_hook(inode, read_mode, bio,
- failrec->this_mirror,
+ ret = tree->ops->submit_bio_hook(inode, REQ_OP_READ, read_mode_flags,
+ bio, failrec->this_mirror,
failrec->bio_flags, 0);
if (ret) {
free_io_failure(inode, failrec);
@@ -2750,7 +2748,7 @@ struct bio *btrfs_io_bio_alloc(gfp_t gfp_mask, unsigned int nr_iovecs)
}
-static int __must_check submit_one_bio(int rw, struct bio *bio,
+static int __must_check submit_one_bio(int op, int op_flags, struct bio *bio,
int mirror_num, unsigned long bio_flags)
{
int ret = 0;
@@ -2766,18 +2764,19 @@ static int __must_check submit_one_bio(int rw, struct bio *bio,
bio_get(bio);
if (tree->ops && tree->ops->submit_bio_hook)
- ret = tree->ops->submit_bio_hook(page->mapping->host, rw, bio,
- mirror_num, bio_flags, start);
+ ret = tree->ops->submit_bio_hook(page->mapping->host, op,
+ op_flags, bio, mirror_num,
+ bio_flags, start);
else
- btrfsic_submit_bio(rw, bio);
+ btrfsic_submit_bio(op, op_flags, bio);
bio_put(bio);
return ret;
}
-static int merge_bio(int rw, struct extent_io_tree *tree, struct page *page,
- unsigned long offset, size_t size, struct bio *bio,
- unsigned long bio_flags)
+static int merge_bio(int rw, struct extent_io_tree *tree,
+ struct page *page, unsigned long offset, size_t size,
+ struct bio *bio, unsigned long bio_flags)
{
int ret = 0;
if (tree->ops && tree->ops->merge_bio_hook)
@@ -2788,7 +2787,7 @@ static int merge_bio(int rw, struct extent_io_tree *tree, struct page *page,
}
-static int submit_extent_page(int rw, struct extent_io_tree *tree,
+static int submit_extent_page(int op, int op_flags, struct extent_io_tree *tree,
struct writeback_control *wbc,
struct page *page, sector_t sector,
size_t size, unsigned long offset,
@@ -2816,9 +2815,10 @@ static int submit_extent_page(int rw, struct extent_io_tree *tree,
if (prev_bio_flags != bio_flags || !contig ||
force_bio_submit ||
- merge_bio(rw, tree, page, offset, page_size, bio, bio_flags) ||
+ merge_bio(op, tree, page, offset, page_size, bio,
+ bio_flags) ||
bio_add_page(bio, page, page_size, offset) < page_size) {
- ret = submit_one_bio(rw, bio, mirror_num,
+ ret = submit_one_bio(op, op_flags, bio, mirror_num,
prev_bio_flags);
if (ret < 0) {
*bio_ret = NULL;
@@ -2848,7 +2848,7 @@ static int submit_extent_page(int rw, struct extent_io_tree *tree,
if (bio_ret)
*bio_ret = bio;
else
- ret = submit_one_bio(rw, bio, mirror_num, bio_flags);
+ ret = submit_one_bio(op, op_flags, bio, mirror_num, bio_flags);
return ret;
}
@@ -2912,7 +2912,7 @@ static int __do_readpage(struct extent_io_tree *tree,
get_extent_t *get_extent,
struct extent_map **em_cached,
struct bio **bio, int mirror_num,
- unsigned long *bio_flags, int rw,
+ unsigned long *bio_flags, int op, int op_flags,
u64 *prev_em_start)
{
struct inode *inode = page->mapping->host;
@@ -3099,7 +3099,7 @@ static int __do_readpage(struct extent_io_tree *tree,
}
pnr -= page->index;
- ret = submit_extent_page(rw, tree, NULL, page,
+ ret = submit_extent_page(op, op_flags, tree, NULL, page,
sector, disk_io_size, pg_offset,
bdev, bio, pnr,
end_bio_extent_readpage, mirror_num,
@@ -3132,7 +3132,7 @@ static inline void __do_contiguous_readpages(struct extent_io_tree *tree,
get_extent_t *get_extent,
struct extent_map **em_cached,
struct bio **bio, int mirror_num,
- unsigned long *bio_flags, int rw,
+ unsigned long *bio_flags,
u64 *prev_em_start)
{
struct inode *inode;
@@ -3153,7 +3153,8 @@ static inline void __do_contiguous_readpages(struct extent_io_tree *tree,
for (index = 0; index < nr_pages; index++) {
__do_readpage(tree, pages[index], get_extent, em_cached, bio,
- mirror_num, bio_flags, rw, prev_em_start);
+ mirror_num, bio_flags, REQ_OP_READ, 0,
+ prev_em_start);
page_cache_release(pages[index]);
}
}
@@ -3163,7 +3164,7 @@ static void __extent_readpages(struct extent_io_tree *tree,
int nr_pages, get_extent_t *get_extent,
struct extent_map **em_cached,
struct bio **bio, int mirror_num,
- unsigned long *bio_flags, int rw,
+ unsigned long *bio_flags,
u64 *prev_em_start)
{
u64 start = 0;
@@ -3185,7 +3186,7 @@ static void __extent_readpages(struct extent_io_tree *tree,
index - first_index, start,
end, get_extent, em_cached,
bio, mirror_num, bio_flags,
- rw, prev_em_start);
+ prev_em_start);
start = page_start;
end = start + PAGE_CACHE_SIZE - 1;
first_index = index;
@@ -3196,7 +3197,7 @@ static void __extent_readpages(struct extent_io_tree *tree,
__do_contiguous_readpages(tree, &pages[first_index],
index - first_index, start,
end, get_extent, em_cached, bio,
- mirror_num, bio_flags, rw,
+ mirror_num, bio_flags,
prev_em_start);
}
@@ -3204,7 +3205,8 @@ static int __extent_read_full_page(struct extent_io_tree *tree,
struct page *page,
get_extent_t *get_extent,
struct bio **bio, int mirror_num,
- unsigned long *bio_flags, int rw)
+ unsigned long *bio_flags, int op,
+ int op_flags)
{
struct inode *inode = page->mapping->host;
struct btrfs_ordered_extent *ordered;
@@ -3223,7 +3225,7 @@ static int __extent_read_full_page(struct extent_io_tree *tree,
}
ret = __do_readpage(tree, page, get_extent, NULL, bio, mirror_num,
- bio_flags, rw, NULL);
+ bio_flags, op, op_flags, NULL);
return ret;
}
@@ -3235,9 +3237,10 @@ int extent_read_full_page(struct extent_io_tree *tree, struct page *page,
int ret;
ret = __extent_read_full_page(tree, page, get_extent, &bio, mirror_num,
- &bio_flags, READ);
+ &bio_flags, REQ_OP_READ, 0);
if (bio)
- ret = submit_one_bio(READ, bio, mirror_num, bio_flags);
+ ret = submit_one_bio(REQ_OP_READ, 0, bio, mirror_num,
+ bio_flags);
return ret;
}
@@ -3249,9 +3252,10 @@ int extent_read_full_page_nolock(struct extent_io_tree *tree, struct page *page,
int ret;
ret = __do_readpage(tree, page, get_extent, NULL, &bio, mirror_num,
- &bio_flags, READ, NULL);
+ &bio_flags, REQ_OP_READ, 0, NULL);
if (bio)
- ret = submit_one_bio(READ, bio, mirror_num, bio_flags);
+ ret = submit_one_bio(REQ_OP_READ, 0, bio, mirror_num,
+ bio_flags);
return ret;
}
@@ -3498,9 +3502,10 @@ static noinline_for_stack int __extent_writepage_io(struct inode *inode,
page->index, cur, end);
}
- ret = submit_extent_page(write_flags, tree, wbc, page,
- sector, iosize, pg_offset,
- bdev, &epd->bio, max_nr,
+ ret = submit_extent_page(REQ_OP_WRITE, write_flags,
+ tree, wbc, page, sector,
+ iosize, pg_offset, bdev,
+ &epd->bio, max_nr,
end_bio_extent_writepage,
0, 0, 0, false);
if (ret)
@@ -3538,13 +3543,11 @@ static int __extent_writepage(struct page *page, struct writeback_control *wbc,
size_t pg_offset = 0;
loff_t i_size = i_size_read(inode);
unsigned long end_index = i_size >> PAGE_CACHE_SHIFT;
- int write_flags;
+ int write_flags = 0;
unsigned long nr_written = 0;
if (wbc->sync_mode == WB_SYNC_ALL)
write_flags = WRITE_SYNC;
- else
- write_flags = WRITE;
trace___extent_writepage(page, inode, wbc);
@@ -3788,7 +3791,7 @@ static noinline_for_stack int write_one_eb(struct extent_buffer *eb,
u64 offset = eb->start;
unsigned long i, num_pages;
unsigned long bio_flags = 0;
- int rw = (epd->sync_io ? WRITE_SYNC : WRITE) | REQ_META;
+ int op_flags = (epd->sync_io ? WRITE_SYNC : 0) | REQ_META;
int ret = 0;
clear_bit(EXTENT_BUFFER_WRITE_ERR, &eb->bflags);
@@ -3802,9 +3805,10 @@ static noinline_for_stack int write_one_eb(struct extent_buffer *eb,
clear_page_dirty_for_io(p);
set_page_writeback(p);
- ret = submit_extent_page(rw, tree, wbc, p, offset >> 9,
- PAGE_CACHE_SIZE, 0, bdev, &epd->bio,
- -1, end_bio_extent_buffer_writepage,
+ ret = submit_extent_page(REQ_OP_WRITE, op_flags, tree, wbc, p,
+ offset >> 9, PAGE_CACHE_SIZE, 0, bdev,
+ &epd->bio, -1,
+ end_bio_extent_buffer_writepage,
0, epd->bio_flags, bio_flags, false);
epd->bio_flags = bio_flags;
if (ret) {
@@ -4093,13 +4097,14 @@ retry:
static void flush_epd_write_bio(struct extent_page_data *epd)
{
if (epd->bio) {
- int rw = WRITE;
+ int op_flags = 0;
int ret;
if (epd->sync_io)
- rw = WRITE_SYNC;
+ op_flags = WRITE_SYNC;
- ret = submit_one_bio(rw, epd->bio, 0, epd->bio_flags);
+ ret = submit_one_bio(REQ_OP_WRITE, op_flags, epd->bio, 0,
+ epd->bio_flags);
BUG_ON(ret < 0); /* -ENOMEM */
epd->bio = NULL;
}
@@ -4226,19 +4231,19 @@ int extent_readpages(struct extent_io_tree *tree,
if (nr < ARRAY_SIZE(pagepool))
continue;
__extent_readpages(tree, pagepool, nr, get_extent, &em_cached,
- &bio, 0, &bio_flags, READ, &prev_em_start);
+ &bio, 0, &bio_flags, &prev_em_start);
nr = 0;
}
if (nr)
__extent_readpages(tree, pagepool, nr, get_extent, &em_cached,
- &bio, 0, &bio_flags, READ, &prev_em_start);
+ &bio, 0, &bio_flags, &prev_em_start);
if (em_cached)
free_extent_map(em_cached);
BUG_ON(!list_empty(pages));
if (bio)
- return submit_one_bio(READ, bio, 0, bio_flags);
+ return submit_one_bio(REQ_OP_READ, 0, bio, 0, bio_flags);
return 0;
}
@@ -5254,7 +5259,7 @@ int read_extent_buffer_pages(struct extent_io_tree *tree,
err = __extent_read_full_page(tree, page,
get_extent, &bio,
mirror_num, &bio_flags,
- READ | REQ_META);
+ REQ_OP_READ, REQ_META);
if (err)
ret = err;
} else {
@@ -5263,7 +5268,7 @@ int read_extent_buffer_pages(struct extent_io_tree *tree,
}
if (bio) {
- err = submit_one_bio(READ | REQ_META, bio, mirror_num,
+ err = submit_one_bio(REQ_OP_READ, REQ_META, bio, mirror_num,
bio_flags);
if (err)
return err;
diff --git a/fs/btrfs/extent_io.h b/fs/btrfs/extent_io.h
index c668f36..28d8929 100644
--- a/fs/btrfs/extent_io.h
+++ b/fs/btrfs/extent_io.h
@@ -61,9 +61,10 @@ struct extent_state;
struct btrfs_root;
struct btrfs_io_bio;
-typedef int (extent_submit_bio_hook_t)(struct inode *inode, int rw,
- struct bio *bio, int mirror_num,
- unsigned long bio_flags, u64 bio_offset);
+typedef int (extent_submit_bio_hook_t)(struct inode *inode, int op,
+ int op_flags, struct bio *bio,
+ int mirror_num, unsigned long bio_flags,
+ u64 bio_offset);
struct extent_io_ops {
int (*fill_delalloc)(struct inode *inode, struct page *locked_page,
u64 start, u64 end, int *page_started,
diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
index 0ad8bab..dd0b769 100644
--- a/fs/btrfs/inode.c
+++ b/fs/btrfs/inode.c
@@ -1785,8 +1785,7 @@ static void btrfs_clear_bit_hook(struct inode *inode,
* we don't create bios that span stripes or chunks
*/
int btrfs_merge_bio_hook(int rw, struct page *page, unsigned long offset,
- size_t size, struct bio *bio,
- unsigned long bio_flags)
+ size_t size, struct bio *bio, unsigned long bio_flags)
{
struct btrfs_root *root = BTRFS_I(page->mapping->host)->root;
u64 logical = (u64)bio->bi_iter.bi_sector << 9;
@@ -1816,7 +1815,7 @@ int btrfs_merge_bio_hook(int rw, struct page *page, unsigned long offset,
* At IO completion time the cums attached on the ordered extent record
* are inserted into the btree
*/
-static int __btrfs_submit_bio_start(struct inode *inode, int rw,
+static int __btrfs_submit_bio_start(struct inode *inode, int op, int op_flags,
struct bio *bio, int mirror_num,
unsigned long bio_flags,
u64 bio_offset)
@@ -1837,14 +1836,14 @@ static int __btrfs_submit_bio_start(struct inode *inode, int rw,
* At IO completion time the cums attached on the ordered extent record
* are inserted into the btree
*/
-static int __btrfs_submit_bio_done(struct inode *inode, int rw, struct bio *bio,
- int mirror_num, unsigned long bio_flags,
- u64 bio_offset)
+static int __btrfs_submit_bio_done(struct inode *inode, int op, int op_flags,
+ struct bio *bio, int mirror_num,
+ unsigned long bio_flags, u64 bio_offset)
{
struct btrfs_root *root = BTRFS_I(inode)->root;
int ret;
- ret = btrfs_map_bio(root, rw, bio, mirror_num, 1);
+ ret = btrfs_map_bio(root, op, op_flags, bio, mirror_num, 1);
if (ret) {
bio->bi_error = ret;
bio_endio(bio);
@@ -1856,9 +1855,9 @@ static int __btrfs_submit_bio_done(struct inode *inode, int rw, struct bio *bio,
* extent_io.c submission hook. This does the right thing for csum calculation
* on write, or reading the csums from the tree before a read
*/
-static int btrfs_submit_bio_hook(struct inode *inode, int rw, struct bio *bio,
- int mirror_num, unsigned long bio_flags,
- u64 bio_offset)
+static int btrfs_submit_bio_hook(struct inode *inode, int op, int op_flags,
+ struct bio *bio, int mirror_num,
+ unsigned long bio_flags, u64 bio_offset)
{
struct btrfs_root *root = BTRFS_I(inode)->root;
int ret = 0;
@@ -1871,7 +1870,7 @@ static int btrfs_submit_bio_hook(struct inode *inode, int rw, struct bio *bio,
if (btrfs_is_free_space_inode(inode))
metadata = 2;
- if (!(rw & REQ_WRITE)) {
+ if (op != REQ_OP_WRITE) {
ret = btrfs_bio_wq_end_io(root->fs_info, bio, metadata);
if (ret)
goto out;
@@ -1893,7 +1892,7 @@ static int btrfs_submit_bio_hook(struct inode *inode, int rw, struct bio *bio,
goto mapit;
/* we're doing a write, do the async checksumming */
ret = btrfs_wq_submit_bio(BTRFS_I(inode)->root->fs_info,
- inode, rw, bio, mirror_num,
+ inode, op, op_flags, bio, mirror_num,
bio_flags, bio_offset,
__btrfs_submit_bio_start,
__btrfs_submit_bio_done);
@@ -1905,7 +1904,7 @@ static int btrfs_submit_bio_hook(struct inode *inode, int rw, struct bio *bio,
}
mapit:
- ret = btrfs_map_bio(root, rw, bio, mirror_num, 0);
+ ret = btrfs_map_bio(root, op, op_flags, bio, mirror_num, 0);
out:
if (ret < 0) {
@@ -7613,13 +7612,11 @@ unlock_err:
}
static inline int submit_dio_repair_bio(struct inode *inode, struct bio *bio,
- int rw, int mirror_num)
+ int op_flags, int mirror_num)
{
struct btrfs_root *root = BTRFS_I(inode)->root;
int ret;
- BUG_ON(rw & REQ_WRITE);
-
bio_get(bio);
ret = btrfs_bio_wq_end_io(root->fs_info, bio,
@@ -7627,7 +7624,7 @@ static inline int submit_dio_repair_bio(struct inode *inode, struct bio *bio,
if (ret)
goto err;
- ret = btrfs_map_bio(root, rw, bio, mirror_num, 0);
+ ret = btrfs_map_bio(root, REQ_OP_READ, op_flags, bio, mirror_num, 0);
err:
bio_put(bio);
return ret;
@@ -7939,9 +7936,11 @@ out_test:
bio_put(bio);
}
-static int __btrfs_submit_bio_start_direct_io(struct inode *inode, int rw,
- struct bio *bio, int mirror_num,
- unsigned long bio_flags, u64 offset)
+static int __btrfs_submit_bio_start_direct_io(struct inode *inode, int op,
+ int op_flags, struct bio *bio,
+ int mirror_num,
+ unsigned long bio_flags,
+ u64 offset)
{
int ret;
struct btrfs_root *root = BTRFS_I(inode)->root;
@@ -8032,11 +8031,11 @@ static inline int btrfs_lookup_and_bind_dio_csum(struct btrfs_root *root,
}
static inline int __btrfs_submit_dio_bio(struct bio *bio, struct inode *inode,
- int rw, u64 file_offset, int skip_sum,
- int async_submit)
+ int op, int op_flags, u64 file_offset,
+ int skip_sum, int async_submit)
{
struct btrfs_dio_private *dip = bio->bi_private;
- int write = rw & REQ_WRITE;
+ bool write = (op == REQ_OP_WRITE);
struct btrfs_root *root = BTRFS_I(inode)->root;
int ret;
@@ -8057,7 +8056,7 @@ static inline int __btrfs_submit_dio_bio(struct bio *bio, struct inode *inode,
if (write && async_submit) {
ret = btrfs_wq_submit_bio(root->fs_info,
- inode, rw, bio, 0, 0,
+ inode, op, op_flags, bio, 0, 0,
file_offset,
__btrfs_submit_bio_start_direct_io,
__btrfs_submit_bio_done);
@@ -8077,14 +8076,14 @@ static inline int __btrfs_submit_dio_bio(struct bio *bio, struct inode *inode,
goto err;
}
map:
- ret = btrfs_map_bio(root, rw, bio, 0, async_submit);
+ ret = btrfs_map_bio(root, op, op_flags, bio, 0, async_submit);
err:
bio_put(bio);
return ret;
}
-static int btrfs_submit_direct_hook(int rw, struct btrfs_dio_private *dip,
- int skip_sum)
+static int btrfs_submit_direct_hook(int op, int op_flags,
+ struct btrfs_dio_private *dip, int skip_sum)
{
struct inode *inode = dip->inode;
struct btrfs_root *root = BTRFS_I(inode)->root;
@@ -8100,7 +8099,7 @@ static int btrfs_submit_direct_hook(int rw, struct btrfs_dio_private *dip,
int async_submit = 0;
map_length = orig_bio->bi_iter.bi_size;
- ret = btrfs_map_block(root->fs_info, rw, start_sector << 9,
+ ret = btrfs_map_block(root->fs_info, op, start_sector << 9,
&map_length, NULL, 0);
if (ret)
return -EIO;
@@ -8137,7 +8136,7 @@ static int btrfs_submit_direct_hook(int rw, struct btrfs_dio_private *dip,
* before we're done setting it up
*/
atomic_inc(&dip->pending_bios);
- ret = __btrfs_submit_dio_bio(bio, inode, rw,
+ ret = __btrfs_submit_dio_bio(bio, inode, op, op_flags,
file_offset, skip_sum,
async_submit);
if (ret) {
@@ -8161,7 +8160,7 @@ static int btrfs_submit_direct_hook(int rw, struct btrfs_dio_private *dip,
btrfs_io_bio(bio)->logical = file_offset;
map_length = orig_bio->bi_iter.bi_size;
- ret = btrfs_map_block(root->fs_info, rw,
+ ret = btrfs_map_block(root->fs_info, op,
start_sector << 9,
&map_length, NULL, 0);
if (ret) {
@@ -8176,8 +8175,8 @@ static int btrfs_submit_direct_hook(int rw, struct btrfs_dio_private *dip,
}
submit:
- ret = __btrfs_submit_dio_bio(bio, inode, rw, file_offset, skip_sum,
- async_submit);
+ ret = __btrfs_submit_dio_bio(bio, inode, op, op_flags, file_offset,
+ skip_sum, async_submit);
if (!ret)
return 0;
@@ -8238,7 +8237,7 @@ static void btrfs_submit_direct(int op, int op_flags, struct bio *dio_bio,
dip->subio_endio = btrfs_subio_endio_read;
}
- ret = btrfs_submit_direct_hook(op | op_flags, dip, skip_sum);
+ ret = btrfs_submit_direct_hook(op, op_flags, dip, skip_sum);
if (!ret)
return;
diff --git a/fs/btrfs/scrub.c b/fs/btrfs/scrub.c
index 9cb6f9a..8b72207 100644
--- a/fs/btrfs/scrub.c
+++ b/fs/btrfs/scrub.c
@@ -1784,7 +1784,7 @@ static void scrub_wr_submit(struct scrub_ctx *sctx)
* orders the requests before sending them to the driver which
* doubled the write performance on spinning disks when measured
* with Linux 3.5 */
- btrfsic_submit_bio(WRITE, sbio->bio);
+ btrfsic_submit_bio(REQ_OP_WRITE, 0, sbio->bio);
}
static void scrub_wr_bio_end_io(struct bio *bio)
@@ -2084,7 +2084,7 @@ static void scrub_submit(struct scrub_ctx *sctx)
sbio = sctx->bios[sctx->curr];
sctx->curr = -1;
scrub_pending_bio_inc(sctx);
- btrfsic_submit_bio(READ, sbio->bio);
+ btrfsic_submit_bio(REQ_OP_READ, 0, sbio->bio);
}
static int scrub_add_page_to_rd_bio(struct scrub_ctx *sctx,
diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c
index 6fc73586..3dfac71 100644
--- a/fs/btrfs/volumes.c
+++ b/fs/btrfs/volumes.c
@@ -367,7 +367,7 @@ loop_lock:
sync_pending = 0;
}
- btrfsic_submit_bio(cur->bi_rw, cur);
+ btrfsic_submit_bio(cur->bi_rw, 0, cur);
num_run++;
batch_run++;
@@ -5091,7 +5091,7 @@ void btrfs_put_bbio(struct btrfs_bio *bbio)
kfree(bbio);
}
-static int __btrfs_map_block(struct btrfs_fs_info *fs_info, int rw,
+static int __btrfs_map_block(struct btrfs_fs_info *fs_info, int op,
u64 logical, u64 *length,
struct btrfs_bio **bbio_ret,
int mirror_num, int need_raid_map)
@@ -5169,7 +5169,7 @@ static int __btrfs_map_block(struct btrfs_fs_info *fs_info, int rw,
raid56_full_stripe_start *= full_stripe_len;
}
- if (rw & REQ_DISCARD) {
+ if (op == REQ_OP_DISCARD) {
/* we don't discard raid56 yet */
if (map->type & BTRFS_BLOCK_GROUP_RAID56_MASK) {
ret = -EOPNOTSUPP;
@@ -5182,7 +5182,7 @@ static int __btrfs_map_block(struct btrfs_fs_info *fs_info, int rw,
For other RAID types and for RAID[56] reads, just allow a single
stripe (on a single disk). */
if ((map->type & BTRFS_BLOCK_GROUP_RAID56_MASK) &&
- (rw & REQ_WRITE)) {
+ (op == REQ_OP_WRITE)) {
max_len = stripe_len * nr_data_stripes(map) -
(offset - raid56_full_stripe_start);
} else {
@@ -5205,8 +5205,8 @@ static int __btrfs_map_block(struct btrfs_fs_info *fs_info, int rw,
btrfs_dev_replace_unlock(dev_replace);
if (dev_replace_is_ongoing && mirror_num == map->num_stripes + 1 &&
- !(rw & (REQ_WRITE | REQ_DISCARD | REQ_GET_READ_MIRRORS)) &&
- dev_replace->tgtdev != NULL) {
+ !(op == REQ_OP_WRITE || op == REQ_OP_DISCARD ||
+ op == REQ_GET_READ_MIRRORS) && dev_replace->tgtdev != NULL) {
/*
* in dev-replace case, for repair case (that's the only
* case where the mirror is selected explicitly when
@@ -5295,15 +5295,17 @@ static int __btrfs_map_block(struct btrfs_fs_info *fs_info, int rw,
(offset + *length);
if (map->type & BTRFS_BLOCK_GROUP_RAID0) {
- if (rw & REQ_DISCARD)
+ if (op == REQ_OP_DISCARD)
num_stripes = min_t(u64, map->num_stripes,
stripe_nr_end - stripe_nr_orig);
stripe_nr = div_u64_rem(stripe_nr, map->num_stripes,
&stripe_index);
- if (!(rw & (REQ_WRITE | REQ_DISCARD | REQ_GET_READ_MIRRORS)))
+ if (!(op == REQ_OP_WRITE || op == REQ_OP_DISCARD ||
+ op == REQ_GET_READ_MIRRORS))
mirror_num = 1;
} else if (map->type & BTRFS_BLOCK_GROUP_RAID1) {
- if (rw & (REQ_WRITE | REQ_DISCARD | REQ_GET_READ_MIRRORS))
+ if (op == REQ_OP_WRITE || op == REQ_OP_DISCARD ||
+ op == REQ_GET_READ_MIRRORS)
num_stripes = map->num_stripes;
else if (mirror_num)
stripe_index = mirror_num - 1;
@@ -5316,7 +5318,8 @@ static int __btrfs_map_block(struct btrfs_fs_info *fs_info, int rw,
}
} else if (map->type & BTRFS_BLOCK_GROUP_DUP) {
- if (rw & (REQ_WRITE | REQ_DISCARD | REQ_GET_READ_MIRRORS)) {
+ if (op == REQ_OP_WRITE || op == REQ_OP_DISCARD ||
+ op == REQ_GET_READ_MIRRORS) {
num_stripes = map->num_stripes;
} else if (mirror_num) {
stripe_index = mirror_num - 1;
@@ -5330,9 +5333,9 @@ static int __btrfs_map_block(struct btrfs_fs_info *fs_info, int rw,
stripe_nr = div_u64_rem(stripe_nr, factor, &stripe_index);
stripe_index *= map->sub_stripes;
- if (rw & (REQ_WRITE | REQ_GET_READ_MIRRORS))
+ if (op == REQ_OP_WRITE || op == REQ_GET_READ_MIRRORS)
num_stripes = map->sub_stripes;
- else if (rw & REQ_DISCARD)
+ else if (op == REQ_OP_DISCARD)
num_stripes = min_t(u64, map->sub_stripes *
(stripe_nr_end - stripe_nr_orig),
map->num_stripes);
@@ -5350,7 +5353,7 @@ static int __btrfs_map_block(struct btrfs_fs_info *fs_info, int rw,
} else if (map->type & BTRFS_BLOCK_GROUP_RAID56_MASK) {
if (need_raid_map &&
- ((rw & (REQ_WRITE | REQ_GET_READ_MIRRORS)) ||
+ ((op == REQ_OP_WRITE || op == REQ_GET_READ_MIRRORS) ||
mirror_num > 1)) {
/* push stripe_nr back to the start of the full stripe */
stripe_nr = div_u64(raid56_full_stripe_start,
@@ -5378,8 +5381,8 @@ static int __btrfs_map_block(struct btrfs_fs_info *fs_info, int rw,
/* We distribute the parity blocks across stripes */
div_u64_rem(stripe_nr + stripe_index, map->num_stripes,
&stripe_index);
- if (!(rw & (REQ_WRITE | REQ_DISCARD |
- REQ_GET_READ_MIRRORS)) && mirror_num <= 1)
+ if (!(op == REQ_OP_WRITE || op == REQ_OP_DISCARD ||
+ op == REQ_GET_READ_MIRRORS) && mirror_num <= 1)
mirror_num = 1;
}
} else {
@@ -5396,9 +5399,9 @@ static int __btrfs_map_block(struct btrfs_fs_info *fs_info, int rw,
num_alloc_stripes = num_stripes;
if (dev_replace_is_ongoing) {
- if (rw & (REQ_WRITE | REQ_DISCARD))
+ if (op == REQ_OP_WRITE || op == REQ_OP_DISCARD)
num_alloc_stripes <<= 1;
- if (rw & REQ_GET_READ_MIRRORS)
+ if (op == REQ_GET_READ_MIRRORS)
num_alloc_stripes++;
tgtdev_indexes = num_stripes;
}
@@ -5413,7 +5416,8 @@ static int __btrfs_map_block(struct btrfs_fs_info *fs_info, int rw,
/* build raid_map */
if (map->type & BTRFS_BLOCK_GROUP_RAID56_MASK &&
- need_raid_map && ((rw & (REQ_WRITE | REQ_GET_READ_MIRRORS)) ||
+ need_raid_map &&
+ ((op == REQ_OP_WRITE || op == REQ_GET_READ_MIRRORS) ||
mirror_num > 1)) {
u64 tmp;
unsigned rot;
@@ -5438,7 +5442,7 @@ static int __btrfs_map_block(struct btrfs_fs_info *fs_info, int rw,
RAID6_Q_STRIPE;
}
- if (rw & REQ_DISCARD) {
+ if (op == REQ_OP_DISCARD) {
u32 factor = 0;
u32 sub_stripes = 0;
u64 stripes_per_dev = 0;
@@ -5518,14 +5522,15 @@ static int __btrfs_map_block(struct btrfs_fs_info *fs_info, int rw,
}
}
- if (rw & (REQ_WRITE | REQ_GET_READ_MIRRORS))
+ if (op == REQ_OP_WRITE || op == REQ_GET_READ_MIRRORS)
max_errors = btrfs_chunk_max_errors(map);
if (bbio->raid_map)
sort_parity_stripes(bbio, num_stripes);
tgtdev_indexes = 0;
- if (dev_replace_is_ongoing && (rw & (REQ_WRITE | REQ_DISCARD)) &&
+ if (dev_replace_is_ongoing &&
+ (op == REQ_OP_WRITE || op == REQ_OP_DISCARD) &&
dev_replace->tgtdev != NULL) {
int index_where_to_add;
u64 srcdev_devid = dev_replace->srcdev->devid;
@@ -5560,7 +5565,7 @@ static int __btrfs_map_block(struct btrfs_fs_info *fs_info, int rw,
}
}
num_stripes = index_where_to_add;
- } else if (dev_replace_is_ongoing && (rw & REQ_GET_READ_MIRRORS) &&
+ } else if (dev_replace_is_ongoing && (op == REQ_GET_READ_MIRRORS) &&
dev_replace->tgtdev != NULL) {
u64 srcdev_devid = dev_replace->srcdev->devid;
int index_srcdev = 0;
@@ -5637,8 +5642,8 @@ int btrfs_map_block(struct btrfs_fs_info *fs_info, int rw,
u64 logical, u64 *length,
struct btrfs_bio **bbio_ret, int mirror_num)
{
- return __btrfs_map_block(fs_info, rw, logical, length, bbio_ret,
- mirror_num, 0);
+ return __btrfs_map_block(fs_info, rw, logical, length,
+ bbio_ret, mirror_num, 0);
}
/* For Scrub/replace */
@@ -5815,7 +5820,7 @@ static void btrfs_end_bio(struct bio *bio)
*/
static noinline void btrfs_schedule_bio(struct btrfs_root *root,
struct btrfs_device *device,
- int rw, struct bio *bio)
+ int op, int op_flags, struct bio *bio)
{
int should_queue = 1;
struct btrfs_pending_bios *pending_bios;
@@ -5826,9 +5831,9 @@ static noinline void btrfs_schedule_bio(struct btrfs_root *root,
}
/* don't bother with additional async steps for reads, right now */
- if (!(rw & REQ_WRITE)) {
+ if (op != REQ_OP_WRITE) {
bio_get(bio);
- btrfsic_submit_bio(rw, bio);
+ btrfsic_submit_bio(op, op_flags, bio);
bio_put(bio);
return;
}
@@ -5842,7 +5847,7 @@ static noinline void btrfs_schedule_bio(struct btrfs_root *root,
atomic_inc(&root->fs_info->nr_async_bios);
WARN_ON(bio->bi_next);
bio->bi_next = NULL;
- bio->bi_rw |= rw;
+ bio->bi_rw |= op | op_flags;
spin_lock(&device->io_lock);
if (bio->bi_rw & REQ_SYNC)
@@ -5868,7 +5873,7 @@ static noinline void btrfs_schedule_bio(struct btrfs_root *root,
static void submit_stripe_bio(struct btrfs_root *root, struct btrfs_bio *bbio,
struct bio *bio, u64 physical, int dev_nr,
- int rw, int async)
+ int op, int op_flags, int async)
{
struct btrfs_device *dev = bbio->stripes[dev_nr].dev;
@@ -5882,8 +5887,8 @@ static void submit_stripe_bio(struct btrfs_root *root, struct btrfs_bio *bbio,
rcu_read_lock();
name = rcu_dereference(dev->name);
- pr_debug("btrfs_map_bio: rw %d, sector=%llu, dev=%lu "
- "(%s id %llu), size=%u\n", rw,
+ pr_debug("btrfs_map_bio: rw %d,%d, sector=%llu, dev=%lu "
+ "(%s id %llu), size=%u\n", op, op_flags,
(u64)bio->bi_iter.bi_sector, (u_long)dev->bdev->bd_dev,
name->str, dev->devid, bio->bi_iter.bi_size);
rcu_read_unlock();
@@ -5894,9 +5899,9 @@ static void submit_stripe_bio(struct btrfs_root *root, struct btrfs_bio *bbio,
btrfs_bio_counter_inc_noblocked(root->fs_info);
if (async)
- btrfs_schedule_bio(root, dev, rw, bio);
+ btrfs_schedule_bio(root, dev, op, op_flags, bio);
else
- btrfsic_submit_bio(rw, bio);
+ btrfsic_submit_bio(op, op_flags, bio);
}
static void bbio_error(struct btrfs_bio *bbio, struct bio *bio, u64 logical)
@@ -5913,8 +5918,8 @@ static void bbio_error(struct btrfs_bio *bbio, struct bio *bio, u64 logical)
}
}
-int btrfs_map_bio(struct btrfs_root *root, int rw, struct bio *bio,
- int mirror_num, int async_submit)
+int btrfs_map_bio(struct btrfs_root *root, int op, int op_flags,
+ struct bio *bio, int mirror_num, int async_submit)
{
struct btrfs_device *dev;
struct bio *first_bio = bio;
@@ -5930,8 +5935,8 @@ int btrfs_map_bio(struct btrfs_root *root, int rw, struct bio *bio,
map_length = length;
btrfs_bio_counter_inc_blocked(root->fs_info);
- ret = __btrfs_map_block(root->fs_info, rw, logical, &map_length, &bbio,
- mirror_num, 1);
+ ret = __btrfs_map_block(root->fs_info, op, logical, &map_length, &bbio,
+ mirror_num, 1);
if (ret) {
btrfs_bio_counter_dec(root->fs_info);
return ret;
@@ -5947,7 +5952,7 @@ int btrfs_map_bio(struct btrfs_root *root, int rw, struct bio *bio,
if (bbio->raid_map) {
/* In this case, map_length has been set to the length of
a single stripe; not the whole write */
- if (rw & WRITE) {
+ if (op == REQ_OP_WRITE) {
ret = raid56_parity_write(root, bio, bbio, map_length);
} else {
ret = raid56_parity_recover(root, bio, bbio, map_length,
@@ -5966,7 +5971,8 @@ int btrfs_map_bio(struct btrfs_root *root, int rw, struct bio *bio,
for (dev_nr = 0; dev_nr < total_devs; dev_nr++) {
dev = bbio->stripes[dev_nr].dev;
- if (!dev || !dev->bdev || (rw & WRITE && !dev->writeable)) {
+ if (!dev || !dev->bdev ||
+ (op == REQ_OP_WRITE && !dev->writeable)) {
bbio_error(bbio, first_bio, logical);
continue;
}
@@ -5978,8 +5984,8 @@ int btrfs_map_bio(struct btrfs_root *root, int rw, struct bio *bio,
bio = first_bio;
submit_stripe_bio(root, bbio, bio,
- bbio->stripes[dev_nr].physical, dev_nr, rw,
- async_submit);
+ bbio->stripes[dev_nr].physical, dev_nr, op,
+ op_flags, async_submit);
}
btrfs_bio_counter_dec(root->fs_info);
return 0;
diff --git a/fs/btrfs/volumes.h b/fs/btrfs/volumes.h
index 595279a..22cd8c3 100644
--- a/fs/btrfs/volumes.h
+++ b/fs/btrfs/volumes.h
@@ -426,8 +426,8 @@ int btrfs_alloc_chunk(struct btrfs_trans_handle *trans,
struct btrfs_root *extent_root, u64 type);
void btrfs_mapping_init(struct btrfs_mapping_tree *tree);
void btrfs_mapping_tree_free(struct btrfs_mapping_tree *tree);
-int btrfs_map_bio(struct btrfs_root *root, int rw, struct bio *bio,
- int mirror_num, int async_submit);
+int btrfs_map_bio(struct btrfs_root *root, int op, int op_flags,
+ struct bio *bio, int mirror_num, int async_submit);
int btrfs_open_devices(struct btrfs_fs_devices *fs_devices,
fmode_t flags, void *holder);
int btrfs_scan_one_device(const char *path, fmode_t flags, void *holder,
--
1.8.3.1
On 11/04/2015 08:32 AM, [email protected] wrote:
> There are a couple new block layer commands we are trying to add support
> for in the near term:
>
> compare and write
> http://www.spinics.net/lists/target-devel/msg07826.html
>
> copy offload/extended copy/xcopy
> https://www.redhat.com/archives/dm-devel/2014-July/msg00070.html
>
> The problem is if we contine to add more commands we will have to one day
> extend the cmd_flags/bi_rw fields again. To prevent that, this patchset
> separates the operation (REQ_WRITE, REQ_DISCARD, REQ_WRITE_SAME, etc) from
> the flags (REQ_SYNC, REQ_QUIET, etc) in the bio and request structs. In the
> end of this set, we will have two fields bio->bi_op/request->op and
> bio->bi_rw/request->cmd_flags.
>
> The patches were made against Jens's linux-block tree's for-linus branch:
> https://git.kernel.org/cgit/linux/kernel/git/axboe/linux-block.git/log/?h=for-linus
> (last commit a22c4d7e34402ccdf3414f64c50365436eba7b93).
>
> I have done some basic testing for a lot of the drivers and filesystems,
> but I wanted to get comments before trying to track down more hardware/
> systems for testing.
>
>
> Known issues:
> - REQ_FLUSH is still a flag, but should probably be a operation.
> For lower level drivers like SCSI where we only get a flush, it makes
> more sense to be a operation. However, upper layers like filesystems
> can send down flushes with writes, so it is more of a flag for them.
> I am still working on this.
>
> - There is a regression with the dm flakey target. It currently
> cannot corrupt the operation values.
>
> - The patchset is a little awkward. It touches so much code,
> but I wanted to maintain git bisectibility, so there is lots of compat code
> left around until the last patches where everyting is cleaned up.
Hello Mike,
If you have to touch submit_bio() and submit_bio_wait(), how about
requiring the callers of these functions to set the cmd and flags
arguments in the bio structure and to leave out the cmd and flags
arguments from the submit_bio() and submit_bio_wait() functions? A
(compile-tested only) patch that implements this idea is available at
https://lkml.org/lkml/2014/6/2/173.
Bart.
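Sketched out, the interface Bart suggests would look roughly like the
following (the bi_op/bi_rw field names follow the end state described
in the cover letter; the values shown are illustrative only):

    bio->bi_op = REQ_OP_WRITE;  /* operation filled in by the caller */
    bio->bi_rw = REQ_SYNC;      /* flags filled in by the caller */
    submit_bio(bio);            /* no separate op/flags arguments */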
On 11/04/2015 10:49 AM, Bart Van Assche wrote:
> Hello Mike,
>
> If you have to touch submit_bio() and submit_bio_wait(), how about
> requiring the callers of these functions to set the cmd and flags
> arguments in the bio structure and to leave out the cmd and flags
> arguments from the submit_bio() and submit_bio_wait() functions ? A
> (compile tested only) patch that implements this idea is available at
> https://lkml.org/lkml/2014/6/2/173.
>
Yeah, I can do that.
On Wed, Nov 04, 2015 at 10:53:39AM -0600, Mike Christie wrote:
> > If you have to touch submit_bio() and submit_bio_wait(), how about
> > requiring the callers of these functions to set the cmd and flags
> > arguments in the bio structure and to leave out the cmd and flags
> > arguments from the submit_bio() and submit_bio_wait() functions ? A
> > (compile tested only) patch that implements this idea is available at
> > https://lkml.org/lkml/2014/6/2/173.
> >
>
> Yeah, I can do that.
I think this would be useful to do at the beginning of the series.
While this will have a huge trickle effect through the series, it should
make things a bit simpler overall. By leaving op 0 undefined it should
also help to create a nice error trap for unconverted code.
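Roughly, the error trap Christoph describes could look like the
hypothetical sketch below. Note that patch 1 currently starts the enum
at 0 (REQ_OP_READ) for compat, so this would only apply once the compat
values are dropped:

    /* hypothetical: leave 0 undefined so no valid operation uses it */
    enum req_op {
        REQ_OP_READ = 1,
        REQ_OP_WRITE,
        REQ_OP_DISCARD,
        REQ_OP_WRITE_SAME,
    };

    /* unconverted code that never sets bi_op is then easy to catch */
    WARN_ON_ONCE(bio->bi_op == 0);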
On 11/07/2015 04:23 AM, Christoph Hellwig wrote:
> On Wed, Nov 04, 2015 at 10:53:39AM -0600, Mike Christie wrote:
>>> If you have to touch submit_bio() and submit_bio_wait(), how about
>>> requiring the callers of these functions to set the cmd and flags
>>> arguments in the bio structure and to leave out the cmd and flags
>>> arguments from the submit_bio() and submit_bio_wait() functions ? A
>>> (compile tested only) patch that implements this idea is available at
>>> https://lkml.org/lkml/2014/6/2/173.
>>>
>>
>> Yeah, I can do that.
>
> I think this would be useful to do at the beginning of the series.
I just wanted to double-check that we want to do this.
We no longer have the bvec merge functions, so the original reason given
in the thread/patch Bart referenced is no longer valid.
Offlist it was suggested that dropping the argument from submit_bio
might still improve performance, but I modified xfs and dm, did some
testing, and did not see any difference.
So the change is not needed, and it would only be done because people
feel it would improve the interface.
On Wed, Nov 11, 2015 at 01:53:24AM -0600, Mike Christie wrote:
> We no longer have the bvec merge functions so the original reason given
> in the thread/patch Bart referenced is no longer valid.
>
> Offlist it was suggested that dropping the argument from submit_bio
> might still improve performance, but I modified xfs and dm and did some
> testing and did not see anything.
>
> So the change is not needed, and it would only be done because people
> feel it would improve the interface.
I would definitely prefer the changed interface, and to me it also
looks like it would make your overall patch simpler. If you disagree
feel free to keep it as-is for now and I'll do another pass later.
On Wed, Nov 11 2015 at 6:28am -0500,
Christoph Hellwig <[email protected]> wrote:
> On Wed, Nov 11, 2015 at 01:53:24AM -0600, Mike Christie wrote:
> > We no longer have the bvec merge functions so the original reason given
> > in the thread/patch Bart referenced is no longer valid.
> >
> > Offlist it was suggested that dropping the argument from submit_bio
> > might still improve performance, but I modified xfs and dm and did some
> > testing and did not see anything.
> >
> > So the change is not needed, and it would only be done because people
> > feel it would improve the interface.
>
> I would defintively prefer the changed interface, and to me it also
> looks like it would make your overall patch simpler. If you disagree
> feel free to keep it as-is for now and I'll do another pass later.
The less churn the better. But honestly, I'm not quite following the logic
of why all this flag-day churn is needed. BUT if it needs to happen,
because we'll soon hit a wall on supported operations via REQ_*, now is
as good a time as any -- but best to limit the change to the flags IMHO.