Hello Jens,
Add IORING_OP_FUSED_CMD, a special URING_CMD: the 1st SQE (primary) is
a 64-byte URING_CMD, and the 2nd 64-byte SQE (secondary) is a normal
OP. The primary command provides the device/file io buffer and submits
the OP represented by the secondary SQE using that buffer. This solves
the ublk zero copy problem easily, since the io buffer shares its
lifetime with the primary command.
The secondary OP is actually submitted from the kernel; part of this
idea comes from Xiaoguang's ublk ebpf patchset. However, this patchset
submits the secondary OP just like a normal OP issued from userspace,
that is, SQE order is kept and batch handling is done too.
Please see the detailed design in the commit log of the 2nd patch; one
big point is how to handle buffer lifetime/ownership.
This approach makes it easy to support zero copy for ublk/fuse devices.
Basically userspace can specify any sub-buffer of the ublk block
request buffer provided by the fused command just by setting
'offset/len' in the secondary SQE. This is flexible enough to implement
io mappings such as mirror, striped, ... (see the sketch below).
The 4th & 5th patches enable fused secondary support for the following OPs:
OP_READ/OP_WRITE
OP_SEND/OP_RECV/OP_SEND_ZC
The other ublk patches clean up the ublk driver and implement the fused
command for supporting zero copy.
The userspace code is available here (supports the 128-byte SQE fused
command only):
https://github.com/ming1/ubdsrv/tree/fused-cmd-zc-for-v5
All three ublk targets (loop, nbd and qcow2) support zero copy by
passing:
ublk add -t [loop|nbd|qcow2] -z ....
Basic fs mount/kernel building and builtin tests pass, and no
regression is observed in xfstests over ublk-loop with zero copy.
Also added a liburing test case covering the fused command, based on
blktests' miniublk (supports the 64-byte normal SQE only):
https://github.com/ming1/liburing/tree/fused_cmd_miniublk_for_v5
The performance improvement is obvious for memory-bandwidth-bound
workloads: for example, 1~2X improvement in 64K/512K BS IO tests on
loop with a ramfs backing file, and ublk-null shows a 5X IOPS
improvement in big-BS tests once the copy is avoided.
Please review and consider for v6.4.
V5:
- rebase on for-6.4/io_uring
- rename to primary/secondary as suggested by Jens
- reserve interface room for extending to multiple secondary OPs in the
future; this isn't a must, since the same can be achieved by submitting
multiple fused commands with the same primary request
- rename to primary/secondary in ublksrv and liburing test code
V4:
- improve API naming (patches 1 ~ 4)
- improve documentation and commit logs (patch 2)
- add buffer direction bit to opdef, suggested by Jens(patch 2)
- add a ublk zero copy document covering the technical requirements
(mostly related to buffer lifetime), why splice isn't a good fit, and
how the fused command solves it (patch 17)
- fix sparse warning(patch 7)
- support the 64-byte SQE fused command (patch 3)
V3:
- fix build warning reported by kernel test robot
- drop the patch checking fused flags on existing drivers with
->uring_cmd(), which isn't necessary, since we do not do that when
adding a new ioctl or uring command
- inline io_init_req() in the core code, so just export
io_init_secondary_req()
- return result of failed secondary request unconditionally since REQ_F_CQE_SKIP
will be cleared
- pass xfstest over ublk-loop
V2:
- don't reuse io_mapped_ubuf (io_uring)
- remove REQ_F_FUSED_MASTER_BIT (io_uring)
- fix compile warning (io_uring)
- rebase on v6.3-rc1 (io_uring)
- grab io request reference when handling fused command
- simplify ublk_copy_user_pages() by iov iterator
- add read()/write() for userspace to read/write the ublk io buffer, so
that some corner cases (reading zeroes, passthrough requests such as
report zones) can be handled easily in case of zero copy; this also
helps a complete switch to zero copy
- misc cleanup
Ming Lei (16):
io_uring: increase io_kiocb->flags into 64bit
io_uring: add IORING_OP_FUSED_CMD
io_uring: support normal SQE for fused command
io_uring: support OP_READ/OP_WRITE for fused secondary request
io_uring: support OP_SEND_ZC/OP_RECV for fused secondary request
block: ublk_drv: add common exit handling
block: ublk_drv: don't consider flush request in map/unmap io
block: ublk_drv: add two helpers to clean up map/unmap request
block: ublk_drv: clean up several helpers
block: ublk_drv: cleanup 'struct ublk_map_data'
block: ublk_drv: cleanup ublk_copy_user_pages
block: ublk_drv: grab request reference when the request is handled by
userspace
block: ublk_drv: support to copy any part of request pages
block: ublk_drv: add read()/write() support for ublk char device
block: ublk_drv: don't check buffer in case of zero copy
block: ublk_drv: apply io_uring FUSED_CMD for supporting zero copy
Documentation/block/ublk.rst | 126 ++++++-
drivers/block/ublk_drv.c | 603 ++++++++++++++++++++++++++-------
include/linux/io_uring.h | 50 ++-
include/linux/io_uring_types.h | 77 +++--
include/uapi/linux/io_uring.h | 11 +-
include/uapi/linux/ublk_cmd.h | 37 +-
io_uring/Makefile | 2 +-
io_uring/fused_cmd.c | 267 +++++++++++++++
io_uring/fused_cmd.h | 11 +
io_uring/io_uring.c | 51 ++-
io_uring/io_uring.h | 5 +
io_uring/net.c | 30 +-
io_uring/opdef.c | 22 ++
io_uring/opdef.h | 7 +
io_uring/rw.c | 21 ++
15 files changed, 1142 insertions(+), 178 deletions(-)
create mode 100644 io_uring/fused_cmd.c
create mode 100644 io_uring/fused_cmd.h
--
2.39.2
Start to allow the fused secondary request to support OP_READ/OP_WRITE,
with the buffer retrieved from the primary request. Once the secondary
request completes, the buffer is returned to the primary request.
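For example, a secondary OP_READ that fills the first 4KB of the
primary buffer from a backing file carries the buffer offset in
sqe->addr (a hedged sketch reusing the plain liburing read helper; the
offset is resolved by __io_import_iovec() in the hunk below):

	/* addr is an offset into the primary buffer, not a pointer */
	io_uring_prep_read(sec_sqe, backing_fd, (void *)0 /* offset */,
			   4096, file_off);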
Signed-off-by: Ming Lei <[email protected]>
---
io_uring/opdef.c | 4 ++++
io_uring/rw.c | 21 +++++++++++++++++++++
2 files changed, 25 insertions(+)
diff --git a/io_uring/opdef.c b/io_uring/opdef.c
index 63b90e8e65f8..d81c9afd65ed 100644
--- a/io_uring/opdef.c
+++ b/io_uring/opdef.c
@@ -235,6 +235,8 @@ const struct io_issue_def io_issue_defs[] = {
.ioprio = 1,
.iopoll = 1,
.iopoll_queue = 1,
+ .fused_secondary = 1,
+ .buf_dir = WRITE,
.prep = io_prep_rw,
.issue = io_read,
},
@@ -248,6 +250,8 @@ const struct io_issue_def io_issue_defs[] = {
.ioprio = 1,
.iopoll = 1,
.iopoll_queue = 1,
+ .fused_secondary = 1,
+ .buf_dir = READ,
.prep = io_prep_rw,
.issue = io_write,
},
diff --git a/io_uring/rw.c b/io_uring/rw.c
index 5431caf1e331..d25eeee67c65 100644
--- a/io_uring/rw.c
+++ b/io_uring/rw.c
@@ -19,6 +19,7 @@
#include "kbuf.h"
#include "rsrc.h"
#include "rw.h"
+#include "fused_cmd.h"
struct io_rw {
/* NOTE: kiocb has the file as the first member, so don't do it here */
@@ -371,6 +372,18 @@ static struct iovec *__io_import_iovec(int ddir, struct io_kiocb *req,
size_t sqe_len;
ssize_t ret;
+	/*
+	 * A fused secondary OP passes a buffer offset via sqe->addr, since
+	 * the fused cmd buffer's mapped start address is zero.
+	 */
+ if (req->flags & REQ_F_FUSED_SECONDARY) {
+ ret = io_import_buf_for_secondary(rw->addr, rw->len, ddir,
+ iter, req);
+ if (ret)
+ return ERR_PTR(ret);
+ return NULL;
+ }
+
if (opcode == IORING_OP_READ_FIXED || opcode == IORING_OP_WRITE_FIXED) {
ret = io_import_fixed(ddir, iter, req->imu, rw->addr, rw->len);
if (ret)
@@ -443,11 +456,19 @@ static inline loff_t *io_kiocb_ppos(struct kiocb *kiocb)
*/
static ssize_t loop_rw_iter(int ddir, struct io_rw *rw, struct iov_iter *iter)
{
+ struct io_kiocb *req = cmd_to_io_kiocb(rw);
struct kiocb *kiocb = &rw->kiocb;
struct file *file = kiocb->ki_filp;
ssize_t ret = 0;
loff_t *ppos;
+	/*
+	 * A fused secondary request has no user buffer, so ->read/->write
+	 * can't be supported.
+	 */
+ if (req->flags & REQ_F_FUSED_SECONDARY)
+ return -EOPNOTSUPP;
+
/*
* Don't support polled IO through this interface, and we can't
* support non-blocking either. For the latter, this just causes
--
2.39.2
Once zero copy is supported via the fused uring command, userspace can
no longer read from or write to the io buffer, which is inflexible for
applications:

1) some targets need to zero the buffer explicitly, such as when
reading an unmapped qcow2 cluster

2) some targets need to support passthrough commands, such as zoned
report zones, and still need to read/write the io buffer

Support pread()/pwrite() on the ublk char device for reading/writing
the request io buffer, so the ublk server can handle the above cases
easily.

This also helps make zero copy the primary option; non-zero-copy
becomes a legacy code path, since the added read()/write() covers the
non-zero-copy feature.
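For example, a qcow2 target that reads an unmapped cluster can zero the
io buffer with a plain pwrite() (a minimal sketch; 'cdev_fd' is the
ublk char device fd, and ublk_pos() is added to the uapi header by this
patch):

	#include <unistd.h>
	#include <linux/ublk_cmd.h>	/* ublk_pos() */

	char zeroes[4096] = {0};
	__u64 pos = ublk_pos(q_id, tag, 0 /* offset in this io's buffer */);

	/* fill the READ request's pages, i.e. the data the ublk device
	 * will return to the block layer for this io */
	ssize_t ret = pwrite(cdev_fd, zeroes, sizeof(zeroes), pos);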
Signed-off-by: Ming Lei <[email protected]>
---
drivers/block/ublk_drv.c | 131 ++++++++++++++++++++++++++++++++++
include/uapi/linux/ublk_cmd.h | 31 +++++++-
2 files changed, 161 insertions(+), 1 deletion(-)
diff --git a/drivers/block/ublk_drv.c b/drivers/block/ublk_drv.c
index 32304942ab87..03ad33686808 100644
--- a/drivers/block/ublk_drv.c
+++ b/drivers/block/ublk_drv.c
@@ -1322,6 +1322,36 @@ static void ublk_handle_need_get_data(struct ublk_device *ub, int q_id,
ublk_queue_cmd(ubq, req);
}
+static inline struct request *__ublk_check_and_get_req(struct ublk_device *ub,
+ struct ublk_queue *ubq, int tag, size_t offset)
+{
+ struct request *req;
+
+ if (!ublk_support_zc(ubq))
+ return NULL;
+
+ req = blk_mq_tag_to_rq(ub->tag_set.tags[ubq->q_id], tag);
+ if (!req)
+ return NULL;
+
+ if (!ublk_get_req_ref(ubq, req))
+ return NULL;
+
+ if (unlikely(!blk_mq_request_started(req) || req->tag != tag))
+ goto fail_put;
+
+ if (!ublk_rq_has_data(req))
+ goto fail_put;
+
+ if (offset > blk_rq_bytes(req))
+ goto fail_put;
+
+ return req;
+fail_put:
+ ublk_put_req_ref(ubq, req);
+ return NULL;
+}
+
static int ublk_ch_uring_cmd(struct io_uring_cmd *cmd, unsigned int issue_flags)
{
struct ublksrv_io_cmd *ub_cmd = (struct ublksrv_io_cmd *)cmd->cmd;
@@ -1423,11 +1453,112 @@ static int ublk_ch_uring_cmd(struct io_uring_cmd *cmd, unsigned int issue_flags)
return -EIOCBQUEUED;
}
+static inline bool ublk_check_ubuf_dir(const struct request *req,
+ int ubuf_dir)
+{
+ /* copy ubuf to request pages */
+ if (req_op(req) == REQ_OP_READ && ubuf_dir == ITER_SOURCE)
+ return true;
+
+ /* copy request pages to ubuf */
+ if (req_op(req) == REQ_OP_WRITE && ubuf_dir == ITER_DEST)
+ return true;
+
+ return false;
+}
+
+static struct request *ublk_check_and_get_req(struct kiocb *iocb,
+ struct iov_iter *iter, size_t *off, int dir)
+{
+ struct ublk_device *ub = iocb->ki_filp->private_data;
+ struct ublk_queue *ubq;
+ struct request *req;
+ size_t buf_off;
+ u16 tag, q_id;
+
+ if (!ub)
+ return ERR_PTR(-EACCES);
+
+ if (!user_backed_iter(iter))
+ return ERR_PTR(-EACCES);
+
+ if (ub->dev_info.state == UBLK_S_DEV_DEAD)
+ return ERR_PTR(-EACCES);
+
+ tag = ublk_pos_to_tag(iocb->ki_pos);
+ q_id = ublk_pos_to_hwq(iocb->ki_pos);
+ buf_off = ublk_pos_to_buf_offset(iocb->ki_pos);
+
+ if (q_id >= ub->dev_info.nr_hw_queues)
+ return ERR_PTR(-EINVAL);
+
+ ubq = ublk_get_queue(ub, q_id);
+ if (!ubq)
+ return ERR_PTR(-EINVAL);
+
+ if (tag >= ubq->q_depth)
+ return ERR_PTR(-EINVAL);
+
+ req = __ublk_check_and_get_req(ub, ubq, tag, buf_off);
+ if (!req)
+ return ERR_PTR(-EINVAL);
+
+ if (!req->mq_hctx || !req->mq_hctx->driver_data)
+ goto fail;
+
+ if (!ublk_check_ubuf_dir(req, dir))
+ goto fail;
+
+ *off = buf_off;
+ return req;
+fail:
+ ublk_put_req_ref(ubq, req);
+ return ERR_PTR(-EACCES);
+}
+
+static ssize_t ublk_ch_read_iter(struct kiocb *iocb, struct iov_iter *to)
+{
+ struct ublk_queue *ubq;
+ struct request *req;
+ size_t buf_off;
+ size_t ret;
+
+ req = ublk_check_and_get_req(iocb, to, &buf_off, ITER_DEST);
+ if (unlikely(IS_ERR(req)))
+ return PTR_ERR(req);
+
+ ret = ublk_copy_user_pages(req, buf_off, to, ITER_DEST);
+ ubq = req->mq_hctx->driver_data;
+ ublk_put_req_ref(ubq, req);
+
+ return ret;
+}
+
+static ssize_t ublk_ch_write_iter(struct kiocb *iocb, struct iov_iter *from)
+{
+ struct ublk_queue *ubq;
+ struct request *req;
+ size_t buf_off;
+ size_t ret;
+
+ req = ublk_check_and_get_req(iocb, from, &buf_off, ITER_SOURCE);
+ if (unlikely(IS_ERR(req)))
+ return PTR_ERR(req);
+
+ ret = ublk_copy_user_pages(req, buf_off, from, ITER_SOURCE);
+ ubq = req->mq_hctx->driver_data;
+ ublk_put_req_ref(ubq, req);
+
+ return ret;
+}
+
static const struct file_operations ublk_ch_fops = {
.owner = THIS_MODULE,
.open = ublk_ch_open,
.release = ublk_ch_release,
.llseek = no_llseek,
+ .read_iter = ublk_ch_read_iter,
+ .write_iter = ublk_ch_write_iter,
.uring_cmd = ublk_ch_uring_cmd,
.mmap = ublk_ch_mmap,
};
diff --git a/include/uapi/linux/ublk_cmd.h b/include/uapi/linux/ublk_cmd.h
index f6238ccc7800..d1a6b3dc0327 100644
--- a/include/uapi/linux/ublk_cmd.h
+++ b/include/uapi/linux/ublk_cmd.h
@@ -54,7 +54,36 @@
#define UBLKSRV_IO_BUF_OFFSET 0x80000000
/* tag bit is 12bit, so at most 4096 IOs for each queue */
-#define UBLK_MAX_QUEUE_DEPTH 4096
+#define UBLK_TAG_BITS 12
+#define UBLK_MAX_QUEUE_DEPTH (1U << UBLK_TAG_BITS)
+
+/* used for locating each io buffer for pread()/pwrite() on char device */
+#define UBLK_BUFS_SIZE_BITS 42
+#define UBLK_BUFS_SIZE_MASK ((1ULL << UBLK_BUFS_SIZE_BITS) - 1)
+#define UBLK_BUF_SIZE_BITS (UBLK_BUFS_SIZE_BITS - UBLK_TAG_BITS)
+#define UBLK_BUF_MAX_SIZE (1ULL << UBLK_BUF_SIZE_BITS)
+
+static inline __u16 ublk_pos_to_hwq(__u64 pos)
+{
+ return pos >> UBLK_BUFS_SIZE_BITS;
+}
+
+static inline __u32 ublk_pos_to_buf_offset(__u64 pos)
+{
+ return (pos & UBLK_BUFS_SIZE_MASK) & (UBLK_BUF_MAX_SIZE - 1);
+}
+
+static inline __u16 ublk_pos_to_tag(__u64 pos)
+{
+ return (pos & UBLK_BUFS_SIZE_MASK) >> UBLK_BUF_SIZE_BITS;
+}
+
+/* offset of a single buffer, which has to be < UBLK_BUF_MAX_SIZE */
+static inline __u64 ublk_pos(__u16 q_id, __u16 tag, __u32 offset)
+{
+ return (((__u64)q_id) << UBLK_BUFS_SIZE_BITS) |
+ ((((__u64)tag) << UBLK_BUF_SIZE_BITS) + offset);
+}
/*
* zero copy requires 4k block size, and can remap ublk driver's io
--
2.39.2
Clean up ublk_copy_user_pages() by using an iov iterator; the code gets
much simpler and more readable than before.
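The resulting core pattern is (a minimal sketch of the two kernel APIs
this patch switches to, not the patch itself):

	struct iovec iov;
	struct iov_iter iter;
	ssize_t len;
	size_t off;

	/* wrap one user buffer range in an iov_iter ... */
	import_single_range(ITER_DEST, u64_to_user_ptr(uaddr), ubytes,
			    &iov, &iter);
	/* ... then pin up to UBLK_MAX_PIN_PAGES user pages at a time;
	 * 'off' returns the byte offset into the first pinned page */
	len = iov_iter_get_pages2(&iter, pages, iov_iter_count(&iter),
				  UBLK_MAX_PIN_PAGES, &off);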
Signed-off-by: Ming Lei <[email protected]>
---
drivers/block/ublk_drv.c | 112 +++++++++++++++++----------------------
1 file changed, 49 insertions(+), 63 deletions(-)
diff --git a/drivers/block/ublk_drv.c b/drivers/block/ublk_drv.c
index fdccbf5fdaa1..cca0e95a89d8 100644
--- a/drivers/block/ublk_drv.c
+++ b/drivers/block/ublk_drv.c
@@ -419,49 +419,39 @@ static const struct block_device_operations ub_fops = {
#define UBLK_MAX_PIN_PAGES 32
-struct ublk_map_data {
- const struct request *rq;
- unsigned long ubuf;
- unsigned int len;
-};
-
struct ublk_io_iter {
struct page *pages[UBLK_MAX_PIN_PAGES];
- unsigned pg_off; /* offset in the 1st page in pages */
- int nr_pages; /* how many page pointers in pages */
struct bio *bio;
struct bvec_iter iter;
};
-static inline unsigned ublk_copy_io_pages(struct ublk_io_iter *data,
- unsigned max_bytes, bool to_vm)
+/* copy 'total' bytes between the pinned pages and the request bio vecs */
+static void ublk_copy_io_pages(struct ublk_io_iter *data,
+ size_t total, size_t pg_off, int dir)
{
- const unsigned total = min_t(unsigned, max_bytes,
- PAGE_SIZE - data->pg_off +
- ((data->nr_pages - 1) << PAGE_SHIFT));
unsigned done = 0;
unsigned pg_idx = 0;
while (done < total) {
struct bio_vec bv = bio_iter_iovec(data->bio, data->iter);
- const unsigned int bytes = min3(bv.bv_len, total - done,
- (unsigned)(PAGE_SIZE - data->pg_off));
+ unsigned int bytes = min3(bv.bv_len, (unsigned)total - done,
+ (unsigned)(PAGE_SIZE - pg_off));
void *bv_buf = bvec_kmap_local(&bv);
void *pg_buf = kmap_local_page(data->pages[pg_idx]);
- if (to_vm)
- memcpy(pg_buf + data->pg_off, bv_buf, bytes);
+ if (dir == ITER_DEST)
+ memcpy(pg_buf + pg_off, bv_buf, bytes);
else
- memcpy(bv_buf, pg_buf + data->pg_off, bytes);
+ memcpy(bv_buf, pg_buf + pg_off, bytes);
kunmap_local(pg_buf);
kunmap_local(bv_buf);
/* advance page array */
- data->pg_off += bytes;
- if (data->pg_off == PAGE_SIZE) {
+ pg_off += bytes;
+ if (pg_off == PAGE_SIZE) {
pg_idx += 1;
- data->pg_off = 0;
+ pg_off = 0;
}
done += bytes;
@@ -475,41 +465,40 @@ static inline unsigned ublk_copy_io_pages(struct ublk_io_iter *data,
data->iter = data->bio->bi_iter;
}
}
-
- return done;
}
-static int ublk_copy_user_pages(struct ublk_map_data *data, bool to_vm)
+/*
+ * Copy data between the request pages and the io_iter, starting from
+ * the beginning of the request.
+ */
+static size_t ublk_copy_user_pages(const struct request *req,
+ struct iov_iter *uiter, int dir)
{
- const unsigned int gup_flags = to_vm ? FOLL_WRITE : 0;
- const unsigned long start_vm = data->ubuf;
- unsigned int done = 0;
struct ublk_io_iter iter = {
- .pg_off = start_vm & (PAGE_SIZE - 1),
- .bio = data->rq->bio,
- .iter = data->rq->bio->bi_iter,
+ .bio = req->bio,
+ .iter = req->bio->bi_iter,
};
- const unsigned int nr_pages = round_up(data->len +
- (start_vm & (PAGE_SIZE - 1)), PAGE_SIZE) >> PAGE_SHIFT;
-
- while (done < nr_pages) {
- const unsigned to_pin = min_t(unsigned, UBLK_MAX_PIN_PAGES,
- nr_pages - done);
- unsigned i, len;
-
- iter.nr_pages = get_user_pages_fast(start_vm +
- (done << PAGE_SHIFT), to_pin, gup_flags,
- iter.pages);
- if (iter.nr_pages <= 0)
- return done == 0 ? iter.nr_pages : done;
- len = ublk_copy_io_pages(&iter, data->len, to_vm);
- for (i = 0; i < iter.nr_pages; i++) {
- if (to_vm)
+ size_t done = 0;
+
+ while (iov_iter_count(uiter) && iter.bio) {
+ unsigned nr_pages;
+		ssize_t len;
+		size_t off;
+ int i;
+
+ len = iov_iter_get_pages2(uiter, iter.pages,
+ iov_iter_count(uiter),
+ UBLK_MAX_PIN_PAGES, &off);
+ if (len <= 0)
+ return done;
+
+ ublk_copy_io_pages(&iter, len, off, dir);
+ nr_pages = DIV_ROUND_UP(len + off, PAGE_SIZE);
+ for (i = 0; i < nr_pages; i++) {
+ if (dir == ITER_DEST)
set_page_dirty(iter.pages[i]);
put_page(iter.pages[i]);
}
- data->len -= len;
- done += iter.nr_pages;
+ done += len;
}
return done;
@@ -536,15 +525,14 @@ static int ublk_map_io(const struct ublk_queue *ubq, const struct request *req,
* context is pretty fast, see ublk_pin_user_pages
*/
if (ublk_need_map_req(req)) {
- struct ublk_map_data data = {
- .rq = req,
- .ubuf = io->addr,
- .len = rq_bytes,
- };
+ struct iov_iter iter;
+ struct iovec iov;
+ const int dir = ITER_DEST;
- ublk_copy_user_pages(&data, true);
+ import_single_range(dir, u64_to_user_ptr(io->addr), rq_bytes,
+ &iov, &iter);
- return rq_bytes - data.len;
+ return ublk_copy_user_pages(req, &iter, dir);
}
return rq_bytes;
}
@@ -556,17 +544,15 @@ static int ublk_unmap_io(const struct ublk_queue *ubq,
const unsigned int rq_bytes = blk_rq_bytes(req);
if (ublk_need_unmap_req(req)) {
- struct ublk_map_data data = {
- .rq = req,
- .ubuf = io->addr,
- .len = io->res,
- };
+ struct iov_iter iter;
+ struct iovec iov;
+ const int dir = ITER_SOURCE;
WARN_ON_ONCE(io->res > rq_bytes);
- ublk_copy_user_pages(&data, false);
-
- return io->res - data.len;
+ import_single_range(dir, u64_to_user_ptr(io->addr), io->res,
+ &iov, &iter);
+ return ublk_copy_user_pages(req, &iter, dir);
}
return rq_bytes;
}
--
2.39.2
In case of zero copy, the ublk server doesn't need to pre-allocate an
io buffer and provide it to the driver any more.

Meanwhile, don't set the buffer any more in case of zero copy; instead,
userspace can use pread()/pwrite() to read from/write to the io request
buffer, which is easier & simpler from the userspace viewpoint.
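From the ublk server side, the contract change is just (a sketch; 'zc'
mirrors the -z option, 'io_buf' is the server's pre-allocated buffer):

	/* with zero copy, no buffer address is passed with the cmd;
	 * pread()/pwrite() on the char device replace the copy path */
	ub_cmd->addr = zc ? 0 : (__u64)(uintptr_t)io_buf;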
Signed-off-by: Ming Lei <[email protected]>
---
drivers/block/ublk_drv.c | 23 ++++++++++++++---------
1 file changed, 14 insertions(+), 9 deletions(-)
diff --git a/drivers/block/ublk_drv.c b/drivers/block/ublk_drv.c
index 03ad33686808..a49b4de5ae1e 100644
--- a/drivers/block/ublk_drv.c
+++ b/drivers/block/ublk_drv.c
@@ -1410,25 +1410,30 @@ static int ublk_ch_uring_cmd(struct io_uring_cmd *cmd, unsigned int issue_flags)
if (io->flags & UBLK_IO_FLAG_OWNED_BY_SRV)
goto out;
/* FETCH_RQ has to provide IO buffer if NEED GET DATA is not enabled */
- if (!ub_cmd->addr && !ublk_need_get_data(ubq))
- goto out;
+ if (!ublk_support_zc(ubq)) {
+ if (!ub_cmd->addr && !ublk_need_get_data(ubq))
+ goto out;
+ io->addr = ub_cmd->addr;
+ }
io->cmd = cmd;
io->flags |= UBLK_IO_FLAG_ACTIVE;
- io->addr = ub_cmd->addr;
-
ublk_mark_io_ready(ub, ubq);
break;
case UBLK_IO_COMMIT_AND_FETCH_REQ:
req = blk_mq_tag_to_rq(ub->tag_set.tags[ub_cmd->q_id], tag);
+
+ if (!(io->flags & UBLK_IO_FLAG_OWNED_BY_SRV))
+ goto out;
/*
* COMMIT_AND_FETCH_REQ has to provide IO buffer if NEED GET DATA is
* not enabled or it is Read IO.
*/
- if (!ub_cmd->addr && (!ublk_need_get_data(ubq) || req_op(req) == REQ_OP_READ))
- goto out;
- if (!(io->flags & UBLK_IO_FLAG_OWNED_BY_SRV))
- goto out;
- io->addr = ub_cmd->addr;
+ if (!ublk_support_zc(ubq)) {
+ if (!ub_cmd->addr && (!ublk_need_get_data(ubq) ||
+ req_op(req) == REQ_OP_READ))
+ goto out;
+ io->addr = ub_cmd->addr;
+ }
io->flags |= UBLK_IO_FLAG_ACTIVE;
io->cmd = cmd;
ublk_commit_completion(ub, ub_cmd);
--
2.39.2