2023-09-20 15:32:28

by Nitesh Shetty

Subject: [PATCH v16 00/12] Implement copy offload support

The patch series covers the points discussed in the past, most recently
at LSFMM'23[0].
We have covered the initially agreed requirements in this patch set, along
with further features suggested by the community.

This is the next iteration of our previous patch set, v15[1].
Copy offload is performed using two bios:
1. Take a plug.
2. The first bio, containing the source info, is prepared and sent;
a request is formed.
3. This is followed by preparing and sending the second bio, containing
the destination info.
4. This bio is merged with the request containing the source info.
5. The plug is released, and the request containing the source and
destination bios is sent to the driver.
This design helps avoid putting a payload (token) in the request, as
sending a payload that is not data to the device is considered a bad
approach (see the sketch below).
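
The core of this sequence, as implemented by blkdev_copy_offload() in
patch 03 (splitting, refcounting and error handling omitted; pos_in,
pos_out and len come from the caller):

    struct bio *src_bio, *dst_bio;
    struct blk_plug plug;

    src_bio = bio_alloc(bdev, 0, REQ_OP_COPY_SRC, gfp);
    src_bio->bi_iter.bi_size = len;
    src_bio->bi_iter.bi_sector = pos_in >> SECTOR_SHIFT;

    blk_start_plug(&plug);                          /* step 1 */
    /* blk_next_bio() submits src_bio (step 2) and returns the chained dst bio */
    dst_bio = blk_next_bio(src_bio, bdev, 0, REQ_OP_COPY_DST, gfp);
    dst_bio->bi_iter.bi_size = len;
    dst_bio->bi_iter.bi_sector = pos_out >> SECTOR_SHIFT;
    dst_bio->bi_end_io = blkdev_copy_offload_dst_endio;
    dst_bio->bi_private = offload_io;

    submit_bio(dst_bio);                            /* steps 3-4 */
    blk_finish_plug(&plug);                         /* step 5 */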

So copy offload works only for request-based storage drivers.
We can fall back to copy emulation when the copy offload capability is
absent.

Overall series supports:
========================
1. Driver
- NVMe Copy command (single NS, TP 4065), including support
in nvme-target (for block and file back ends).

2. Block layer
- Block-generic copy (REQ_OP_COPY_DST/SRC) operation, with an
interface accommodating two block devices
- Merging of copy requests in the request layer
- Emulation, for in-kernel users when offload is natively
absent
- dm-linear support (for cases not requiring a split)

3. User-interface
- copy_file_range (userspace sketch below)
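
A minimal userspace sketch of the copy_file_range path (the device path is
hypothetical; both fds must be opened with O_DIRECT for the offload path to
be taken):

    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/types.h>
    #include <unistd.h>

    int main(void)
    {
            /* copy 1 MiB from offset 0 to offset 1 MiB on the same device */
            int fd_in = open("/dev/nvme0n1", O_RDONLY | O_DIRECT);
            int fd_out = open("/dev/nvme0n1", O_WRONLY | O_DIRECT);
            off64_t off_in = 0, off_out = 1 << 20;
            ssize_t ret;

            if (fd_in < 0 || fd_out < 0)
                    return 1;

            ret = copy_file_range(fd_in, &off_in, fd_out, &off_out, 1 << 20, 0);
            if (ret < 0)
                    perror("copy_file_range");
            else
                    printf("copied %zd bytes\n", ret);

            close(fd_in);
            close(fd_out);
            return ret < 0;
    }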

Testing
=======
Copy offload can be tested on:
a. QEMU: NVMe simple copy (TP 4065), by setting the nvme-ns
parameters mssrl, mcl and msrc. For more info see [2].
b. Null block device
c. NVMe Fabrics loopback
d. blktests[3]

Emulation can be tested on any device.

Copy workloads can be driven with the modified fio[4].

Infra and plumbing:
===================
We populate the copy_file_range callback in def_blk_fops.
For devices that support copy offload, blkdev_copy_offload is used to
achieve an in-device copy.
However, for cases where the device doesn't support offload,
generic_copy_file_range is used.
For in-kernel users (such as NVMe fabrics), blkdev_copy_offload is used
if the device is copy-offload capable, or else emulation is used via
blkdev_copy_emulation.
The checks in generic_copy_file_range are modified to support block devices.
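
For in-kernel users, the dispatch between offload and emulation looks
roughly like this, following the kernel-doc of blkdev_copy_offload() and
blkdev_copy_emulation() (bdev, src_pos, dst_pos, len, copy_endio and
private are illustrative names; nvmet's bdev handler in patch 08 is the
real user):

    ssize_t ret;

    /* a non-zero copy limit means the device advertises copy offload */
    if (bdev_max_copy_sectors(bdev))
            ret = blkdev_copy_offload(bdev, src_pos, dst_pos, len,
                                      copy_endio, private, GFP_KERNEL);
    else
            ret = blkdev_copy_emulation(bdev, src_pos, bdev, dst_pos, len,
                                        copy_endio, private, GFP_KERNEL);
    if (ret == -EIOCBQUEUED)
            return;         /* async: completion reported via copy_endio() */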

Blktests[3]
======================
tests/block/035-040: Run copy offload and emulation on a null
block device.
tests/block/050,055: Run copy offload and emulation on a test
nvme block device.
tests/nvme/056-067: Create a loop-backed fabrics device and
run copy offload and emulation.

Future Work
===========
- loopback device copy offload support
- upstream fio to use copy offload
- upstream blktest to test copy offload
- update man pages for copy_file_range
- expand in-kernel users of copy offload

These are to be taken up after this minimal series is agreed upon.

Additional links:
=================
[0] https://lore.kernel.org/linux-nvme/CA+1E3rJ7BZ7LjQXXTdX+-0Edz=zT14mmPGMiVCzUgB33C60tbQ@mail.gmail.com/
https://lore.kernel.org/linux-nvme/[email protected]/
https://lore.kernel.org/linux-nvme/[email protected]/
[1] https://lore.kernel.org/all/[email protected]/
[2] https://qemu-project.gitlab.io/qemu/system/devices/nvme.html#simple-copy
[3] https://github.com/nitesh-shetty/blktests/tree/feat/copy_offload/v15
[4] https://github.com/OpenMPDK/fio/tree/copyoffload-3.35-v14

Thanks for review!

Changes since v15:
=================
- fs, nvmet: don't fall back to copy emulation on copy offload IO
failure; the user can still use emulation by disabling
device offload (Hannes)
- block: patch and function description changes (Hannes)
- added Reviewed-by from Hannes and Luis.

Changes since v14:
=================
- block: (Bart Van Assche)
1. BLK_ prefix addition to COPY_MAX_BYTES and COPY_MAX_SEGMENTS
2. Improved function, patch, and cover-letter descriptions
3. Simplified refcount updating.
- null-blk, nvme:
4. static warning fixes (kernel test robot)

Changes since v13:
=================
- block:
1. Simplified copy offload and emulation helpers; the caller
now needs to decide between offload and emulation fallback
2. src,dst bio order change (Christoph Hellwig)
3. refcount changes similar to dio (Christoph Hellwig)
4. Single outstanding IO for copy emulation (Christoph Hellwig)
5. use copy_max_sectors to identify copy offload
capability and other reviews (Damien, Christoph)
6. Return status in endio handler (Christoph Hellwig)
- nvme-fabrics: fallback to emulation in case of partial
offload completion
- in-kernel user addition (Ming Lei)
- indentation, documentation, minor fixes, misc changes (Damien,
Christoph)
- blktests changes to test kernel changes

Changes since v12:
=================
- block,nvme: Replaced the token-based approach with a request-based,
single-namespace-capable approach (Christoph Hellwig)

Changes since v11:
=================
- Documentation: Improved documentation (Damien Le Moal)
- block,nvme: ssize_t return values (Darrick J. Wong)
- block: token is allocated to SECTOR_SIZE (Matthew Wilcox)
- block: mem leak fix (Maurizio Lombardi)

Changes since v10:
=================
- NVMeOF: optimization in NVMe fabrics (Chaitanya Kulkarni)
- NVMeOF: sparse warnings (kernel test robot)

Changes since v9:
=================
- null_blk, improved documentation, minor fixes (Chaitanya Kulkarni)
- fio, expanded testing and minor fixes (Vincent Fu)

Changes since v8:
=================
- null_blk, copy_max_bytes_hw is made a configfs parameter
(Damien Le Moal)
- Negative error handling in copy_file_range (Christian Brauner)
- minor fixes, better documentation (Damien Le Moal)
- fio upgraded to 3.34 (Vincent Fu)

Changes since v7:
=================
- null block copy offload support for testing (Damien Le Moal)
- adding direct flag check for copy offload to block device,
as we are using generic_copy_file_range for cached cases.
- Minor fixes

Changes since v6:
=================
- copy_file_range instead of ioctl for direct block device
- Remove support for multi range (vectored) copy
- Remove ioctl interface for copy.
- Remove offload support in dm kcopyd.

Changes since v5:
=================
- Addition of blktests (Chaitanya Kulkarni)
- Minor fix for fabrics file backed path
- Remove buggy zonefs copy file range implementation.

Changes since v4:
=================
- make the offload and emulation design asynchronous (Hannes
Reinecke)
- fabrics loopback support
- sysfs naming improvements (Damien Le Moal)
- use kfree() instead of kvfree() in cio_await_completion
(Damien Le Moal)
- use ranges instead of rlist to represent range_entry (Damien
Le Moal)
- change argument ordering in blk_copy_offload suggested (Damien
Le Moal)
- removed multiple copy limit and merged into only one limit
(Damien Le Moal)
- wrap overly long lines (Damien Le Moal)
- other naming improvements and cleanups (Damien Le Moal)
- correctly format the code example in description (Damien Le
Moal)
- mark blk_copy_offload as static (kernel test robot)

Changes since v3:
=================
- added copy_file_range support for zonefs
- added documentation about new sysfs entries
- incorporated review comments on v3
- minor fixes

Changes since v2:
=================
- fixed possible race condition reported by Damien Le Moal
- new sysfs controls as suggested by Damien Le Moal
- fixed possible memory leak reported by Dan Carpenter, lkp
- minor fixes

Changes since v1:
=================
- sysfs documentation (Greg KH)
- 2 bios for copy operation (Bart Van Assche, Mikulas Patocka,
Martin K. Petersen, Douglas Gilbert)
- better payload design (Darrick J. Wong)

Anuj Gupta (1):
fs/read_write: Enable copy_file_range for block device.

Nitesh Shetty (11):
block: Introduce queue limits and sysfs for copy-offload support
Add infrastructure for copy offload in block and request layer.
block: add copy offload support
block: add emulation for copy
fs, block: copy_file_range for def_blk_ops for direct block device
nvme: add copy offload support
nvmet: add copy command support for bdev and file ns
dm: Add support for copy offload
dm: Enable copy offload for dm-linear target
null: Enable trace capability for null block
null_blk: add support for copy offload

Documentation/ABI/stable/sysfs-block | 23 ++
Documentation/block/null_blk.rst | 5 +
block/blk-core.c | 7 +
block/blk-lib.c | 424 +++++++++++++++++++++++++++
block/blk-merge.c | 41 +++
block/blk-settings.c | 24 ++
block/blk-sysfs.c | 36 +++
block/blk.h | 16 +
block/elevator.h | 1 +
block/fops.c | 25 ++
drivers/block/null_blk/Makefile | 2 -
drivers/block/null_blk/main.c | 100 ++++++-
drivers/block/null_blk/null_blk.h | 1 +
drivers/block/null_blk/trace.h | 25 ++
drivers/block/null_blk/zoned.c | 1 -
drivers/md/dm-linear.c | 1 +
drivers/md/dm-table.c | 37 +++
drivers/md/dm.c | 7 +
drivers/nvme/host/constants.c | 1 +
drivers/nvme/host/core.c | 79 +++++
drivers/nvme/host/trace.c | 19 ++
drivers/nvme/target/admin-cmd.c | 9 +-
drivers/nvme/target/io-cmd-bdev.c | 71 +++++
drivers/nvme/target/io-cmd-file.c | 50 ++++
drivers/nvme/target/nvmet.h | 1 +
drivers/nvme/target/trace.c | 19 ++
fs/read_write.c | 8 +-
include/linux/bio.h | 6 +-
include/linux/blk_types.h | 10 +
include/linux/blkdev.h | 22 ++
include/linux/device-mapper.h | 3 +
include/linux/nvme.h | 43 ++-
32 files changed, 1098 insertions(+), 19 deletions(-)


base-commit: 7fc7222d9680366edeecc219c21ca96310bdbc10
--
2.35.1.500.gb896f729e2


2023-09-20 16:21:17

by Nitesh Shetty

Subject: [PATCH v16 04/12] block: add emulation for copy

For devices which do not support copy, copy emulation is added.
It is required for in-kernel users like fabrics, where a file descriptor is
not available and hence they can't use copy_file_range.
Copy emulation is implemented by reading from the source into an
intermediate buffer and writing that buffer out to the destination.
At present the only in-kernel user of emulation is fabrics.
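
A minimal synchronous in-kernel caller, per the kernel-doc below (passing a
NULL endio/private selects the blocking path; bdev, src_pos, dst_pos and
len are illustrative names):

    ssize_t copied;

    /* emulated copy of 'len' bytes from src_pos to dst_pos on 'bdev' */
    copied = blkdev_copy_emulation(bdev, src_pos, bdev, dst_pos, len,
                                   NULL, NULL, GFP_KERNEL);
    if (copied < 0)
            pr_err("copy emulation failed: %zd\n", copied);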

Signed-off-by: Nitesh Shetty <[email protected]>
Signed-off-by: Vincent Fu <[email protected]>
Signed-off-by: Anuj Gupta <[email protected]>
---
block/blk-lib.c | 223 +++++++++++++++++++++++++++++++++++++++++
include/linux/blkdev.h | 4 +
2 files changed, 227 insertions(+)

diff --git a/block/blk-lib.c b/block/blk-lib.c
index 50d10fa3c4c5..da3594d25a3f 100644
--- a/block/blk-lib.c
+++ b/block/blk-lib.c
@@ -26,6 +26,20 @@ struct blkdev_copy_offload_io {
loff_t offset;
};

+/* Keeps track of single outstanding copy emulation IO */
+struct blkdev_copy_emulation_io {
+ struct blkdev_copy_io *cio;
+ struct work_struct emulation_work;
+ void *buf;
+ ssize_t buf_len;
+ loff_t pos_in;
+ loff_t pos_out;
+ ssize_t len;
+ struct block_device *bdev_in;
+ struct block_device *bdev_out;
+ gfp_t gfp;
+};
+
static sector_t bio_discard_limit(struct block_device *bdev, sector_t sector)
{
unsigned int discard_granularity = bdev_discard_granularity(bdev);
@@ -316,6 +330,215 @@ ssize_t blkdev_copy_offload(struct block_device *bdev, loff_t pos_in,
}
EXPORT_SYMBOL_GPL(blkdev_copy_offload);

+static void *blkdev_copy_alloc_buf(ssize_t req_size, ssize_t *alloc_size,
+ gfp_t gfp)
+{
+ int min_size = PAGE_SIZE;
+ char *buf;
+
+ while (req_size >= min_size) {
+ buf = kvmalloc(req_size, gfp);
+ if (buf) {
+ *alloc_size = req_size;
+ return buf;
+ }
+ req_size >>= 1;
+ }
+
+ return NULL;
+}
+
+static struct bio *bio_map_buf(void *data, unsigned int len, gfp_t gfp)
+{
+ unsigned long kaddr = (unsigned long)data;
+ unsigned long end = (kaddr + len + PAGE_SIZE - 1) >> PAGE_SHIFT;
+ unsigned long start = kaddr >> PAGE_SHIFT;
+ const int nr_pages = end - start;
+ bool is_vmalloc = is_vmalloc_addr(data);
+ struct page *page;
+ int offset, i;
+ struct bio *bio;
+
+ bio = bio_kmalloc(nr_pages, gfp);
+ if (!bio)
+ return ERR_PTR(-ENOMEM);
+ bio_init(bio, NULL, bio->bi_inline_vecs, nr_pages, 0);
+
+ if (is_vmalloc) {
+ flush_kernel_vmap_range(data, len);
+ bio->bi_private = data;
+ }
+
+ offset = offset_in_page(kaddr);
+ for (i = 0; i < nr_pages; i++) {
+ unsigned int bytes = PAGE_SIZE - offset;
+
+ if (len <= 0)
+ break;
+
+ if (bytes > len)
+ bytes = len;
+
+ if (!is_vmalloc)
+ page = virt_to_page(data);
+ else
+ page = vmalloc_to_page(data);
+ if (bio_add_page(bio, page, bytes, offset) < bytes) {
+ /* we don't support partial mappings */
+ bio_uninit(bio);
+ kfree(bio);
+ return ERR_PTR(-EINVAL);
+ }
+
+ data += bytes;
+ len -= bytes;
+ offset = 0;
+ }
+
+ return bio;
+}
+
+static void blkdev_copy_emulation_work(struct work_struct *work)
+{
+ struct blkdev_copy_emulation_io *emulation_io = container_of(work,
+ struct blkdev_copy_emulation_io, emulation_work);
+ struct blkdev_copy_io *cio = emulation_io->cio;
+ struct bio *read_bio, *write_bio;
+ loff_t pos_in = emulation_io->pos_in, pos_out = emulation_io->pos_out;
+ ssize_t rem, chunk;
+ int ret = 0;
+
+ for (rem = emulation_io->len; rem > 0; rem -= chunk) {
+ chunk = min_t(int, emulation_io->buf_len, rem);
+
+ read_bio = bio_map_buf(emulation_io->buf,
+ emulation_io->buf_len,
+ emulation_io->gfp);
+ if (IS_ERR(read_bio)) {
+ ret = PTR_ERR(read_bio);
+ break;
+ }
+ read_bio->bi_opf = REQ_OP_READ | REQ_SYNC;
+ bio_set_dev(read_bio, emulation_io->bdev_in);
+ read_bio->bi_iter.bi_sector = pos_in >> SECTOR_SHIFT;
+ read_bio->bi_iter.bi_size = chunk;
+ ret = submit_bio_wait(read_bio);
+ kfree(read_bio);
+ if (ret)
+ break;
+
+ write_bio = bio_map_buf(emulation_io->buf,
+ emulation_io->buf_len,
+ emulation_io->gfp);
+ if (IS_ERR(write_bio)) {
+ ret = PTR_ERR(write_bio);
+ break;
+ }
+ write_bio->bi_opf = REQ_OP_WRITE | REQ_SYNC;
+ bio_set_dev(write_bio, emulation_io->bdev_out);
+ write_bio->bi_iter.bi_sector = pos_out >> SECTOR_SHIFT;
+ write_bio->bi_iter.bi_size = chunk;
+ ret = submit_bio_wait(write_bio);
+ kfree(write_bio);
+ if (ret)
+ break;
+
+ pos_in += chunk;
+ pos_out += chunk;
+ }
+ cio->status = ret;
+ kvfree(emulation_io->buf);
+ kfree(emulation_io);
+ blkdev_copy_endio(cio);
+}
+
+static inline ssize_t queue_max_hw_bytes(struct request_queue *q)
+{
+ return min_t(ssize_t, queue_max_hw_sectors(q) << SECTOR_SHIFT,
+ queue_max_segments(q) << PAGE_SHIFT);
+}
+/*
+ * @bdev_in: source block device
+ * @pos_in: source offset
+ * @bdev_out: destination block device
+ * @pos_out: destination offset
+ * @len: length in bytes to be copied
+ * @endio: endio function to be called on completion of copy operation,
+ * for synchronous operation this should be NULL
+ * @private: endio function will be called with this private data,
+ * for synchronous operation this should be NULL
+ * @gfp_mask: memory allocation flags (for bio_alloc)
+ *
+ * For synchronous operation returns the length of bytes copied or error
+ * For asynchronous operation returns -EIOCBQUEUED or error
+ *
+ * Description:
+ * If native copy offload feature is absent, caller can use this function
+ * to perform copy.
+ * We store information required to perform the copy along with temporary
+ * buffer allocation. We async punt copy emulation to a worker. And worker
+ * performs copy in 2 steps.
+ * 1. Read data from source to temporary buffer
+ * 2. Write data to destination from temporary buffer
+ */
+ssize_t blkdev_copy_emulation(struct block_device *bdev_in, loff_t pos_in,
+ struct block_device *bdev_out, loff_t pos_out,
+ size_t len, void (*endio)(void *, int, ssize_t),
+ void *private, gfp_t gfp)
+{
+ struct request_queue *in = bdev_get_queue(bdev_in);
+ struct request_queue *out = bdev_get_queue(bdev_out);
+ struct blkdev_copy_emulation_io *emulation_io;
+ struct blkdev_copy_io *cio;
+ ssize_t ret;
+ size_t max_hw_bytes = min(queue_max_hw_bytes(in),
+ queue_max_hw_bytes(out));
+
+ ret = blkdev_copy_sanity_check(bdev_in, pos_in, bdev_out, pos_out, len);
+ if (ret)
+ return ret;
+
+ cio = kzalloc(sizeof(*cio), GFP_KERNEL);
+ if (!cio)
+ return -ENOMEM;
+
+ cio->waiter = current;
+ cio->copied = len;
+ cio->endio = endio;
+ cio->private = private;
+
+ emulation_io = kzalloc(sizeof(*emulation_io), gfp);
+ if (!emulation_io)
+ goto err_free_cio;
+ emulation_io->cio = cio;
+ INIT_WORK(&emulation_io->emulation_work, blkdev_copy_emulation_work);
+ emulation_io->pos_in = pos_in;
+ emulation_io->pos_out = pos_out;
+ emulation_io->len = len;
+ emulation_io->bdev_in = bdev_in;
+ emulation_io->bdev_out = bdev_out;
+ emulation_io->gfp = gfp;
+
+ emulation_io->buf = blkdev_copy_alloc_buf(min(max_hw_bytes, len),
+ &emulation_io->buf_len, gfp);
+ if (!emulation_io->buf)
+ goto err_free_emulation_io;
+
+ schedule_work(&emulation_io->emulation_work);
+
+ if (cio->endio)
+ return -EIOCBQUEUED;
+
+ return blkdev_copy_wait_io_completion(cio);
+
+err_free_emulation_io:
+ kfree(emulation_io);
+err_free_cio:
+ kfree(cio);
+ return -ENOMEM;
+}
+EXPORT_SYMBOL_GPL(blkdev_copy_emulation);
+
static int __blkdev_issue_write_zeroes(struct block_device *bdev,
sector_t sector, sector_t nr_sects, gfp_t gfp_mask,
struct bio **biop, unsigned flags)
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 5405499bcf22..e0a832a1c3a7 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -1046,6 +1046,10 @@ ssize_t blkdev_copy_offload(struct block_device *bdev, loff_t pos_in,
loff_t pos_out, size_t len,
void (*endio)(void *, int, ssize_t),
void *private, gfp_t gfp_mask);
+ssize_t blkdev_copy_emulation(struct block_device *bdev_in, loff_t pos_in,
+ struct block_device *bdev_out, loff_t pos_out,
+ size_t len, void (*endio)(void *, int, ssize_t),
+ void *private, gfp_t gfp);

#define BLKDEV_ZERO_NOUNMAP (1 << 0) /* do not free blocks */
#define BLKDEV_ZERO_NOFALLBACK (1 << 1) /* don't write explicit zeroes */
--
2.35.1.500.gb896f729e2

2023-09-20 18:16:22

by Nitesh Shetty

Subject: [PATCH v16 08/12] nvmet: add copy command support for bdev and file ns

Add support for handling the nvme_cmd_copy command on the target.

For bdev-ns, if the backing device supports copy offload, we call the
device copy offload path (blkdev_copy_offload).
In the absence of device copy offload capability, we use copy emulation
(blkdev_copy_emulation).

For file-ns we call vfs_copy_file_range to service the request.

Currently the target always advertises copy capability by setting
NVME_CTRL_ONCS_COPY in the controller ONCS.

The loop target has copy support, which can be used to test copy offload.
Trace event support is added for nvme_cmd_copy.

Reviewed-by: Hannes Reinecke <[email protected]>
Signed-off-by: Nitesh Shetty <[email protected]>
Signed-off-by: Anuj Gupta <[email protected]>
---
drivers/nvme/target/admin-cmd.c | 9 +++-
drivers/nvme/target/io-cmd-bdev.c | 71 +++++++++++++++++++++++++++++++
drivers/nvme/target/io-cmd-file.c | 50 ++++++++++++++++++++++
drivers/nvme/target/nvmet.h | 1 +
drivers/nvme/target/trace.c | 19 +++++++++
5 files changed, 148 insertions(+), 2 deletions(-)

diff --git a/drivers/nvme/target/admin-cmd.c b/drivers/nvme/target/admin-cmd.c
index 39cb570f833d..4e1a6ca09937 100644
--- a/drivers/nvme/target/admin-cmd.c
+++ b/drivers/nvme/target/admin-cmd.c
@@ -433,8 +433,7 @@ static void nvmet_execute_identify_ctrl(struct nvmet_req *req)
id->nn = cpu_to_le32(NVMET_MAX_NAMESPACES);
id->mnan = cpu_to_le32(NVMET_MAX_NAMESPACES);
id->oncs = cpu_to_le16(NVME_CTRL_ONCS_DSM |
- NVME_CTRL_ONCS_WRITE_ZEROES);
-
+ NVME_CTRL_ONCS_WRITE_ZEROES | NVME_CTRL_ONCS_COPY);
/* XXX: don't report vwc if the underlying device is write through */
id->vwc = NVME_CTRL_VWC_PRESENT;

@@ -536,6 +535,12 @@ static void nvmet_execute_identify_ns(struct nvmet_req *req)

if (req->ns->bdev)
nvmet_bdev_set_limits(req->ns->bdev, id);
+ else {
+ id->msrc = (__force u8)to0based(BIO_MAX_VECS - 1);
+ id->mssrl = cpu_to_le16(BIO_MAX_VECS <<
+ (PAGE_SHIFT - SECTOR_SHIFT));
+ id->mcl = cpu_to_le32(le16_to_cpu(id->mssrl));
+ }

/*
* We just provide a single LBA format that matches what the
diff --git a/drivers/nvme/target/io-cmd-bdev.c b/drivers/nvme/target/io-cmd-bdev.c
index 468833675cc9..2d5cef6788be 100644
--- a/drivers/nvme/target/io-cmd-bdev.c
+++ b/drivers/nvme/target/io-cmd-bdev.c
@@ -46,6 +46,18 @@ void nvmet_bdev_set_limits(struct block_device *bdev, struct nvme_id_ns *id)
id->npda = id->npdg;
/* NOWS = Namespace Optimal Write Size */
id->nows = to0based(bdev_io_opt(bdev) / bdev_logical_block_size(bdev));
+
+ if (bdev_max_copy_sectors(bdev)) {
+ id->msrc = id->msrc;
+ id->mssrl = cpu_to_le16((bdev_max_copy_sectors(bdev) <<
+ SECTOR_SHIFT) / bdev_logical_block_size(bdev));
+ id->mcl = cpu_to_le32((__force u32)id->mssrl);
+ } else {
+ id->msrc = (__force u8)to0based(BIO_MAX_VECS - 1);
+ id->mssrl = cpu_to_le16((BIO_MAX_VECS << PAGE_SHIFT) /
+ bdev_logical_block_size(bdev));
+ id->mcl = cpu_to_le32((__force u32)id->mssrl);
+ }
}

void nvmet_bdev_ns_disable(struct nvmet_ns *ns)
@@ -449,6 +461,61 @@ static void nvmet_bdev_execute_write_zeroes(struct nvmet_req *req)
}
}

+static void nvmet_bdev_copy_endio(void *private, int status,
+ ssize_t copied)
+{
+ struct nvmet_req *rq = (struct nvmet_req *)private;
+ u16 nvme_status;
+
+ if (copied == rq->copy_len)
+ rq->cqe->result.u32 = cpu_to_le32(1);
+ else
+ rq->cqe->result.u32 = cpu_to_le32(0);
+
+ nvme_status = errno_to_nvme_status(rq, status);
+ nvmet_req_complete(rq, nvme_status);
+}
+
+/*
+ * At present we handle only one range entry, since copy offload is aligned with
+ * copy_file_range, only one entry is passed from block layer.
+ */
+static void nvmet_bdev_execute_copy(struct nvmet_req *rq)
+{
+ struct nvme_copy_range range;
+ struct nvme_command *cmd = rq->cmd;
+ ssize_t ret;
+ off_t dst, src;
+
+ u16 status;
+
+ status = nvmet_copy_from_sgl(rq, 0, &range, sizeof(range));
+ if (status)
+ goto err_rq_complete;
+
+ dst = le64_to_cpu(cmd->copy.sdlba) << rq->ns->blksize_shift;
+ src = le64_to_cpu(range.slba) << rq->ns->blksize_shift;
+ rq->copy_len = (range.nlb + 1) << rq->ns->blksize_shift;
+
+ if (bdev_max_copy_sectors(rq->ns->bdev)) {
+ ret = blkdev_copy_offload(rq->ns->bdev, dst, src, rq->copy_len,
+ nvmet_bdev_copy_endio,
+ (void *)rq, GFP_KERNEL);
+ } else {
+ ret = blkdev_copy_emulation(rq->ns->bdev, dst,
+ rq->ns->bdev, src, rq->copy_len,
+ nvmet_bdev_copy_endio,
+ (void *)rq, GFP_KERNEL);
+ }
+ if (ret == -EIOCBQUEUED)
+ return;
+
+ rq->cqe->result.u32 = cpu_to_le32(0);
+ status = blk_to_nvme_status(rq, ret);
+err_rq_complete:
+ nvmet_req_complete(rq, status);
+}
+
u16 nvmet_bdev_parse_io_cmd(struct nvmet_req *req)
{
switch (req->cmd->common.opcode) {
@@ -467,6 +534,10 @@ u16 nvmet_bdev_parse_io_cmd(struct nvmet_req *req)
case nvme_cmd_write_zeroes:
req->execute = nvmet_bdev_execute_write_zeroes;
return 0;
+ case nvme_cmd_copy:
+ req->execute = nvmet_bdev_execute_copy;
+ return 0;
+
default:
return nvmet_report_invalid_opcode(req);
}
diff --git a/drivers/nvme/target/io-cmd-file.c b/drivers/nvme/target/io-cmd-file.c
index 2d068439b129..4524cfffa4c6 100644
--- a/drivers/nvme/target/io-cmd-file.c
+++ b/drivers/nvme/target/io-cmd-file.c
@@ -322,6 +322,47 @@ static void nvmet_file_dsm_work(struct work_struct *w)
}
}

+static void nvmet_file_copy_work(struct work_struct *w)
+{
+ struct nvmet_req *req = container_of(w, struct nvmet_req, f.work);
+ int nr_range = req->cmd->copy.nr_range + 1;
+ u16 status = 0;
+ int src, id;
+ ssize_t len, ret;
+ loff_t pos;
+
+ pos = le64_to_cpu(req->cmd->copy.sdlba) << req->ns->blksize_shift;
+ if (unlikely(pos + req->transfer_len > req->ns->size)) {
+ nvmet_req_complete(req, errno_to_nvme_status(req, -ENOSPC));
+ return;
+ }
+
+ for (id = 0 ; id < nr_range; id++) {
+ struct nvme_copy_range range;
+
+ status = nvmet_copy_from_sgl(req, id * sizeof(range), &range,
+ sizeof(range));
+ if (status)
+ break;
+
+ src = (le64_to_cpu(range.slba) << (req->ns->blksize_shift));
+ len = (le16_to_cpu(range.nlb) + 1) << (req->ns->blksize_shift);
+ ret = vfs_copy_file_range(req->ns->file, src, req->ns->file,
+ pos, len, 0);
+ pos += ret;
+ if (ret != len) {
+ req->cqe->result.u32 = cpu_to_le32(id);
+ if (ret < 0)
+ status = errno_to_nvme_status(req, ret);
+ else
+ status = errno_to_nvme_status(req, -EIO);
+ break;
+ }
+ }
+
+ nvmet_req_complete(req, status);
+}
+
static void nvmet_file_execute_dsm(struct nvmet_req *req)
{
if (!nvmet_check_data_len_lte(req, nvmet_dsm_len(req)))
@@ -330,6 +371,12 @@ static void nvmet_file_execute_dsm(struct nvmet_req *req)
queue_work(nvmet_wq, &req->f.work);
}

+static void nvmet_file_execute_copy(struct nvmet_req *req)
+{
+ INIT_WORK(&req->f.work, nvmet_file_copy_work);
+ queue_work(nvmet_wq, &req->f.work);
+}
+
static void nvmet_file_write_zeroes_work(struct work_struct *w)
{
struct nvmet_req *req = container_of(w, struct nvmet_req, f.work);
@@ -376,6 +423,9 @@ u16 nvmet_file_parse_io_cmd(struct nvmet_req *req)
case nvme_cmd_write_zeroes:
req->execute = nvmet_file_execute_write_zeroes;
return 0;
+ case nvme_cmd_copy:
+ req->execute = nvmet_file_execute_copy;
+ return 0;
default:
return nvmet_report_invalid_opcode(req);
}
diff --git a/drivers/nvme/target/nvmet.h b/drivers/nvme/target/nvmet.h
index 8cfd60f3b564..395f3af28413 100644
--- a/drivers/nvme/target/nvmet.h
+++ b/drivers/nvme/target/nvmet.h
@@ -393,6 +393,7 @@ struct nvmet_req {
struct device *p2p_client;
u16 error_loc;
u64 error_slba;
+ size_t copy_len;
};

#define NVMET_MAX_MPOOL_BVEC 16
diff --git a/drivers/nvme/target/trace.c b/drivers/nvme/target/trace.c
index bff454d46255..551fdf029381 100644
--- a/drivers/nvme/target/trace.c
+++ b/drivers/nvme/target/trace.c
@@ -92,6 +92,23 @@ static const char *nvmet_trace_dsm(struct trace_seq *p, u8 *cdw10)
return ret;
}

+static const char *nvmet_trace_copy(struct trace_seq *p, u8 *cdw10)
+{
+ const char *ret = trace_seq_buffer_ptr(p);
+ u64 sdlba = get_unaligned_le64(cdw10);
+ u8 nr_range = get_unaligned_le16(cdw10 + 8);
+ u16 control = get_unaligned_le16(cdw10 + 10);
+ u32 dsmgmt = get_unaligned_le32(cdw10 + 12);
+ u32 reftag = get_unaligned_le32(cdw10 + 16);
+
+ trace_seq_printf(p,
+ "sdlba=%llu, nr_range=%u, ctrl=1x%x, dsmgmt=%u, reftag=%u",
+ sdlba, nr_range, control, dsmgmt, reftag);
+ trace_seq_putc(p, 0);
+
+ return ret;
+}
+
static const char *nvmet_trace_common(struct trace_seq *p, u8 *cdw10)
{
const char *ret = trace_seq_buffer_ptr(p);
@@ -129,6 +146,8 @@ const char *nvmet_trace_parse_nvm_cmd(struct trace_seq *p,
return nvmet_trace_read_write(p, cdw10);
case nvme_cmd_dsm:
return nvmet_trace_dsm(p, cdw10);
+ case nvme_cmd_copy:
+ return nvmet_trace_copy(p, cdw10);
default:
return nvmet_trace_common(p, cdw10);
}
--
2.35.1.500.gb896f729e2

2023-09-20 19:29:04

by Nitesh Shetty

Subject: [PATCH v16 06/12] fs, block: copy_file_range for def_blk_ops for direct block device

For a block device opened with O_DIRECT, use copy_file_range to issue
device copy offload, or fall back to generic_copy_file_range in case the
device copy offload capability is absent or the device files are not
opened with O_DIRECT.

Reviewed-by: Hannes Reinecke <[email protected]>
Signed-off-by: Anuj Gupta <[email protected]>
Signed-off-by: Nitesh Shetty <[email protected]>
---
block/fops.c | 25 +++++++++++++++++++++++++
1 file changed, 25 insertions(+)

diff --git a/block/fops.c b/block/fops.c
index acff3d5d22d4..6aa537c0e24f 100644
--- a/block/fops.c
+++ b/block/fops.c
@@ -735,6 +735,30 @@ static ssize_t blkdev_read_iter(struct kiocb *iocb, struct iov_iter *to)
return ret;
}

+static ssize_t blkdev_copy_file_range(struct file *file_in, loff_t pos_in,
+ struct file *file_out, loff_t pos_out,
+ size_t len, unsigned int flags)
+{
+ struct block_device *in_bdev = I_BDEV(bdev_file_inode(file_in));
+ struct block_device *out_bdev = I_BDEV(bdev_file_inode(file_out));
+ ssize_t copied = 0;
+
+ if ((in_bdev == out_bdev) && bdev_max_copy_sectors(in_bdev) &&
+ (file_in->f_iocb_flags & IOCB_DIRECT) &&
+ (file_out->f_iocb_flags & IOCB_DIRECT)) {
+ copied = blkdev_copy_offload(in_bdev, pos_in, pos_out, len,
+ NULL, NULL, GFP_KERNEL);
+ if (copied < 0)
+ copied = 0;
+ } else {
+ copied = generic_copy_file_range(file_in, pos_in + copied,
+ file_out, pos_out + copied,
+ len - copied, flags);
+ }
+
+ return copied;
+}
+
#define BLKDEV_FALLOC_FL_SUPPORTED \
(FALLOC_FL_KEEP_SIZE | FALLOC_FL_PUNCH_HOLE | \
FALLOC_FL_ZERO_RANGE | FALLOC_FL_NO_HIDE_STALE)
@@ -828,6 +852,7 @@ const struct file_operations def_blk_fops = {
.splice_read = filemap_splice_read,
.splice_write = iter_file_splice_write,
.fallocate = blkdev_fallocate,
+ .copy_file_range = blkdev_copy_file_range,
};

static __init int blkdev_init(void)
--
2.35.1.500.gb896f729e2

2023-09-20 19:31:54

by Nitesh Shetty

Subject: [PATCH v16 12/12] null_blk: add support for copy offload

The implementation is based on the existing read and write infrastructure.
copy_max_bytes: a new configfs and module parameter is introduced, which
can be used to set the hardware/driver supported maximum copy limit.
Only the request-based queue mode supports copy offload.
Trace events are added for copy IO tracing.

Reviewed-by: Hannes Reinecke <[email protected]>
Suggested-by: Damien Le Moal <[email protected]>
Signed-off-by: Anuj Gupta <[email protected]>
Signed-off-by: Nitesh Shetty <[email protected]>
Signed-off-by: Vincent Fu <[email protected]>
---
Documentation/block/null_blk.rst | 5 ++
drivers/block/null_blk/main.c | 97 ++++++++++++++++++++++++++++++-
drivers/block/null_blk/null_blk.h | 1 +
drivers/block/null_blk/trace.h | 23 ++++++++
4 files changed, 123 insertions(+), 3 deletions(-)

diff --git a/Documentation/block/null_blk.rst b/Documentation/block/null_blk.rst
index 4dd78f24d10a..6153e02fcf13 100644
--- a/Documentation/block/null_blk.rst
+++ b/Documentation/block/null_blk.rst
@@ -149,3 +149,8 @@ zone_size=[MB]: Default: 256
zone_nr_conv=[nr_conv]: Default: 0
The number of conventional zones to create when block device is zoned. If
zone_nr_conv >= nr_zones, it will be reduced to nr_zones - 1.
+
+copy_max_bytes=[size in bytes]: Default: COPY_MAX_BYTES
+ A module and configfs parameter which can be used to set hardware/driver
+ supported maximum copy offload limit.
+ COPY_MAX_BYTES(=128MB at present) is defined in fs.h
diff --git a/drivers/block/null_blk/main.c b/drivers/block/null_blk/main.c
index c56bef0edc5e..22361f4d5f71 100644
--- a/drivers/block/null_blk/main.c
+++ b/drivers/block/null_blk/main.c
@@ -160,6 +160,10 @@ static int g_max_sectors;
module_param_named(max_sectors, g_max_sectors, int, 0444);
MODULE_PARM_DESC(max_sectors, "Maximum size of a command (in 512B sectors)");

+static unsigned long g_copy_max_bytes = BLK_COPY_MAX_BYTES;
+module_param_named(copy_max_bytes, g_copy_max_bytes, ulong, 0444);
+MODULE_PARM_DESC(copy_max_bytes, "Maximum size of a copy command (in bytes)");
+
static unsigned int nr_devices = 1;
module_param(nr_devices, uint, 0444);
MODULE_PARM_DESC(nr_devices, "Number of devices to register");
@@ -412,6 +416,7 @@ NULLB_DEVICE_ATTR(home_node, uint, NULL);
NULLB_DEVICE_ATTR(queue_mode, uint, NULL);
NULLB_DEVICE_ATTR(blocksize, uint, NULL);
NULLB_DEVICE_ATTR(max_sectors, uint, NULL);
+NULLB_DEVICE_ATTR(copy_max_bytes, uint, NULL);
NULLB_DEVICE_ATTR(irqmode, uint, NULL);
NULLB_DEVICE_ATTR(hw_queue_depth, uint, NULL);
NULLB_DEVICE_ATTR(index, uint, NULL);
@@ -553,6 +558,7 @@ static struct configfs_attribute *nullb_device_attrs[] = {
&nullb_device_attr_queue_mode,
&nullb_device_attr_blocksize,
&nullb_device_attr_max_sectors,
+ &nullb_device_attr_copy_max_bytes,
&nullb_device_attr_irqmode,
&nullb_device_attr_hw_queue_depth,
&nullb_device_attr_index,
@@ -659,7 +665,8 @@ static ssize_t memb_group_features_show(struct config_item *item, char *page)
"poll_queues,power,queue_mode,shared_tag_bitmap,size,"
"submit_queues,use_per_node_hctx,virt_boundary,zoned,"
"zone_capacity,zone_max_active,zone_max_open,"
- "zone_nr_conv,zone_offline,zone_readonly,zone_size\n");
+ "zone_nr_conv,zone_offline,zone_readonly,zone_size,"
+ "copy_max_bytes\n");
}

CONFIGFS_ATTR_RO(memb_group_, features);
@@ -725,6 +732,7 @@ static struct nullb_device *null_alloc_dev(void)
dev->queue_mode = g_queue_mode;
dev->blocksize = g_bs;
dev->max_sectors = g_max_sectors;
+ dev->copy_max_bytes = g_copy_max_bytes;
dev->irqmode = g_irqmode;
dev->hw_queue_depth = g_hw_queue_depth;
dev->blocking = g_blocking;
@@ -1274,6 +1282,81 @@ static int null_transfer(struct nullb *nullb, struct page *page,
return err;
}

+static inline int nullb_setup_copy(struct nullb *nullb, struct request *req,
+ bool is_fua)
+{
+ sector_t sector_in = 0, sector_out = 0;
+ loff_t offset_in, offset_out;
+ void *in, *out;
+ ssize_t chunk, rem = 0;
+ struct bio *bio;
+ struct nullb_page *t_page_in, *t_page_out;
+ u16 seg = 1;
+ int status = -EIO;
+
+ if (blk_rq_nr_phys_segments(req) != BLK_COPY_MAX_SEGMENTS)
+ return status;
+
+ /*
+ * First bio contains information about source and last bio contains
+ * information about destination.
+ */
+ __rq_for_each_bio(bio, req) {
+ if (seg == blk_rq_nr_phys_segments(req)) {
+ sector_out = bio->bi_iter.bi_sector;
+ if (rem != bio->bi_iter.bi_size)
+ return status;
+ } else {
+ sector_in = bio->bi_iter.bi_sector;
+ rem = bio->bi_iter.bi_size;
+ }
+ seg++;
+ }
+
+ trace_nullb_copy_op(req, sector_out << SECTOR_SHIFT,
+ sector_in << SECTOR_SHIFT, rem);
+
+ spin_lock_irq(&nullb->lock);
+ while (rem > 0) {
+ chunk = min_t(size_t, nullb->dev->blocksize, rem);
+ offset_in = (sector_in & SECTOR_MASK) << SECTOR_SHIFT;
+ offset_out = (sector_out & SECTOR_MASK) << SECTOR_SHIFT;
+
+ if (null_cache_active(nullb) && !is_fua)
+ null_make_cache_space(nullb, PAGE_SIZE);
+
+ t_page_in = null_lookup_page(nullb, sector_in, false,
+ !null_cache_active(nullb));
+ if (!t_page_in)
+ goto err;
+ t_page_out = null_insert_page(nullb, sector_out,
+ !null_cache_active(nullb) ||
+ is_fua);
+ if (!t_page_out)
+ goto err;
+
+ in = kmap_local_page(t_page_in->page);
+ out = kmap_local_page(t_page_out->page);
+
+ memcpy(out + offset_out, in + offset_in, chunk);
+ kunmap_local(out);
+ kunmap_local(in);
+ __set_bit(sector_out & SECTOR_MASK, t_page_out->bitmap);
+
+ if (is_fua)
+ null_free_sector(nullb, sector_out, true);
+
+ rem -= chunk;
+ sector_in += chunk >> SECTOR_SHIFT;
+ sector_out += chunk >> SECTOR_SHIFT;
+ }
+
+ status = 0;
+err:
+ spin_unlock_irq(&nullb->lock);
+ return status;
+}
+
static int null_handle_rq(struct nullb_cmd *cmd)
{
struct request *rq = cmd->rq;
@@ -1283,13 +1366,16 @@ static int null_handle_rq(struct nullb_cmd *cmd)
sector_t sector = blk_rq_pos(rq);
struct req_iterator iter;
struct bio_vec bvec;
+ bool fua = rq->cmd_flags & REQ_FUA;
+
+ if (op_is_copy(req_op(rq)))
+ return nullb_setup_copy(nullb, rq, fua);

spin_lock_irq(&nullb->lock);
rq_for_each_segment(bvec, rq, iter) {
len = bvec.bv_len;
err = null_transfer(nullb, bvec.bv_page, len, bvec.bv_offset,
- op_is_write(req_op(rq)), sector,
- rq->cmd_flags & REQ_FUA);
+ op_is_write(req_op(rq)), sector, fua);
if (err) {
spin_unlock_irq(&nullb->lock);
return err;
@@ -2053,6 +2139,9 @@ static int null_validate_conf(struct nullb_device *dev)
return -EINVAL;
}

+ if (dev->queue_mode == NULL_Q_BIO)
+ dev->copy_max_bytes = 0;
+
return 0;
}

@@ -2172,6 +2261,8 @@ static int null_add_dev(struct nullb_device *dev)
dev->max_sectors = queue_max_hw_sectors(nullb->q);
dev->max_sectors = min(dev->max_sectors, BLK_DEF_MAX_SECTORS);
blk_queue_max_hw_sectors(nullb->q, dev->max_sectors);
+ blk_queue_max_copy_hw_sectors(nullb->q,
+ dev->copy_max_bytes >> SECTOR_SHIFT);

if (dev->virt_boundary)
blk_queue_virt_boundary(nullb->q, PAGE_SIZE - 1);
diff --git a/drivers/block/null_blk/null_blk.h b/drivers/block/null_blk/null_blk.h
index 929f659dd255..e82e53a2e2df 100644
--- a/drivers/block/null_blk/null_blk.h
+++ b/drivers/block/null_blk/null_blk.h
@@ -107,6 +107,7 @@ struct nullb_device {
unsigned int queue_mode; /* block interface */
unsigned int blocksize; /* block size */
unsigned int max_sectors; /* Max sectors per command */
+ unsigned long copy_max_bytes; /* Max copy offload length in bytes */
unsigned int irqmode; /* IRQ completion handler */
unsigned int hw_queue_depth; /* queue depth */
unsigned int index; /* index of the disk, only valid with a disk */
diff --git a/drivers/block/null_blk/trace.h b/drivers/block/null_blk/trace.h
index 91446c34eac2..2f2c1d1c2b48 100644
--- a/drivers/block/null_blk/trace.h
+++ b/drivers/block/null_blk/trace.h
@@ -70,6 +70,29 @@ TRACE_EVENT(nullb_report_zones,
);
#endif /* CONFIG_BLK_DEV_ZONED */

+TRACE_EVENT(nullb_copy_op,
+ TP_PROTO(struct request *req,
+ sector_t dst, sector_t src, size_t len),
+ TP_ARGS(req, dst, src, len),
+ TP_STRUCT__entry(
+ __array(char, disk, DISK_NAME_LEN)
+ __field(enum req_op, op)
+ __field(sector_t, dst)
+ __field(sector_t, src)
+ __field(size_t, len)
+ ),
+ TP_fast_assign(
+ __entry->op = req_op(req);
+ __assign_disk_name(__entry->disk, req->q->disk);
+ __entry->dst = dst;
+ __entry->src = src;
+ __entry->len = len;
+ ),
+ TP_printk("%s req=%-15s: dst=%llu, src=%llu, len=%lu",
+ __print_disk_name(__entry->disk),
+ blk_op_str(__entry->op),
+ __entry->dst, __entry->src, __entry->len)
+);
#endif /* _TRACE_NULLB_H */

#undef TRACE_INCLUDE_PATH
--
2.35.1.500.gb896f729e2

2023-09-20 20:12:35

by Nitesh Shetty

Subject: [PATCH v16 05/12] fs/read_write: Enable copy_file_range for block device.

From: Anuj Gupta <[email protected]>

This is a prep patch. Allow copy_file_range to work for block devices.
Relaxing generic_copy_file_checks allows us to reuse the existing
infrastructure instead of adding a new user interface for block copy offload.
Change generic_copy_file_checks to use ->f_mapping->host for both inode_in
and inode_out. Allow block devices in generic_file_rw_checks.

Reviewed-by: Hannes Reinecke <[email protected]>
Signed-off-by: Anuj Gupta <[email protected]>
Signed-off-by: Nitesh Shetty <[email protected]>
---
fs/read_write.c | 8 +++++---
1 file changed, 5 insertions(+), 3 deletions(-)

diff --git a/fs/read_write.c b/fs/read_write.c
index 4771701c896b..f0f52bf48f57 100644
--- a/fs/read_write.c
+++ b/fs/read_write.c
@@ -1405,8 +1405,8 @@ static int generic_copy_file_checks(struct file *file_in, loff_t pos_in,
struct file *file_out, loff_t pos_out,
size_t *req_count, unsigned int flags)
{
- struct inode *inode_in = file_inode(file_in);
- struct inode *inode_out = file_inode(file_out);
+ struct inode *inode_in = file_in->f_mapping->host;
+ struct inode *inode_out = file_out->f_mapping->host;
uint64_t count = *req_count;
loff_t size_in;
int ret;
@@ -1708,7 +1708,9 @@ int generic_file_rw_checks(struct file *file_in, struct file *file_out)
/* Don't copy dirs, pipes, sockets... */
if (S_ISDIR(inode_in->i_mode) || S_ISDIR(inode_out->i_mode))
return -EISDIR;
- if (!S_ISREG(inode_in->i_mode) || !S_ISREG(inode_out->i_mode))
+ if (!S_ISREG(inode_in->i_mode) && !S_ISBLK(inode_in->i_mode))
+ return -EINVAL;
+ if ((inode_in->i_mode & S_IFMT) != (inode_out->i_mode & S_IFMT))
return -EINVAL;

if (!(file_in->f_mode & FMODE_READ) ||
--
2.35.1.500.gb896f729e2

2023-09-20 20:51:14

by Nitesh Shetty

Subject: [PATCH v16 10/12] dm: Enable copy offload for dm-linear target

Set the copy_offload_supported flag to enable offload.

Reviewed-by: Hannes Reinecke <[email protected]>
Signed-off-by: Nitesh Shetty <[email protected]>
---
drivers/md/dm-linear.c | 1 +
1 file changed, 1 insertion(+)

diff --git a/drivers/md/dm-linear.c b/drivers/md/dm-linear.c
index f4448d520ee9..1d1ee30bbefb 100644
--- a/drivers/md/dm-linear.c
+++ b/drivers/md/dm-linear.c
@@ -62,6 +62,7 @@ static int linear_ctr(struct dm_target *ti, unsigned int argc, char **argv)
ti->num_discard_bios = 1;
ti->num_secure_erase_bios = 1;
ti->num_write_zeroes_bios = 1;
+ ti->copy_offload_supported = 1;
ti->private = lc;
return 0;

--
2.35.1.500.gb896f729e2

2023-09-21 07:20:40

by Nitesh Shetty

Subject: [PATCH v16 03/12] block: add copy offload support

Introduce blkdev_copy_offload to perform copy offload.
A REQ_OP_COPY_SRC bio carrying the source info is issued while holding a
plug. It flows down to the request layer, where a request is formed that
waits for the dst bio to arrive.
A REQ_OP_COPY_DST bio carrying the destination info is then issued; it
reaches the request layer and merges with the src request.
If, for any reason, a request reaches the driver with only one of the
src/dst bios, the copy offload is failed.

Larger copies are divided based on the max_copy_sectors limit.

Reviewed-by: Hannes Reinecke <[email protected]>
Signed-off-by: Anuj Gupta <[email protected]>
Signed-off-by: Nitesh Shetty <[email protected]>
---
block/blk-lib.c | 201 +++++++++++++++++++++++++++++++++++++++++
include/linux/blkdev.h | 4 +
2 files changed, 205 insertions(+)

diff --git a/block/blk-lib.c b/block/blk-lib.c
index e59c3069e835..50d10fa3c4c5 100644
--- a/block/blk-lib.c
+++ b/block/blk-lib.c
@@ -10,6 +10,22 @@

#include "blk.h"

+/* Keeps track of all outstanding copy IO */
+struct blkdev_copy_io {
+ atomic_t refcount;
+ ssize_t copied;
+ int status;
+ struct task_struct *waiter;
+ void (*endio)(void *private, int status, ssize_t copied);
+ void *private;
+};
+
+/* Keeps track of single outstanding copy offload IO */
+struct blkdev_copy_offload_io {
+ struct blkdev_copy_io *cio;
+ loff_t offset;
+};
+
static sector_t bio_discard_limit(struct block_device *bdev, sector_t sector)
{
unsigned int discard_granularity = bdev_discard_granularity(bdev);
@@ -115,6 +131,191 @@ int blkdev_issue_discard(struct block_device *bdev, sector_t sector,
}
EXPORT_SYMBOL(blkdev_issue_discard);

+static inline ssize_t blkdev_copy_sanity_check(struct block_device *bdev_in,
+ loff_t pos_in,
+ struct block_device *bdev_out,
+ loff_t pos_out, size_t len)
+{
+ unsigned int align = max(bdev_logical_block_size(bdev_out),
+ bdev_logical_block_size(bdev_in)) - 1;
+
+ if ((pos_in & align) || (pos_out & align) || (len & align) || !len ||
+ len >= BLK_COPY_MAX_BYTES)
+ return -EINVAL;
+
+ return 0;
+}
+
+static inline void blkdev_copy_endio(struct blkdev_copy_io *cio)
+{
+ if (cio->endio) {
+ cio->endio(cio->private, cio->status, cio->copied);
+ kfree(cio);
+ } else {
+ struct task_struct *waiter = cio->waiter;
+
+ WRITE_ONCE(cio->waiter, NULL);
+ blk_wake_io_task(waiter);
+ }
+}
+
+/*
+ * This must only be called once all bios have been issued so that the refcount
+ * can only decrease. This just waits for all bios to complete.
+ * Returns the length of bytes copied or error
+ */
+static ssize_t blkdev_copy_wait_io_completion(struct blkdev_copy_io *cio)
+{
+ ssize_t ret;
+
+ for (;;) {
+ __set_current_state(TASK_UNINTERRUPTIBLE);
+ if (!READ_ONCE(cio->waiter))
+ break;
+ blk_io_schedule();
+ }
+ __set_current_state(TASK_RUNNING);
+ ret = cio->copied;
+ kfree(cio);
+
+ return ret;
+}
+
+static void blkdev_copy_offload_dst_endio(struct bio *bio)
+{
+ struct blkdev_copy_offload_io *offload_io = bio->bi_private;
+ struct blkdev_copy_io *cio = offload_io->cio;
+
+ if (bio->bi_status) {
+ cio->copied = min_t(ssize_t, offload_io->offset, cio->copied);
+ if (!cio->status)
+ cio->status = blk_status_to_errno(bio->bi_status);
+ }
+ bio_put(bio);
+
+ if (atomic_dec_and_test(&cio->refcount))
+ blkdev_copy_endio(cio);
+}
+
+/*
+ * @bdev: block device
+ * @pos_in: source offset
+ * @pos_out: destination offset
+ * @len: length in bytes to be copied
+ * @endio: endio function to be called on completion of copy operation,
+ * for synchronous operation this should be NULL
+ * @private: endio function will be called with this private data,
+ * for synchronous operation this should be NULL
+ * @gfp_mask: memory allocation flags (for bio_alloc)
+ *
+ * For synchronous operation returns the length of bytes copied or error
+ * For asynchronous operation returns -EIOCBQUEUED or error
+ *
+ * Description:
+ * Copy source offset to destination offset within block device, using
+ * device's native copy offload feature.
+ * We perform copy operation using 2 bio's.
+ * 1. We take a plug and send a REQ_OP_COPY_SRC bio along with source
+ * sector and length. Once this bio reaches request layer, we form a
+ * request and wait for dst bio to arrive.
+ * 2. We issue REQ_OP_COPY_DST bio along with destination sector, length.
+ * Once this bio reaches request layer and find a request with previously
+ * sent source info we merge the destination bio and return.
+ * 3. Release the plug and request is sent to driver
+ * This design works only for drivers with request queue.
+ */
+ssize_t blkdev_copy_offload(struct block_device *bdev, loff_t pos_in,
+ loff_t pos_out, size_t len,
+ void (*endio)(void *, int, ssize_t),
+ void *private, gfp_t gfp)
+{
+ struct blkdev_copy_io *cio;
+ struct blkdev_copy_offload_io *offload_io;
+ struct bio *src_bio, *dst_bio;
+ ssize_t rem, chunk, ret;
+ ssize_t max_copy_bytes = bdev_max_copy_sectors(bdev) << SECTOR_SHIFT;
+ struct blk_plug plug;
+
+ if (!max_copy_bytes)
+ return -EOPNOTSUPP;
+
+ ret = blkdev_copy_sanity_check(bdev, pos_in, bdev, pos_out, len);
+ if (ret)
+ return ret;
+
+ cio = kzalloc(sizeof(*cio), GFP_KERNEL);
+ if (!cio)
+ return -ENOMEM;
+ atomic_set(&cio->refcount, 1);
+ cio->waiter = current;
+ cio->endio = endio;
+ cio->private = private;
+
+ /*
+ * If there is a error, copied will be set to least successfully
+ * completed copied length
+ */
+ cio->copied = len;
+ for (rem = len; rem > 0; rem -= chunk) {
+ chunk = min(rem, max_copy_bytes);
+
+ offload_io = kzalloc(sizeof(*offload_io), GFP_KERNEL);
+ if (!offload_io)
+ goto err_free_cio;
+ offload_io->cio = cio;
+ /*
+ * For partial completion, we use offload_io->offset to truncate
+ * successful copy length
+ */
+ offload_io->offset = len - rem;
+
+ src_bio = bio_alloc(bdev, 0, REQ_OP_COPY_SRC, gfp);
+ if (!src_bio)
+ goto err_free_offload_io;
+ src_bio->bi_iter.bi_size = chunk;
+ src_bio->bi_iter.bi_sector = pos_in >> SECTOR_SHIFT;
+
+ blk_start_plug(&plug);
+ dst_bio = blk_next_bio(src_bio, bdev, 0, REQ_OP_COPY_DST, gfp);
+ if (!dst_bio)
+ goto err_free_src_bio;
+ dst_bio->bi_iter.bi_size = chunk;
+ dst_bio->bi_iter.bi_sector = pos_out >> SECTOR_SHIFT;
+ dst_bio->bi_end_io = blkdev_copy_offload_dst_endio;
+ dst_bio->bi_private = offload_io;
+
+ atomic_inc(&cio->refcount);
+ submit_bio(dst_bio);
+ blk_finish_plug(&plug);
+ pos_in += chunk;
+ pos_out += chunk;
+ }
+
+ if (atomic_dec_and_test(&cio->refcount))
+ blkdev_copy_endio(cio);
+ if (cio->endio)
+ return -EIOCBQUEUED;
+
+ return blkdev_copy_wait_io_completion(cio);
+
+err_free_src_bio:
+ bio_put(src_bio);
+err_free_offload_io:
+ kfree(offload_io);
+err_free_cio:
+ cio->copied = min_t(ssize_t, cio->copied, (len - rem));
+ cio->status = -ENOMEM;
+ if (rem == len) {
+ kfree(cio);
+ return cio->status;
+ }
+ if (cio->endio)
+ return cio->status;
+
+ return blkdev_copy_wait_io_completion(cio);
+}
+EXPORT_SYMBOL_GPL(blkdev_copy_offload);
+
static int __blkdev_issue_write_zeroes(struct block_device *bdev,
sector_t sector, sector_t nr_sects, gfp_t gfp_mask,
struct bio **biop, unsigned flags)
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 7548f1685ee9..5405499bcf22 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -1042,6 +1042,10 @@ int __blkdev_issue_discard(struct block_device *bdev, sector_t sector,
sector_t nr_sects, gfp_t gfp_mask, struct bio **biop);
int blkdev_issue_secure_erase(struct block_device *bdev, sector_t sector,
sector_t nr_sects, gfp_t gfp);
+ssize_t blkdev_copy_offload(struct block_device *bdev, loff_t pos_in,
+ loff_t pos_out, size_t len,
+ void (*endio)(void *, int, ssize_t),
+ void *private, gfp_t gfp_mask);

#define BLKDEV_ZERO_NOUNMAP (1 << 0) /* do not free blocks */
#define BLKDEV_ZERO_NOFALLBACK (1 << 1) /* don't write explicit zeroes */
--
2.35.1.500.gb896f729e2

2023-09-22 13:11:22

by Jinyoung Choi

Subject: RE: [PATCH v16 04/12] block: add emulation for copy

> +static void blkdev_copy_emulation_work(struct work_struct *work)
> +{
> +        struct blkdev_copy_emulation_io *emulation_io = container_of(work,
> +                        struct blkdev_copy_emulation_io, emulation_work);
> +        struct blkdev_copy_io *cio = emulation_io->cio;
> +        struct bio *read_bio, *write_bio;
> +        loff_t pos_in = emulation_io->pos_in, pos_out = emulation_io->pos_out;
> +        ssize_t rem, chunk;
> +        int ret = 0;
> +
> +        for (rem = emulation_io->len; rem > 0; rem -= chunk) {
> +                chunk = min_t(int, emulation_io->buf_len, rem);
> +
> +                read_bio = bio_map_buf(emulation_io->buf,
> +                                       emulation_io->buf_len,
> +                                       emulation_io->gfp);
> +                if (IS_ERR(read_bio)) {
> +                        ret = PTR_ERR(read_bio);
> +                        break;
> +                }
> +                read_bio->bi_opf = REQ_OP_READ | REQ_SYNC;
> +                bio_set_dev(read_bio, emulation_io->bdev_in);
> +                read_bio->bi_iter.bi_sector = pos_in >> SECTOR_SHIFT;
> +                read_bio->bi_iter.bi_size = chunk;
> +                ret = submit_bio_wait(read_bio);
> +                kfree(read_bio);

Hi, Nitesh,

blk_mq_map_bio_put(read_bio)?
or bio_uninit(read_bio); kfree(read_bio)?

> +                if (ret)
> +                        break;
> +
> +                write_bio = bio_map_buf(emulation_io->buf,
> +                                        emulation_io->buf_len,
> +                                        emulation_io->gfp);
> +                if (IS_ERR(write_bio)) {
> +                        ret = PTR_ERR(write_bio);
> +                        break;
> +                }
> +                write_bio->bi_opf = REQ_OP_WRITE | REQ_SYNC;
> +                bio_set_dev(write_bio, emulation_io->bdev_out);
> +                write_bio->bi_iter.bi_sector = pos_out >> SECTOR_SHIFT;
> +                write_bio->bi_iter.bi_size = chunk;
> +                ret = submit_bio_wait(write_bio);
> +                kfree(write_bio);

blk_mq_map_bio_put(write_bio) ?
or bio_uninit(write_bio); kfree(write_bio)?

hmm...
It continuously allocates and releases memory for the bio.
Why don't you just allocate the bio outside the loop and reuse it?

> +                if (ret)
> +                        break;
> +
> +                pos_in += chunk;
> +                pos_out += chunk;
> +        }
> +        cio->status = ret;
> +        kvfree(emulation_io->buf);
> +        kfree(emulation_io);

I have not usually seen an implementation where a worker releases its
own memory while it is still running. (I don't know what's right. :) )

Since blkdev_copy_emulation() allocates memory for the emulation
and waits for it to be completed, wouldn't it be better to proceed
with the memory release for it in the same context?

That is, IMO, wouldn't it be better to free the memory related to
emulation in blkdev_copy_wait_io_completion()?

Best Regards,
Jinyoung.




2023-09-22 20:03:13

by Jinyoung Choi

Subject: RE: [PATCH v16 03/12] block: add copy offload support

> +/*
> + * This must only be called once all bios have been issued so that the refcount
> + * can only decrease. This just waits for all bios to complete.
> + * Returns the length of bytes copied or error
> + */
> +static ssize_t blkdev_copy_wait_io_completion(struct blkdev_copy_io *cio)

Hi, Nitesh,

Don't functions waiting for completion usually have 'wait_for_completion_' in their names?
(e.g. blkdev_copy_wait_for_completion_io)


> +ssize_t blkdev_copy_offload(struct block_device *bdev, loff_t pos_in,
> +                            loff_t pos_out, size_t len,
> +                            void (*endio)(void *, int, ssize_t),
> +                            void *private, gfp_t gfp)
> +{
> +        struct blkdev_copy_io *cio;
> +        struct blkdev_copy_offload_io *offload_io;
> +        struct bio *src_bio, *dst_bio;
> +        ssize_t rem, chunk, ret;
> +        ssize_t max_copy_bytes = bdev_max_copy_sectors(bdev) << SECTOR_SHIFT;

Wouldn't it be better to use size_t for variables that are not returned?
Values such as chunk and max_copy_bytes may be defined as 'unsigned'.

> +        struct blk_plug plug;
> +
> +        if (!max_copy_bytes)
> +                return -EOPNOTSUPP;
> +
> +        ret = blkdev_copy_sanity_check(bdev, pos_in, bdev, pos_out, len);
> +        if (ret)
> +                return ret;
> +
> +        cio = kzalloc(sizeof(*cio), GFP_KERNEL);
> +        if (!cio)
> +                return -ENOMEM;
> +        atomic_set(&cio->refcount, 1);
> +        cio->waiter = current;
> +        cio->endio = endio;
> +        cio->private = private;
> +
> +        /*
> +         * If there is a error, copied will be set to least successfully
> +         * completed copied length
> +         */
> +        cio->copied = len;
> +        for (rem = len; rem > 0; rem -= chunk) {
> +                chunk = min(rem, max_copy_bytes);
> +
> +                offload_io = kzalloc(sizeof(*offload_io), GFP_KERNEL);
> +                if (!offload_io)
> +                        goto err_free_cio;
> +                offload_io->cio = cio;
> +                /*
> +                 * For partial completion, we use offload_io->offset to truncate
> +                 * successful copy length
> +                 */
> +                offload_io->offset = len - rem;
> +
> +                src_bio = bio_alloc(bdev, 0, REQ_OP_COPY_SRC, gfp);
> +                if (!src_bio)
> +                        goto err_free_offload_io;
> +                src_bio->bi_iter.bi_size = chunk;
> +                src_bio->bi_iter.bi_sector = pos_in >> SECTOR_SHIFT;
> +
> +                blk_start_plug(&plug);
> +                dst_bio = blk_next_bio(src_bio, bdev, 0, REQ_OP_COPY_DST, gfp);
> +                if (!dst_bio)
> +                        goto err_free_src_bio;
> +                dst_bio->bi_iter.bi_size = chunk;
> +                dst_bio->bi_iter.bi_sector = pos_out >> SECTOR_SHIFT;
> +                dst_bio->bi_end_io = blkdev_copy_offload_dst_endio;
> +                dst_bio->bi_private = offload_io;
> +
> +                atomic_inc(&cio->refcount);
> +                submit_bio(dst_bio);
> +                blk_finish_plug(&plug);
> +                pos_in += chunk;
> +                pos_out += chunk;
> +        }
> +
> +        if (atomic_dec_and_test(&cio->refcount))
> +                blkdev_copy_endio(cio);
> +        if (cio->endio)

Isn't it a problem if the memory of cio is released in blkdev_copy_endio()?
It is unlikely to occur if there is inflight I/O earlier, but it would be
nice to adjust the code flow to account for this.


> +                return -EIOCBQUEUED;
> +
> +        return blkdev_copy_wait_io_completion(cio);
> +
> +err_free_src_bio:
> +        bio_put(src_bio);
> +err_free_offload_io:
> +        kfree(offload_io);
> +err_free_cio:
> +        cio->copied = min_t(ssize_t, cio->copied, (len - rem));
> +        cio->status = -ENOMEM;
> +        if (rem == len) {
> +                kfree(cio);
> +                return cio->status;

Isn't it a problem that the memory of cio is released before cio->status is returned?

Best Regards,
Jinyoung.


2023-10-02 04:26:24

by kernel test robot

Subject: Re: [PATCH v16 08/12] nvmet: add copy command support for bdev and file ns

Hi Nitesh,

kernel test robot noticed the following build warnings:

[auto build test WARNING on 7fc7222d9680366edeecc219c21ca96310bdbc10]

url: https://github.com/intel-lab-lkp/linux/commits/Nitesh-Shetty/block-Introduce-queue-limits-and-sysfs-for-copy-offload-support/20230920-170132
base: 7fc7222d9680366edeecc219c21ca96310bdbc10
patch link: https://lore.kernel.org/r/20230920080756.11919-9-nj.shetty%40samsung.com
patch subject: [PATCH v16 08/12] nvmet: add copy command support for bdev and file ns
config: i386-randconfig-061-20231002 (https://download.01.org/0day-ci/archive/20231002/[email protected]/config)
compiler: gcc-12 (Debian 12.2.0-14) 12.2.0
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20231002/[email protected]/reproduce)

If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <[email protected]>
| Closes: https://lore.kernel.org/oe-kbuild-all/[email protected]/

sparse warnings: (new ones prefixed by >>)
>> drivers/nvme/target/io-cmd-bdev.c:498:30: sparse: sparse: restricted __le16 degrades to integer
>> drivers/nvme/target/io-cmd-bdev.c:514:41: sparse: sparse: incorrect type in argument 2 (different base types) @@ expected restricted blk_status_t [usertype] blk_sts @@ got int [assigned] [usertype] ret @@
drivers/nvme/target/io-cmd-bdev.c:514:41: sparse: expected restricted blk_status_t [usertype] blk_sts
drivers/nvme/target/io-cmd-bdev.c:514:41: sparse: got int [assigned] [usertype] ret

vim +498 drivers/nvme/target/io-cmd-bdev.c

478
479 /*
480 * At present we handle only one range entry, since copy offload is aligned with
481 * copy_file_range, only one entry is passed from block layer.
482 */
483 static void nvmet_bdev_execute_copy(struct nvmet_req *rq)
484 {
485 struct nvme_copy_range range;
486 struct nvme_command *cmd = rq->cmd;
487 ssize_t ret;
488 off_t dst, src;
489
490 u16 status;
491
492 status = nvmet_copy_from_sgl(rq, 0, &range, sizeof(range));
493 if (status)
494 goto err_rq_complete;
495
496 dst = le64_to_cpu(cmd->copy.sdlba) << rq->ns->blksize_shift;
497 src = le64_to_cpu(range.slba) << rq->ns->blksize_shift;
> 498 rq->copy_len = (range.nlb + 1) << rq->ns->blksize_shift;
499
500 if (bdev_max_copy_sectors(rq->ns->bdev)) {
501 ret = blkdev_copy_offload(rq->ns->bdev, dst, src, rq->copy_len,
502 nvmet_bdev_copy_endio,
503 (void *)rq, GFP_KERNEL);
504 } else {
505 ret = blkdev_copy_emulation(rq->ns->bdev, dst,
506 rq->ns->bdev, src, rq->copy_len,
507 nvmet_bdev_copy_endio,
508 (void *)rq, GFP_KERNEL);
509 }
510 if (ret == -EIOCBQUEUED)
511 return;
512
513 rq->cqe->result.u32 = cpu_to_le32(0);
> 514 status = blk_to_nvme_status(rq, ret);
515 err_rq_complete:
516 nvmet_req_complete(rq, status);
517 }
518

--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki