Hi all.
I am happy to offer an improved version of the Block Devices Snapshots
Module. It allows creating non-persistent snapshots of any block device.
The main purpose of such snapshots is to provide backups of block devices.
See more in Documentation/block/blksnap.rst.
The Block Device Filtering Mechanism is added to the block layer. It
allows attaching block device filters to the block layer and detaching
them from it. Filters make it possible to extend the functionality of
the block layer.
See more in Documentation/block/blkfilter.rst.
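As a user-space illustration, attaching a filter by name could look
roughly like the sketch below. The ioctl code and the struct
blkfilter_name layout are assumptions modelled on the proposed
include/uapi/linux/blk-filter.h and are illustrative only:

```c
/* Sketch only: BLKFILTER_ATTACH and struct blkfilter_name below are
 * assumptions modelled on the proposed UAPI header, not authoritative. */
#include <errno.h>
#include <fcntl.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/types.h>

#define BLKFILTER_NAME_LENGTH 32

struct blkfilter_name {
	__u8 name[BLKFILTER_NAME_LENGTH];
};

/* Assumed ioctl code; the real value comes from the patched UAPI header. */
#define BLKFILTER_ATTACH _IOWR(0x12, 140, struct blkfilter_name)

/* Fill the argument structure, keeping the name NUL-terminated. */
static void blkfilter_name_init(struct blkfilter_name *arg, const char *name)
{
	memset(arg, 0, sizeof(*arg));
	strncpy((char *)arg->name, name, sizeof(arg->name) - 1);
}

/* Attach the named filter to a block device; 0 on success, -errno on error. */
static int blkfilter_attach(const char *devpath, const char *name)
{
	struct blkfilter_name arg;
	int fd, ret = 0;

	blkfilter_name_init(&arg, name);
	fd = open(devpath, O_RDWR);
	if (fd < 0)
		return -errno;
	if (ioctl(fd, BLKFILTER_ATTACH, &arg))
		ret = -errno;
	close(fd);
	return ret;
}
```

Detaching would use BLKFILTER_DETACH with the same argument structure.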
The tool, library and tests for working with blksnap can be found on github.
Link: https://github.com/veeam/blksnap/tree/stable-v2.0
There are a few changes in this patch version. They take into account
the experience of using the out-of-tree version of the blksnap module on
production servers.
v5 changes:
- Rebased onto the "for-6.5/block" branch of "kernel/git/axboe/linux-block.git".
Link: https://git.kernel.org/pub/scm/linux/kernel/git/axboe/linux-block.git/log/?h=for-6.5/block
v4 changes:
- Structures describing the state of chunks are now allocated
  dynamically. This reduces memory consumption, since a struct chunk is
  allocated only for those blocks whose snapshot image state differs
  from the original block device.
- The algorithm for calculating the chunk size depending on the size of
  the block device has been changed. For large block devices, a larger
  number of smaller chunks can now be allocated.
- For block devices, a 'filter' file has been added to /sys/block/<device>.
It displays the name of the filter that is attached to the block device.
- Fixed the missing protection against adding a block device to a
  snapshot twice.
- Fixed a bug in the algorithm of allocating the next bio for a chunk.
  This problem occurred on large disks, for which a chunk consists of at
  least two bios.
- The ownership mechanism of the diff_area structure has been changed.
This fixed the error of prematurely releasing the diff_area structure
when destroying the snapshot.
- Documentation corrected.
- The code now passes the Sparse static analyzer.
- Use the __u64 type instead of pointers in the UAPI.
v3 changes:
- The new block device I/O controls BLKFILTER_ATTACH and
  BLKFILTER_DETACH allow attaching and detaching filters.
- The new block device I/O control BLKFILTER_CTL allows sending commands
  to the attached block device filter.
- The copy-on-write algorithm for processing I/O units has been optimized
and has become asynchronous.
- The snapshot image reading algorithm has been optimized and has become
asynchronous.
- Optimized the finite state machine for processing chunks.
- Fixed a tracking block size calculation bug.
v2 changes:
- Added documentation for the Block Device Filtering Mechanism.
- Added documentation for the Block Devices Snapshots Module (blksnap).
- The MAINTAINERS file has been updated.
- Optimized queue code for snapshot images.
- Fixed comments, log messages and code for better readability.
v1 changes:
- Forgotten "static" declarations have been added.
- The text of the comments has been corrected.
- Only one filter can be attached, since there are no others upstream.
- No additional locks are taken for filter attach/detach.
- blksnap.h moved to include/uapi/.
- #pragma once and commented code removed.
- uuid_t removed from user API.
- Removed default values for module parameters from the configuration file.
- The debugging code for tracking memory leaks has been removed.
- Simplified Makefile.
- Optimized work with large memory buffers; the CBT tables are now in
  virtual memory.
- The allocation code of minor numbers has been optimized.
- The implementation of the snapshot image block device has been
simplified, now it is a bio-based block device.
- Removed initialization of global variables with null values.
- Only one bio is used to copy one chunk.
- Checked on ppc64le.
Thanks to the following people for helping prepare the v4 patch:
- Christoph Hellwig <[email protected]> for his significant contribution
to the project.
- Fabio Fantoni <[email protected]> for his participation in the
project, useful advice and faith in the success of the project.
- Donald Buczek <[email protected]> for researching the module and
  user-space tool. His fresh look revealed a number of flaws.
- Bagas Sanjaya <[email protected]> for comments on the documentation.
Sergei Shtepa (11):
documentation: Block Device Filtering Mechanism
block: Block Device Filtering Mechanism
documentation: Block Devices Snapshots Module
blksnap: header file of the module interface
blksnap: module management interface functions
blksnap: handling and tracking I/O units
blksnap: minimum data storage unit of the original block device
blksnap: difference storage
blksnap: event queue from the difference storage
blksnap: snapshot and snapshot image block device
blksnap: Kconfig and Makefile
Documentation/block/blkfilter.rst | 64 ++++
Documentation/block/blksnap.rst | 345 +++++++++++++++++
Documentation/block/index.rst | 2 +
MAINTAINERS | 17 +
block/Makefile | 3 +-
block/bdev.c | 1 +
block/blk-core.c | 27 ++
block/blk-filter.c | 213 ++++++++++
block/blk.h | 11 +
block/genhd.c | 10 +
block/ioctl.c | 7 +
block/partitions/core.c | 10 +
drivers/block/Kconfig | 2 +
drivers/block/Makefile | 2 +
drivers/block/blksnap/Kconfig | 12 +
drivers/block/blksnap/Makefile | 15 +
drivers/block/blksnap/cbt_map.c | 227 +++++++++++
drivers/block/blksnap/cbt_map.h | 90 +++++
drivers/block/blksnap/chunk.c | 454 ++++++++++++++++++++++
drivers/block/blksnap/chunk.h | 114 ++++++
drivers/block/blksnap/diff_area.c | 554 +++++++++++++++++++++++++++
drivers/block/blksnap/diff_area.h | 144 +++++++
drivers/block/blksnap/diff_buffer.c | 127 ++++++
drivers/block/blksnap/diff_buffer.h | 37 ++
drivers/block/blksnap/diff_storage.c | 316 +++++++++++++++
drivers/block/blksnap/diff_storage.h | 111 ++++++
drivers/block/blksnap/event_queue.c | 87 +++++
drivers/block/blksnap/event_queue.h | 65 ++++
drivers/block/blksnap/main.c | 483 +++++++++++++++++++++++
drivers/block/blksnap/params.h | 16 +
drivers/block/blksnap/snapimage.c | 124 ++++++
drivers/block/blksnap/snapimage.h | 10 +
drivers/block/blksnap/snapshot.c | 443 +++++++++++++++++++++
drivers/block/blksnap/snapshot.h | 68 ++++
drivers/block/blksnap/tracker.c | 339 ++++++++++++++++
drivers/block/blksnap/tracker.h | 75 ++++
include/linux/blk-filter.h | 51 +++
include/linux/blk_types.h | 2 +
include/linux/blkdev.h | 1 +
include/uapi/linux/blk-filter.h | 35 ++
include/uapi/linux/blksnap.h | 421 ++++++++++++++++++++
include/uapi/linux/fs.h | 3 +
42 files changed, 5137 insertions(+), 1 deletion(-)
create mode 100644 Documentation/block/blkfilter.rst
create mode 100644 Documentation/block/blksnap.rst
create mode 100644 block/blk-filter.c
create mode 100644 drivers/block/blksnap/Kconfig
create mode 100644 drivers/block/blksnap/Makefile
create mode 100644 drivers/block/blksnap/cbt_map.c
create mode 100644 drivers/block/blksnap/cbt_map.h
create mode 100644 drivers/block/blksnap/chunk.c
create mode 100644 drivers/block/blksnap/chunk.h
create mode 100644 drivers/block/blksnap/diff_area.c
create mode 100644 drivers/block/blksnap/diff_area.h
create mode 100644 drivers/block/blksnap/diff_buffer.c
create mode 100644 drivers/block/blksnap/diff_buffer.h
create mode 100644 drivers/block/blksnap/diff_storage.c
create mode 100644 drivers/block/blksnap/diff_storage.h
create mode 100644 drivers/block/blksnap/event_queue.c
create mode 100644 drivers/block/blksnap/event_queue.h
create mode 100644 drivers/block/blksnap/main.c
create mode 100644 drivers/block/blksnap/params.h
create mode 100644 drivers/block/blksnap/snapimage.c
create mode 100644 drivers/block/blksnap/snapimage.h
create mode 100644 drivers/block/blksnap/snapshot.c
create mode 100644 drivers/block/blksnap/snapshot.h
create mode 100644 drivers/block/blksnap/tracker.c
create mode 100644 drivers/block/blksnap/tracker.h
create mode 100644 include/linux/blk-filter.h
create mode 100644 include/uapi/linux/blk-filter.h
create mode 100644 include/uapi/linux/blksnap.h
--
2.20.1
Allows building the module and adds blksnap to the kernel tree.
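With this in place, enabling blksnap reduces to a kernel config fragment
like the following sketch (CONFIG_BLKSNAP is introduced by this patch;
CONFIG_BLK_DEV is the surrounding menu dependency in drivers/block/Kconfig):

```
CONFIG_BLK_DEV=y
CONFIG_BLKSNAP=m
```

A regular "make modules" should then build drivers/block/blksnap/blksnap.ko
from the objects listed in the Makefile.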
Co-developed-by: Christoph Hellwig <[email protected]>
Signed-off-by: Christoph Hellwig <[email protected]>
Signed-off-by: Sergei Shtepa <[email protected]>
---
drivers/block/Kconfig | 2 ++
drivers/block/Makefile | 2 ++
drivers/block/blksnap/Kconfig | 12 ++++++++++++
drivers/block/blksnap/Makefile | 15 +++++++++++++++
4 files changed, 31 insertions(+)
create mode 100644 drivers/block/blksnap/Kconfig
create mode 100644 drivers/block/blksnap/Makefile
diff --git a/drivers/block/Kconfig b/drivers/block/Kconfig
index 5b9d4aaebb81..74d2d55526a3 100644
--- a/drivers/block/Kconfig
+++ b/drivers/block/Kconfig
@@ -404,4 +404,6 @@ config BLKDEV_UBLK_LEGACY_OPCODES
source "drivers/block/rnbd/Kconfig"
+source "drivers/block/blksnap/Kconfig"
+
endif # BLK_DEV
diff --git a/drivers/block/Makefile b/drivers/block/Makefile
index 101612cba303..9a2a9a56a247 100644
--- a/drivers/block/Makefile
+++ b/drivers/block/Makefile
@@ -40,3 +40,5 @@ obj-$(CONFIG_BLK_DEV_NULL_BLK) += null_blk/
obj-$(CONFIG_BLK_DEV_UBLK) += ublk_drv.o
swim_mod-y := swim.o swim_asm.o
+
+obj-$(CONFIG_BLKSNAP) += blksnap/
diff --git a/drivers/block/blksnap/Kconfig b/drivers/block/blksnap/Kconfig
new file mode 100644
index 000000000000..14081359847b
--- /dev/null
+++ b/drivers/block/blksnap/Kconfig
@@ -0,0 +1,12 @@
+# SPDX-License-Identifier: GPL-2.0
+#
+# Block device snapshot module configuration
+#
+
+config BLKSNAP
+ tristate "Block Devices Snapshots Module (blksnap)"
+ help
+	  Allows creating snapshots and tracking block changes for block
+	  devices, designed for backing up simple block devices. Snapshots
+	  are temporary and are released when the backup is completed. Change
+	  block tracking allows creating incremental or differential backups.
diff --git a/drivers/block/blksnap/Makefile b/drivers/block/blksnap/Makefile
new file mode 100644
index 000000000000..8d528b95579a
--- /dev/null
+++ b/drivers/block/blksnap/Makefile
@@ -0,0 +1,15 @@
+# SPDX-License-Identifier: GPL-2.0
+
+blksnap-y := \
+ cbt_map.o \
+ chunk.o \
+ diff_area.o \
+ diff_buffer.o \
+ diff_storage.o \
+ event_queue.o \
+ main.o \
+ snapimage.o \
+ snapshot.o \
+ tracker.o
+
+obj-$(CONFIG_BLKSNAP) += blksnap.o
--
2.20.1
Provides management of difference blocks of block devices: storing
difference blocks and reading them back to get snapshot images.
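The mechanism can be illustrated with a toy user-space model (an
assumption-level sketch, not the kernel code): the original device is an
array split into fixed-size chunks; before a chunk of the original device
is first overwritten, its old contents are copied to the difference
storage, so a snapshot image read prefers the saved copy and falls back
to the original device for untouched chunks.

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

#define DEV_SIZE    64
#define CHUNK_SIZE  16
#define CHUNK_COUNT (DEV_SIZE / CHUNK_SIZE)

/* Toy model of one original device and its difference storage. */
struct toy_snapshot {
	unsigned char origin[DEV_SIZE];   /* the live original device */
	unsigned char *diff[CHUNK_COUNT]; /* per-chunk saved copies ("chunk map") */
};

/* Copy-on-write: save a chunk once, just before its first overwrite. */
static void toy_write(struct toy_snapshot *s, size_t pos, unsigned char byte)
{
	size_t nr = pos / CHUNK_SIZE;

	if (!s->diff[nr]) {
		s->diff[nr] = malloc(CHUNK_SIZE);
		assert(s->diff[nr]);
		memcpy(s->diff[nr], s->origin + nr * CHUNK_SIZE, CHUNK_SIZE);
	}
	s->origin[pos] = byte;
}

/* Snapshot image read: prefer the saved chunk copy, else the origin. */
static unsigned char toy_snap_read(struct toy_snapshot *s, size_t pos)
{
	size_t nr = pos / CHUNK_SIZE;

	return s->diff[nr] ? s->diff[nr][pos % CHUNK_SIZE] : s->origin[pos];
}
```

The kernel implementation additionally makes the copy asynchronous and
bounds the amount of data held in RAM, but the invariant is the same.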
Co-developed-by: Christoph Hellwig <[email protected]>
Signed-off-by: Christoph Hellwig <[email protected]>
Signed-off-by: Sergei Shtepa <[email protected]>
---
drivers/block/blksnap/diff_area.c | 554 +++++++++++++++++++++++++++
drivers/block/blksnap/diff_area.h | 144 +++++++
drivers/block/blksnap/diff_buffer.c | 127 ++++++
drivers/block/blksnap/diff_buffer.h | 37 ++
drivers/block/blksnap/diff_storage.c | 316 +++++++++++++++
drivers/block/blksnap/diff_storage.h | 111 ++++++
6 files changed, 1289 insertions(+)
create mode 100644 drivers/block/blksnap/diff_area.c
create mode 100644 drivers/block/blksnap/diff_area.h
create mode 100644 drivers/block/blksnap/diff_buffer.c
create mode 100644 drivers/block/blksnap/diff_buffer.h
create mode 100644 drivers/block/blksnap/diff_storage.c
create mode 100644 drivers/block/blksnap/diff_storage.h
diff --git a/drivers/block/blksnap/diff_area.c b/drivers/block/blksnap/diff_area.c
new file mode 100644
index 000000000000..169fa003b6d6
--- /dev/null
+++ b/drivers/block/blksnap/diff_area.c
@@ -0,0 +1,554 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (C) 2023 Veeam Software Group GmbH */
+#define pr_fmt(fmt) KBUILD_MODNAME "-diff-area: " fmt
+
+#include <linux/blkdev.h>
+#include <linux/slab.h>
+#include <linux/build_bug.h>
+#include <uapi/linux/blksnap.h>
+#include "chunk.h"
+#include "diff_buffer.h"
+#include "diff_storage.h"
+#include "params.h"
+
+static inline sector_t diff_area_chunk_offset(struct diff_area *diff_area,
+ sector_t sector)
+{
+ return sector & ((1ull << (diff_area->chunk_shift - SECTOR_SHIFT)) - 1);
+}
+
+static inline unsigned long diff_area_chunk_number(struct diff_area *diff_area,
+ sector_t sector)
+{
+ return (unsigned long)(sector >>
+ (diff_area->chunk_shift - SECTOR_SHIFT));
+}
+
+static inline sector_t chunk_sector(struct chunk *chunk)
+{
+ return (sector_t)(chunk->number)
+ << (chunk->diff_area->chunk_shift - SECTOR_SHIFT);
+}
+
+static inline sector_t last_chunk_size(sector_t sector_count, sector_t capacity)
+{
+ sector_t capacity_rounded = round_down(capacity, sector_count);
+
+ if (capacity > capacity_rounded)
+ sector_count = capacity - capacity_rounded;
+
+ return sector_count;
+}
+
+static inline unsigned long long count_by_shift(sector_t capacity,
+ unsigned long long shift)
+{
+ unsigned long long shift_sector = (shift - SECTOR_SHIFT);
+
+ return round_up(capacity, (1ull << shift_sector)) >> shift_sector;
+}
+
+static inline struct chunk *chunk_alloc(struct diff_area *diff_area,
+ unsigned long number)
+{
+ struct chunk *chunk;
+
+ chunk = kzalloc(sizeof(struct chunk), GFP_NOIO);
+ if (!chunk)
+ return NULL;
+
+ INIT_LIST_HEAD(&chunk->link);
+ sema_init(&chunk->lock, 1);
+ chunk->diff_area = NULL;
+ chunk->number = number;
+ chunk->state = CHUNK_ST_NEW;
+
+ chunk->sector_count = diff_area_chunk_sectors(diff_area);
+ /*
+ * The last chunk has a special size.
+ */
+ if (unlikely((number + 1) == diff_area->chunk_count)) {
+ chunk->sector_count = bdev_nr_sectors(diff_area->orig_bdev) -
+ (chunk->sector_count * number);
+ }
+
+ return chunk;
+}
+
+static inline void chunk_free(struct diff_area *diff_area, struct chunk *chunk)
+{
+ down(&chunk->lock);
+ if (chunk->diff_buffer)
+ diff_buffer_release(diff_area, chunk->diff_buffer);
+ diff_storage_free_region(chunk->diff_region);
+ up(&chunk->lock);
+ kfree(chunk);
+}
+
+static void diff_area_calculate_chunk_size(struct diff_area *diff_area)
+{
+ unsigned long count;
+ unsigned long shift = get_chunk_minimum_shift();
+ sector_t capacity;
+ sector_t min_io_sect;
+
+ min_io_sect = (sector_t)(bdev_io_min(diff_area->orig_bdev) >>
+ SECTOR_SHIFT);
+ capacity = bdev_nr_sectors(diff_area->orig_bdev);
+ pr_debug("Minimal IO block %llu sectors\n", min_io_sect);
+ pr_debug("Device capacity %llu sectors\n", capacity);
+
+ count = count_by_shift(capacity, shift);
+ pr_debug("Chunks count %lu\n", count);
+ while ((count > get_chunk_maximum_count()) ||
+ ((1ul << (shift - SECTOR_SHIFT)) < min_io_sect)) {
+ shift++;
+ count = count_by_shift(capacity, shift);
+ pr_debug("Chunks count %lu\n", count);
+ }
+
+ diff_area->chunk_shift = shift;
+ diff_area->chunk_count = (unsigned long)DIV_ROUND_UP_ULL(capacity,
+ (1ul << (shift - SECTOR_SHIFT)));
+}
+
+void diff_area_free(struct kref *kref)
+{
+ unsigned long inx = 0;
+ struct chunk *chunk;
+ struct diff_area *diff_area =
+ container_of(kref, struct diff_area, kref);
+
+ might_sleep();
+
+ flush_work(&diff_area->store_queue_work);
+ xa_for_each(&diff_area->chunk_map, inx, chunk)
+ if (chunk)
+ chunk_free(diff_area, chunk);
+ xa_destroy(&diff_area->chunk_map);
+
+ if (diff_area->orig_bdev) {
+ blkdev_put(diff_area->orig_bdev, FMODE_READ | FMODE_WRITE);
+ diff_area->orig_bdev = NULL;
+ }
+
+ /* Clean up free_diff_buffers */
+ diff_buffer_cleanup(diff_area);
+
+ kfree(diff_area);
+}
+
+static inline bool diff_area_store_one(struct diff_area *diff_area)
+{
+ struct chunk *iter, *chunk = NULL;
+
+ spin_lock(&diff_area->store_queue_lock);
+ list_for_each_entry(iter, &diff_area->store_queue, link) {
+ if (!down_trylock(&iter->lock)) {
+ chunk = iter;
+ atomic_dec(&diff_area->store_queue_count);
+ list_del_init(&chunk->link);
+ chunk->diff_area = diff_area_get(diff_area);
+ break;
+ }
+ /*
+ * If it is not possible to lock a chunk for writing,
+ * then it is currently in use, and we try to clean up the
+ * next chunk.
+ */
+ }
+ spin_unlock(&diff_area->store_queue_lock);
+ if (!chunk)
+ return false;
+
+ if (chunk->state != CHUNK_ST_IN_MEMORY) {
+ /*
+ * There cannot be a chunk in the store queue whose buffer has
+ * not been read into memory.
+ */
+ chunk_up(chunk);
+		pr_warn("Cannot release empty buffer for chunk #%ld\n",
+			chunk->number);
+ return true;
+ }
+
+ if (diff_area_is_corrupted(diff_area)) {
+ chunk_store_failed(chunk, 0);
+ return true;
+ }
+
+ if (!chunk->diff_region) {
+ struct diff_region *diff_region;
+
+ diff_region = diff_storage_new_region(
+ diff_area->diff_storage,
+ diff_area_chunk_sectors(diff_area),
+ diff_area->logical_blksz);
+
+ if (IS_ERR(diff_region)) {
+ pr_debug("Cannot get store for chunk #%ld\n",
+ chunk->number);
+ chunk_store_failed(chunk, PTR_ERR(diff_region));
+ return true;
+ }
+ chunk->diff_region = diff_region;
+ }
+ chunk_store(chunk);
+ return true;
+}
+
+static void diff_area_store_queue_work(struct work_struct *work)
+{
+ struct diff_area *diff_area = container_of(
+ work, struct diff_area, store_queue_work);
+
+ while (diff_area_store_one(diff_area))
+ ;
+}
+
+struct diff_area *diff_area_new(dev_t dev_id, struct diff_storage *diff_storage)
+{
+ int ret = 0;
+ struct diff_area *diff_area = NULL;
+ struct block_device *bdev;
+
+ pr_debug("Open device [%u:%u]\n", MAJOR(dev_id), MINOR(dev_id));
+
+ bdev = blkdev_get_by_dev(dev_id, FMODE_READ | FMODE_WRITE, NULL, NULL);
+ if (IS_ERR(bdev)) {
+ int err = PTR_ERR(bdev);
+
+ pr_err("Failed to open device. errno=%d\n", abs(err));
+ return ERR_PTR(err);
+ }
+
+ diff_area = kzalloc(sizeof(struct diff_area), GFP_KERNEL);
+ if (!diff_area) {
+ blkdev_put(bdev, FMODE_READ | FMODE_WRITE);
+ return ERR_PTR(-ENOMEM);
+ }
+
+ kref_init(&diff_area->kref);
+ diff_area->orig_bdev = bdev;
+ diff_area->diff_storage = diff_storage;
+
+ diff_area_calculate_chunk_size(diff_area);
+ if (diff_area->chunk_shift > get_chunk_maximum_shift()) {
+ pr_info("The maximum allowable chunk size has been reached.\n");
+ return ERR_PTR(-EFAULT);
+ }
+ pr_debug("The optimal chunk size was calculated as %llu bytes for device [%d:%d]\n",
+ (1ull << diff_area->chunk_shift),
+ MAJOR(diff_area->orig_bdev->bd_dev),
+ MINOR(diff_area->orig_bdev->bd_dev));
+
+ xa_init(&diff_area->chunk_map);
+
+ spin_lock_init(&diff_area->store_queue_lock);
+ INIT_LIST_HEAD(&diff_area->store_queue);
+ atomic_set(&diff_area->store_queue_count, 0);
+ INIT_WORK(&diff_area->store_queue_work, diff_area_store_queue_work);
+
+ spin_lock_init(&diff_area->free_diff_buffers_lock);
+ INIT_LIST_HEAD(&diff_area->free_diff_buffers);
+ atomic_set(&diff_area->free_diff_buffers_count, 0);
+
+ diff_area->physical_blksz = bdev->bd_queue->limits.physical_block_size;
+ diff_area->logical_blksz = bdev->bd_queue->limits.logical_block_size;
+ diff_area->corrupt_flag = 0;
+
+ if (!diff_storage->capacity) {
+ pr_err("Difference storage is empty\n");
+ pr_err("In-memory difference storage is not supported\n");
+ ret = -EFAULT;
+ }
+
+ if (ret) {
+ diff_area_put(diff_area);
+ return ERR_PTR(ret);
+ }
+
+ return diff_area;
+}
+
+static inline unsigned int chunk_limit(struct chunk *chunk,
+ struct bvec_iter *iter)
+{
+ sector_t chunk_ofs = iter->bi_sector - chunk_sector(chunk);
+ sector_t chunk_left = chunk->sector_count - chunk_ofs;
+
+ return min(iter->bi_size, (unsigned int)(chunk_left << SECTOR_SHIFT));
+}
+
+/*
+ * Implements the copy-on-write mechanism.
+ */
+bool diff_area_cow(struct bio *bio, struct diff_area *diff_area,
+ struct bvec_iter *iter)
+{
+ bool nowait = bio->bi_opf & REQ_NOWAIT;
+ struct bio *chunk_bio = NULL;
+ LIST_HEAD(chunks);
+ int ret = 0;
+
+ while (iter->bi_size) {
+ unsigned long nr = diff_area_chunk_number(diff_area,
+ iter->bi_sector);
+ struct chunk *chunk = xa_load(&diff_area->chunk_map, nr);
+ unsigned int len;
+
+ if (!chunk) {
+ chunk = chunk_alloc(diff_area, nr);
+ if (!chunk) {
+ diff_area_set_corrupted(diff_area, -EINVAL);
+ ret = -ENOMEM;
+ goto fail;
+ }
+
+ ret = xa_insert(&diff_area->chunk_map, nr, chunk,
+ GFP_NOIO);
+ if (likely(!ret)) {
+ /* new chunk has been added */
+ } else if (ret == -EBUSY) {
+ /* another chunk has just been created */
+ chunk_free(diff_area, chunk);
+ chunk = xa_load(&diff_area->chunk_map, nr);
+ WARN_ON_ONCE(!chunk);
+ if (unlikely(!chunk)) {
+ ret = -EINVAL;
+ diff_area_set_corrupted(diff_area, ret);
+ goto fail;
+ }
+ } else if (ret) {
+				pr_err("Failed to insert chunk into chunk map\n");
+ chunk_free(diff_area, chunk);
+ diff_area_set_corrupted(diff_area, ret);
+ goto fail;
+ }
+ }
+
+ if (nowait) {
+ if (down_trylock(&chunk->lock)) {
+ ret = -EAGAIN;
+ goto fail;
+ }
+ } else {
+ ret = down_killable(&chunk->lock);
+ if (unlikely(ret))
+ goto fail;
+ }
+ chunk->diff_area = diff_area_get(diff_area);
+
+ len = chunk_limit(chunk, iter);
+ if (chunk->state == CHUNK_ST_NEW) {
+ if (nowait) {
+ /*
+ * If the data of this chunk has not yet been
+ * copied to the difference storage, then it is
+ * impossible to process the I/O write unit with
+ * the NOWAIT flag.
+ */
+ chunk_up(chunk);
+ ret = -EAGAIN;
+ goto fail;
+ }
+
+ /*
+ * Load the chunk asynchronously.
+ */
+ ret = chunk_load_and_postpone_io(chunk, &chunk_bio);
+ if (ret) {
+ chunk_up(chunk);
+ goto fail;
+ }
+ list_add_tail(&chunk->link, &chunks);
+ } else {
+ /*
+ * The chunk has already been:
+ * - failed, when the snapshot is corrupted
+ * - read into the buffer
+ * - stored into the diff storage
+ * In this case, we do not change the chunk.
+ */
+ chunk_up(chunk);
+ }
+ bio_advance_iter_single(bio, iter, len);
+ }
+
+ if (chunk_bio) {
+ /* Postpone bio processing in a callback. */
+ chunk_load_and_postpone_io_finish(&chunks, chunk_bio, bio);
+ return true;
+ }
+ /* Pass bio to the low level */
+ return false;
+
+fail:
+ if (chunk_bio) {
+ chunk_bio->bi_status = errno_to_blk_status(ret);
+ bio_endio(chunk_bio);
+ }
+
+ if (ret == -EAGAIN) {
+ /*
+		 * The -EAGAIN error code means that it is not possible to
+		 * process an I/O unit with the REQ_NOWAIT flag.
+		 * Processing of the I/O unit is completed with this error.
+ */
+ bio->bi_status = BLK_STS_AGAIN;
+ bio_endio(bio);
+ return true;
+ }
+ /* In any other case, the processing of the I/O unit continues. */
+ return false;
+}
+
+static void orig_clone_endio(struct bio *bio)
+{
+ struct bio *orig_bio = bio->bi_private;
+
+ if (unlikely(bio->bi_status != BLK_STS_OK))
+ bio_io_error(orig_bio);
+ else
+ bio_endio(orig_bio);
+}
+
+static void orig_clone_bio(struct diff_area *diff_area, struct bio *bio)
+{
+ struct bio *new_bio;
+ struct block_device *bdev = diff_area->orig_bdev;
+ sector_t chunk_limit;
+
+ new_bio = chunk_alloc_clone(bdev, bio);
+ WARN_ON(!new_bio);
+
+ chunk_limit = diff_area_chunk_sectors(diff_area) -
+ diff_area_chunk_offset(diff_area, bio->bi_iter.bi_sector);
+
+ new_bio->bi_iter.bi_sector = bio->bi_iter.bi_sector;
+ new_bio->bi_iter.bi_size = min_t(unsigned int,
+ bio->bi_iter.bi_size, chunk_limit << SECTOR_SHIFT);
+
+ bio_set_flag(new_bio, BIO_FILTERED);
+ new_bio->bi_end_io = orig_clone_endio;
+ new_bio->bi_private = bio;
+
+ bio_advance(bio, new_bio->bi_iter.bi_size);
+ bio_inc_remaining(bio);
+
+ submit_bio_noacct(new_bio);
+}
+
+bool diff_area_submit_chunk(struct diff_area *diff_area, struct bio *bio)
+{
+ int ret;
+ struct chunk *chunk;
+ unsigned long nr = diff_area_chunk_number(diff_area,
+ bio->bi_iter.bi_sector);
+
+ chunk = xa_load(&diff_area->chunk_map, nr);
+ /*
+ * If this chunk is not in the chunk map, then the COW algorithm did
+ * not access this part of the disk space, and writing to the snapshot
+ * in this part was also not performed.
+ */
+ if (!chunk) {
+ if (op_is_write(bio_op(bio))) {
+ /*
+ * To process a write bio, we need to allocate a new
+ * chunk.
+ */
+ chunk = chunk_alloc(diff_area, nr);
+ WARN_ON_ONCE(!chunk);
+ if (unlikely(!chunk))
+ return false;
+
+ ret = xa_insert(&diff_area->chunk_map, nr, chunk,
+ GFP_NOIO);
+ if (likely(!ret)) {
+ /* new chunk has been added */
+ } else if (ret == -EBUSY) {
+ /* another chunk has just been created */
+ chunk_free(diff_area, chunk);
+ chunk = xa_load(&diff_area->chunk_map, nr);
+ WARN_ON_ONCE(!chunk);
+ if (unlikely(!chunk))
+ return false;
+ } else if (ret) {
+				pr_err("Failed to insert chunk into chunk map\n");
+ chunk_free(diff_area, chunk);
+ return false;
+ }
+ } else {
+ /*
+ * To read, we simply redirect the bio to the original
+ * block device.
+ */
+ orig_clone_bio(diff_area, bio);
+ return true;
+ }
+ }
+
+ if (down_killable(&chunk->lock))
+ return false;
+ chunk->diff_area = diff_area_get(diff_area);
+
+ if (unlikely(chunk->state == CHUNK_ST_FAILED)) {
+ pr_err("Chunk #%ld corrupted\n", chunk->number);
+ chunk_up(chunk);
+ return false;
+ }
+ if (chunk->state == CHUNK_ST_IN_MEMORY) {
+ /*
+ * Directly copy data from the in-memory chunk or
+ * copy to the in-memory chunk for write operation.
+ */
+ chunk_copy_bio(chunk, bio, &bio->bi_iter);
+ chunk_up(chunk);
+ return true;
+ }
+ if ((chunk->state == CHUNK_ST_STORED) || !op_is_write(bio_op(bio))) {
+ /*
+ * Read data from the chunk on difference storage.
+ */
+ chunk_clone_bio(chunk, bio);
+ chunk_up(chunk);
+ return true;
+ }
+ /*
+ * Starts asynchronous loading of a chunk from the original block device
+ * or difference storage and schedule copying data to (or from) the
+ * in-memory chunk.
+ */
+ if (chunk_load_and_schedule_io(chunk, bio)) {
+ chunk_up(chunk);
+ return false;
+ }
+ return true;
+}
+
+static inline void diff_area_event_corrupted(struct diff_area *diff_area)
+{
+ struct blksnap_event_corrupted data = {
+ .dev_id_mj = MAJOR(diff_area->orig_bdev->bd_dev),
+ .dev_id_mn = MINOR(diff_area->orig_bdev->bd_dev),
+ .err_code = abs(diff_area->error_code),
+ };
+
+ event_gen(&diff_area->diff_storage->event_queue, GFP_NOIO,
+ blksnap_event_code_corrupted, &data,
+ sizeof(struct blksnap_event_corrupted));
+}
+
+void diff_area_set_corrupted(struct diff_area *diff_area, int err_code)
+{
+ if (test_and_set_bit(0, &diff_area->corrupt_flag))
+ return;
+
+ diff_area->error_code = err_code;
+ diff_area_event_corrupted(diff_area);
+
+ pr_err("Set snapshot device is corrupted for [%u:%u] with error code %d\n",
+ MAJOR(diff_area->orig_bdev->bd_dev),
+ MINOR(diff_area->orig_bdev->bd_dev), abs(err_code));
+}
diff --git a/drivers/block/blksnap/diff_area.h b/drivers/block/blksnap/diff_area.h
new file mode 100644
index 000000000000..6ecec9390282
--- /dev/null
+++ b/drivers/block/blksnap/diff_area.h
@@ -0,0 +1,144 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright (C) 2023 Veeam Software Group GmbH */
+#ifndef __BLKSNAP_DIFF_AREA_H
+#define __BLKSNAP_DIFF_AREA_H
+
+#include <linux/slab.h>
+#include <linux/uio.h>
+#include <linux/kref.h>
+#include <linux/list.h>
+#include <linux/spinlock.h>
+#include <linux/blkdev.h>
+#include <linux/xarray.h>
+#include "event_queue.h"
+
+struct diff_storage;
+struct chunk;
+
+/**
+ * struct diff_area - Describes the difference area for one original device.
+ *
+ *	The reference counter allows managing the lifetime of the object.
+ * The reference counter allows to manage the lifetime of an object.
+ * @orig_bdev:
+ * A pointer to the structure of an opened block device.
+ * @diff_storage:
+ * Pointer to difference storage for storing difference data.
+ * @chunk_shift:
+ *	Power of 2 used to specify the chunk size. This allows setting
+ *	different chunk sizes for huge and small block devices.
+ * @chunk_count:
+ * Count of chunks. The number of chunks into which the block device
+ * is divided.
+ * @chunk_map:
+ * A map of chunks.
+ * @store_queue_lock:
+ * This spinlock guarantees consistency of the linked lists of chunks
+ * queue.
+ * @store_queue:
+ * The queue of chunks waiting to be stored to the difference storage.
+ * @store_queue_count:
+ * The number of chunks in the store queue.
+ * @store_queue_work:
+ * The workqueue work item. This worker limits the number of chunks
+ * that store their data in RAM.
+ * @free_diff_buffers_lock:
+ * This spinlock guarantees consistency of the linked lists of
+ * free difference buffers.
+ * @free_diff_buffers:
+ *	A linked list of free difference buffers allows reducing the number
+ *	of buffer allocation and release operations.
+ * @physical_blksz:
+ * The physical block size for the snapshot image is equal to the
+ * physical block size of the original device.
+ * @logical_blksz:
+ * The logical block size for the snapshot image is equal to the
+ * logical block size of the original device.
+ * @free_diff_buffers_count:
+ * The number of free difference buffers in the linked list.
+ * @corrupt_flag:
+ * The flag is set if an error occurred in the operation of the data
+ * saving mechanism in the diff area. In this case, an error will be
+ * generated when reading from the snapshot image.
+ * @error_code:
+ * The error code that caused the snapshot to be corrupted.
+ *
+ * The &struct diff_area is created for each block device in the snapshot.
+ * It is used to save the differences between the original block device and
+ * the snapshot image. That is, when writing data to the original device,
+ * the differences are copied as chunks to the difference storage.
+ * Reading and writing from the snapshot image is also performed using
+ * &struct diff_area.
+ *
+ * The xarray has a limit on the maximum size. This can be especially
+ * noticeable on 32-bit systems and creates a limit on the size of
+ * supported disks.
+ *
+ * For example, for a 256 TiB disk with a block size of 65536 bytes, the
+ * number of elements in the chunk map will be equal to 2 to the power of
+ * 32. Therefore, the number of chunks into which the block device is
+ * divided is limited.
+ *
+ * The store queue allows postponing the operation of storing a chunk's
+ * data to the difference storage and performing it later in the worker
+ * thread.
+ *
+ * The linked list of difference buffers keeps a certain number of "hot"
+ * buffers, which reduces the number of memory allocations and releases.
+ *
+ *
+ */
+struct diff_area {
+ struct kref kref;
+ struct block_device *orig_bdev;
+ struct diff_storage *diff_storage;
+
+ unsigned long chunk_shift;
+ unsigned long chunk_count;
+ struct xarray chunk_map;
+
+ spinlock_t store_queue_lock;
+ struct list_head store_queue;
+ atomic_t store_queue_count;
+ struct work_struct store_queue_work;
+
+ spinlock_t free_diff_buffers_lock;
+ struct list_head free_diff_buffers;
+ atomic_t free_diff_buffers_count;
+
+ unsigned int physical_blksz;
+ unsigned int logical_blksz;
+
+ unsigned long corrupt_flag;
+ int error_code;
+};
+
+struct diff_area *diff_area_new(dev_t dev_id,
+ struct diff_storage *diff_storage);
+void diff_area_free(struct kref *kref);
+static inline struct diff_area *diff_area_get(struct diff_area *diff_area)
+{
+ kref_get(&diff_area->kref);
+ return diff_area;
+};
+static inline void diff_area_put(struct diff_area *diff_area)
+{
+ kref_put(&diff_area->kref, diff_area_free);
+};
+
+void diff_area_set_corrupted(struct diff_area *diff_area, int err_code);
+static inline bool diff_area_is_corrupted(struct diff_area *diff_area)
+{
+ return !!diff_area->corrupt_flag;
+};
+static inline sector_t diff_area_chunk_sectors(struct diff_area *diff_area)
+{
+ return (sector_t)(1ull << (diff_area->chunk_shift - SECTOR_SHIFT));
+};
+bool diff_area_cow(struct bio *bio, struct diff_area *diff_area,
+ struct bvec_iter *iter);
+
+bool diff_area_submit_chunk(struct diff_area *diff_area, struct bio *bio);
+void diff_area_rw_chunk(struct kref *kref);
+
+#endif /* __BLKSNAP_DIFF_AREA_H */
diff --git a/drivers/block/blksnap/diff_buffer.c b/drivers/block/blksnap/diff_buffer.c
new file mode 100644
index 000000000000..77ad59cc46b3
--- /dev/null
+++ b/drivers/block/blksnap/diff_buffer.c
@@ -0,0 +1,127 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (C) 2023 Veeam Software Group GmbH */
+#define pr_fmt(fmt) KBUILD_MODNAME "-diff-buffer: " fmt
+
+#include "diff_buffer.h"
+#include "diff_area.h"
+#include "params.h"
+
+static void diff_buffer_free(struct diff_buffer *diff_buffer)
+{
+ size_t inx = 0;
+
+ if (unlikely(!diff_buffer))
+ return;
+
+ for (inx = 0; inx < diff_buffer->page_count; inx++) {
+ struct page *page = diff_buffer->pages[inx];
+
+ if (page)
+ __free_page(page);
+ }
+
+ kfree(diff_buffer);
+}
+
+static struct diff_buffer *
+diff_buffer_new(size_t page_count, size_t buffer_size, gfp_t gfp_mask)
+{
+ struct diff_buffer *diff_buffer;
+ size_t inx = 0;
+ struct page *page;
+
+	if (unlikely(page_count == 0))
+		return NULL;
+
+	/*
+	 * struct_size() checks the multiplication for overflow, so a huge
+	 * page_count results in a failed allocation instead of an
+	 * undersized buffer.
+	 */
+	diff_buffer = kzalloc(struct_size(diff_buffer, pages, page_count),
+			      gfp_mask);
+ if (!diff_buffer)
+ return NULL;
+
+ INIT_LIST_HEAD(&diff_buffer->link);
+ diff_buffer->size = buffer_size;
+ diff_buffer->page_count = page_count;
+
+ for (inx = 0; inx < page_count; inx++) {
+ page = alloc_page(gfp_mask);
+ if (!page)
+ goto fail;
+
+ diff_buffer->pages[inx] = page;
+ }
+ return diff_buffer;
+fail:
+ diff_buffer_free(diff_buffer);
+ return NULL;
+}
+
+struct diff_buffer *diff_buffer_take(struct diff_area *diff_area)
+{
+ struct diff_buffer *diff_buffer = NULL;
+ sector_t chunk_sectors;
+ size_t page_count;
+ size_t buffer_size;
+
+ spin_lock(&diff_area->free_diff_buffers_lock);
+ diff_buffer = list_first_entry_or_null(&diff_area->free_diff_buffers,
+ struct diff_buffer, link);
+ if (diff_buffer) {
+ list_del(&diff_buffer->link);
+ atomic_dec(&diff_area->free_diff_buffers_count);
+ }
+ spin_unlock(&diff_area->free_diff_buffers_lock);
+
+	/* Return the free buffer if one was found in the pool */
+ if (diff_buffer)
+ return diff_buffer;
+
+ /* Allocate new buffer */
+ chunk_sectors = diff_area_chunk_sectors(diff_area);
+ page_count = round_up(chunk_sectors, PAGE_SECTORS) / PAGE_SECTORS;
+ buffer_size = chunk_sectors << SECTOR_SHIFT;
+
+	diff_buffer = diff_buffer_new(page_count, buffer_size, GFP_NOIO);
+ if (unlikely(!diff_buffer))
+ return ERR_PTR(-ENOMEM);
+ return diff_buffer;
+}
+
+void diff_buffer_release(struct diff_area *diff_area,
+ struct diff_buffer *diff_buffer)
+{
+ if (atomic_read(&diff_area->free_diff_buffers_count) >
+ get_free_diff_buffer_pool_size()) {
+ diff_buffer_free(diff_buffer);
+ return;
+ }
+ spin_lock(&diff_area->free_diff_buffers_lock);
+ list_add_tail(&diff_buffer->link, &diff_area->free_diff_buffers);
+ atomic_inc(&diff_area->free_diff_buffers_count);
+ spin_unlock(&diff_area->free_diff_buffers_lock);
+}
+
+void diff_buffer_cleanup(struct diff_area *diff_area)
+{
+ struct diff_buffer *diff_buffer = NULL;
+
+ do {
+ spin_lock(&diff_area->free_diff_buffers_lock);
+ diff_buffer =
+ list_first_entry_or_null(&diff_area->free_diff_buffers,
+ struct diff_buffer, link);
+ if (diff_buffer) {
+ list_del(&diff_buffer->link);
+ atomic_dec(&diff_area->free_diff_buffers_count);
+ }
+ spin_unlock(&diff_area->free_diff_buffers_lock);
+
+ if (diff_buffer)
+ diff_buffer_free(diff_buffer);
+ } while (diff_buffer);
+}
diff --git a/drivers/block/blksnap/diff_buffer.h b/drivers/block/blksnap/diff_buffer.h
new file mode 100644
index 000000000000..f81e56cf4b9a
--- /dev/null
+++ b/drivers/block/blksnap/diff_buffer.h
@@ -0,0 +1,37 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright (C) 2023 Veeam Software Group GmbH */
+#ifndef __BLKSNAP_DIFF_BUFFER_H
+#define __BLKSNAP_DIFF_BUFFER_H
+
+#include <linux/types.h>
+#include <linux/slab.h>
+#include <linux/list.h>
+#include <linux/blkdev.h>
+
+struct diff_area;
+
+/**
+ * struct diff_buffer - Difference buffer.
+ * @link:
+ *	The list header allows building a pool of diff_buffer structures.
+ * @size:
+ *	Count of bytes in the buffer.
+ * @page_count:
+ *	The number of pages reserved for the buffer.
+ * @pages:
+ *	An array of pointers to pages.
+ *
+ * Describes the in-memory buffer for a chunk.
+ */
+struct diff_buffer {
+ struct list_head link;
+ size_t size;
+ size_t page_count;
+	struct page *pages[];
+};
+
+struct diff_buffer *diff_buffer_take(struct diff_area *diff_area);
+void diff_buffer_release(struct diff_area *diff_area,
+ struct diff_buffer *diff_buffer);
+void diff_buffer_cleanup(struct diff_area *diff_area);
+#endif /* __BLKSNAP_DIFF_BUFFER_H */
diff --git a/drivers/block/blksnap/diff_storage.c b/drivers/block/blksnap/diff_storage.c
new file mode 100644
index 000000000000..1787fa6931a8
--- /dev/null
+++ b/drivers/block/blksnap/diff_storage.c
@@ -0,0 +1,316 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (C) 2023 Veeam Software Group GmbH */
+#define pr_fmt(fmt) KBUILD_MODNAME "-diff-storage: " fmt
+
+#include <linux/slab.h>
+#include <linux/sched/mm.h>
+#include <linux/list.h>
+#include <linux/spinlock.h>
+#include <linux/build_bug.h>
+#include <uapi/linux/blksnap.h>
+#include "chunk.h"
+#include "diff_buffer.h"
+#include "diff_storage.h"
+#include "params.h"
+
+/**
+ * struct storage_bdev - Information about an opened block device.
+ *
+ * @link:
+ *	Allows combining the structures into a linked list.
+ * @bdev:
+ *	A pointer to the open block device.
+ */
+struct storage_bdev {
+ struct list_head link;
+ struct block_device *bdev;
+};
+
+/**
+ * struct storage_block - A storage unit reserved for storing differences.
+ *
+ * @link:
+ *	Allows combining the structures into a linked list.
+ * @bdev:
+ * A pointer to a block device.
+ * @sector:
+ * The number of the first sector of the range of allocated space for
+ * storing the difference.
+ * @count:
+ * The count of sectors in the range of allocated space for storing the
+ * difference.
+ * @used:
+ * The count of used sectors in the range of allocated space for storing
+ * the difference.
+ */
+struct storage_block {
+ struct list_head link;
+ struct block_device *bdev;
+ sector_t sector;
+ sector_t count;
+ sector_t used;
+};
+
+static inline void diff_storage_event_low(struct diff_storage *diff_storage)
+{
+ struct blksnap_event_low_free_space data = {
+ .requested_nr_sect = get_diff_storage_minimum(),
+ };
+
+ diff_storage->requested += data.requested_nr_sect;
+ pr_debug("Diff storage low free space. Portion: %llu sectors, requested: %llu\n",
+ data.requested_nr_sect, diff_storage->requested);
+ event_gen(&diff_storage->event_queue, GFP_NOIO,
+ blksnap_event_code_low_free_space, &data, sizeof(data));
+}
+
+struct diff_storage *diff_storage_new(void)
+{
+ struct diff_storage *diff_storage;
+
+ diff_storage = kzalloc(sizeof(struct diff_storage), GFP_KERNEL);
+ if (!diff_storage)
+ return NULL;
+
+ kref_init(&diff_storage->kref);
+ spin_lock_init(&diff_storage->lock);
+ INIT_LIST_HEAD(&diff_storage->storage_bdevs);
+ INIT_LIST_HEAD(&diff_storage->empty_blocks);
+ INIT_LIST_HEAD(&diff_storage->filled_blocks);
+
+ event_queue_init(&diff_storage->event_queue);
+ diff_storage_event_low(diff_storage);
+
+ return diff_storage;
+}
+
+static inline struct storage_block *
+first_empty_storage_block(struct diff_storage *diff_storage)
+{
+ return list_first_entry_or_null(&diff_storage->empty_blocks,
+ struct storage_block, link);
+}
+
+static inline struct storage_block *
+first_filled_storage_block(struct diff_storage *diff_storage)
+{
+ return list_first_entry_or_null(&diff_storage->filled_blocks,
+ struct storage_block, link);
+}
+
+static inline struct storage_bdev *
+first_storage_bdev(struct diff_storage *diff_storage)
+{
+ return list_first_entry_or_null(&diff_storage->storage_bdevs,
+ struct storage_bdev, link);
+}
+
+void diff_storage_free(struct kref *kref)
+{
+ struct diff_storage *diff_storage =
+ container_of(kref, struct diff_storage, kref);
+ struct storage_block *blk;
+ struct storage_bdev *storage_bdev;
+
+ while ((blk = first_empty_storage_block(diff_storage))) {
+ list_del(&blk->link);
+ kfree(blk);
+ }
+
+ while ((blk = first_filled_storage_block(diff_storage))) {
+ list_del(&blk->link);
+ kfree(blk);
+ }
+
+ while ((storage_bdev = first_storage_bdev(diff_storage))) {
+ blkdev_put(storage_bdev->bdev, FMODE_READ | FMODE_WRITE);
+ list_del(&storage_bdev->link);
+ kfree(storage_bdev);
+ }
+ event_queue_done(&diff_storage->event_queue);
+
+ kfree(diff_storage);
+}
+
+static struct block_device *diff_storage_add_storage_bdev(
+ struct diff_storage *diff_storage, const char *bdev_path)
+{
+	struct storage_bdev *storage_bdev, *existing_bdev;
+	struct block_device *bdev;
+	bool found = false;
+
+	bdev = blkdev_get_by_path(bdev_path, FMODE_READ | FMODE_WRITE,
+				  NULL, NULL);
+	if (IS_ERR(bdev)) {
+		pr_err("Failed to open device. errno=%ld\n", PTR_ERR(bdev));
+		return bdev;
+	}
+
+	spin_lock(&diff_storage->lock);
+	list_for_each_entry(existing_bdev, &diff_storage->storage_bdevs, link) {
+		if (existing_bdev->bdev == bdev) {
+			found = true;
+			break;
+		}
+	}
+	spin_unlock(&diff_storage->lock);
+
+	/* The device was already opened earlier: drop the extra reference */
+	if (found) {
+		blkdev_put(bdev, FMODE_READ | FMODE_WRITE);
+		return bdev;
+	}
+
+	storage_bdev = kzalloc(sizeof(struct storage_bdev), GFP_KERNEL);
+ if (!storage_bdev) {
+ blkdev_put(bdev, FMODE_READ | FMODE_WRITE);
+ return ERR_PTR(-ENOMEM);
+ }
+
+ INIT_LIST_HEAD(&storage_bdev->link);
+ storage_bdev->bdev = bdev;
+
+ spin_lock(&diff_storage->lock);
+ list_add_tail(&storage_bdev->link, &diff_storage->storage_bdevs);
+ spin_unlock(&diff_storage->lock);
+
+ return bdev;
+}
+
+static inline int diff_storage_add_range(struct diff_storage *diff_storage,
+ struct block_device *bdev,
+ sector_t sector, sector_t count)
+{
+ struct storage_block *storage_block;
+
+ pr_debug("Add range to diff storage: [%u:%u] %llu:%llu\n",
+ MAJOR(bdev->bd_dev), MINOR(bdev->bd_dev), sector, count);
+
+ storage_block = kzalloc(sizeof(struct storage_block), GFP_KERNEL);
+ if (!storage_block)
+ return -ENOMEM;
+
+ INIT_LIST_HEAD(&storage_block->link);
+ storage_block->bdev = bdev;
+ storage_block->sector = sector;
+ storage_block->count = count;
+
+ spin_lock(&diff_storage->lock);
+ list_add_tail(&storage_block->link, &diff_storage->empty_blocks);
+ diff_storage->capacity += count;
+ spin_unlock(&diff_storage->lock);
+
+ return 0;
+}
+
+int diff_storage_append_block(struct diff_storage *diff_storage,
+ const char *bdev_path,
+ struct blksnap_sectors __user *ranges,
+ unsigned int range_count)
+{
+ int ret;
+ int inx;
+ struct block_device *bdev;
+ struct blksnap_sectors range;
+
+ pr_debug("Append %u blocks\n", range_count);
+
+ bdev = diff_storage_add_storage_bdev(diff_storage, bdev_path);
+ if (IS_ERR(bdev))
+ return PTR_ERR(bdev);
+
+ for (inx = 0; inx < range_count; inx++) {
+		if (unlikely(copy_from_user(&range, ranges + inx,
+					    sizeof(range))))
+			return -EFAULT;
+
+ ret = diff_storage_add_range(diff_storage, bdev,
+ range.offset,
+ range.count);
+ if (unlikely(ret))
+ return ret;
+ }
+
+ if (atomic_read(&diff_storage->low_space_flag) &&
+ (diff_storage->capacity >= diff_storage->requested))
+ atomic_set(&diff_storage->low_space_flag, 0);
+
+ return 0;
+}
+
+static inline bool is_halffull(const sector_t sectors_left)
+{
+ return sectors_left <=
+ ((get_diff_storage_minimum() >> 1) & ~(PAGE_SECTORS - 1));
+}
+
+struct diff_region *diff_storage_new_region(struct diff_storage *diff_storage,
+ sector_t count,
+ unsigned int logical_blksz)
+{
+ int ret = 0;
+ struct diff_region *diff_region;
+ sector_t sectors_left;
+
+ if (atomic_read(&diff_storage->overflow_flag))
+ return ERR_PTR(-ENOSPC);
+
+ diff_region = kzalloc(sizeof(struct diff_region), GFP_NOIO);
+ if (!diff_region)
+ return ERR_PTR(-ENOMEM);
+
+ spin_lock(&diff_storage->lock);
+ do {
+ struct storage_block *storage_block;
+ sector_t available;
+ struct request_queue *q;
+
+ storage_block = first_empty_storage_block(diff_storage);
+ if (unlikely(!storage_block)) {
+ atomic_inc(&diff_storage->overflow_flag);
+ ret = -ENOSPC;
+ break;
+ }
+
+ q = storage_block->bdev->bd_queue;
+ if (logical_blksz < q->limits.logical_block_size) {
+			pr_err("Incompatibility of block sizes was detected\n");
+ ret = -ENOTBLK;
+ break;
+ }
+
+ available = storage_block->count - storage_block->used;
+ if (likely(available >= count)) {
+ diff_region->bdev = storage_block->bdev;
+ diff_region->sector =
+ storage_block->sector + storage_block->used;
+ diff_region->count = count;
+
+ storage_block->used += count;
+ diff_storage->filled += count;
+ break;
+ }
+
+ list_del(&storage_block->link);
+ list_add_tail(&storage_block->link,
+ &diff_storage->filled_blocks);
+		/*
+		 * If the storage block still has some free space, but not
+		 * enough to store a whole region, the block is considered
+		 * used up. The storage blocks are assumed to be large
+		 * enough to hold several regions entirely.
+		 */
+ diff_storage->filled += available;
+ } while (1);
+ sectors_left = diff_storage->requested - diff_storage->filled;
+ spin_unlock(&diff_storage->lock);
+
+ if (ret) {
+ pr_err("Cannot get empty storage block\n");
+ diff_storage_free_region(diff_region);
+ return ERR_PTR(ret);
+ }
+
+ if (is_halffull(sectors_left) &&
+ (atomic_inc_return(&diff_storage->low_space_flag) == 1))
+ diff_storage_event_low(diff_storage);
+
+ return diff_region;
+}
diff --git a/drivers/block/blksnap/diff_storage.h b/drivers/block/blksnap/diff_storage.h
new file mode 100644
index 000000000000..0913a0114ac0
--- /dev/null
+++ b/drivers/block/blksnap/diff_storage.h
@@ -0,0 +1,111 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright (C) 2023 Veeam Software Group GmbH */
+#ifndef __BLKSNAP_DIFF_STORAGE_H
+#define __BLKSNAP_DIFF_STORAGE_H
+
+#include "event_queue.h"
+
+struct blksnap_sectors;
+
+/**
+ * struct diff_region - Describes the location of a chunk's data in the
+ * difference storage.
+ * @bdev:
+ * The target block device.
+ * @sector:
+ * The sector offset of the region's first sector.
+ * @count:
+ * The count of sectors in the region.
+ */
+struct diff_region {
+ struct block_device *bdev;
+ sector_t sector;
+ sector_t count;
+};
+
+/**
+ * struct diff_storage - Difference storage.
+ *
+ * @kref:
+ * The reference counter.
+ * @lock:
+ *	The spinlock guarantees consistency of the linked lists.
+ * @storage_bdevs:
+ *	List of opened block devices. Blocks for storing snapshot data can be
+ *	located on different block devices. So, all opened block devices are
+ *	located in this list. Blocks on opened block devices are allocated for
+ *	storing the chunks data.
+ * @empty_blocks:
+ *	List of empty blocks on storage. This list can be updated while a
+ *	snapshot is held, which allows increasing the storage size for that
+ *	snapshot dynamically.
+ * @filled_blocks:
+ *	List of filled blocks. When a block from the list of empty blocks is
+ *	filled, it is moved to the list of filled blocks.
+ * @capacity:
+ * Total amount of available storage space.
+ * @filled:
+ * The number of sectors already filled in.
+ * @requested:
+ * The number of sectors already requested from user space.
+ * @low_space_flag:
+ * The flag is set if the number of free regions available in the
+ * difference storage is less than the allowed minimum.
+ * @overflow_flag:
+ * The request for a free region failed due to the absence of free
+ * regions in the difference storage.
+ * @event_queue:
+ *	A queue for passing events to user space. The diff storage and its
+ *	owner can notify the snapshot about events such as snapshot overflow,
+ *	low free space, and snapshot termination.
+ *
+ * The difference storage manages the regions of block devices that are used
+ * to store the data of the original block devices in the snapshot.
+ * The difference storage is created one per snapshot and is used to store
+ * data from all the original snapshot block devices. At the same time, the
+ * difference storage itself can contain regions on various block devices.
+ */
+struct diff_storage {
+ struct kref kref;
+ spinlock_t lock;
+
+ struct list_head storage_bdevs;
+ struct list_head empty_blocks;
+ struct list_head filled_blocks;
+
+ sector_t capacity;
+ sector_t filled;
+ sector_t requested;
+
+ atomic_t low_space_flag;
+ atomic_t overflow_flag;
+
+ struct event_queue event_queue;
+};
+
+struct diff_storage *diff_storage_new(void);
+void diff_storage_free(struct kref *kref);
+
+static inline void diff_storage_get(struct diff_storage *diff_storage)
+{
+ kref_get(&diff_storage->kref);
+}
+static inline void diff_storage_put(struct diff_storage *diff_storage)
+{
+ if (likely(diff_storage))
+ kref_put(&diff_storage->kref, diff_storage_free);
+}
+
+int diff_storage_append_block(struct diff_storage *diff_storage,
+ const char *bdev_path,
+ struct blksnap_sectors __user *ranges,
+ unsigned int range_count);
+struct diff_region *diff_storage_new_region(struct diff_storage *diff_storage,
+ sector_t count,
+ unsigned int logical_blksz);
+
+static inline void diff_storage_free_region(struct diff_region *region)
+{
+ kfree(region);
+}
+#endif /* __BLKSNAP_DIFF_STORAGE_H */
--
2.20.1
The block device filtering mechanism is an API that allows attaching
block device filters. Block device filters make it possible to perform
additional processing of I/O units.
The idea of handling I/O units on block devices is not new. Back in the
2.6 kernel, there was an undocumented possibility of handling I/O units
by substituting the make_request_fn() function, which belonged to the
request_queue structure. However, none of the in-tree kernel modules used
this feature, and it was eliminated in the 5.10 kernel.
The block device filtering mechanism restores the ability to handle I/O
units. It makes it possible to safely attach a filter to a block device
"on the fly" without changing the structure of the block device stack.
Co-developed-by: Christoph Hellwig <[email protected]>
Signed-off-by: Christoph Hellwig <[email protected]>
Tested-by: Donald Buczek <[email protected]>
Signed-off-by: Sergei Shtepa <[email protected]>
---
MAINTAINERS | 3 +
block/Makefile | 3 +-
block/bdev.c | 1 +
block/blk-core.c | 27 ++++
block/blk-filter.c | 213 ++++++++++++++++++++++++++++++++
block/blk.h | 11 ++
block/genhd.c | 10 ++
block/ioctl.c | 7 ++
block/partitions/core.c | 10 ++
include/linux/blk-filter.h | 51 ++++++++
include/linux/blk_types.h | 2 +
include/linux/blkdev.h | 1 +
include/uapi/linux/blk-filter.h | 35 ++++++
include/uapi/linux/fs.h | 3 +
14 files changed, 376 insertions(+), 1 deletion(-)
create mode 100644 block/blk-filter.c
create mode 100644 include/linux/blk-filter.h
create mode 100644 include/uapi/linux/blk-filter.h
diff --git a/MAINTAINERS b/MAINTAINERS
index d801b8985b43..8336b6143a71 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -3585,6 +3585,9 @@ M: Sergei Shtepa <[email protected]>
L: [email protected]
S: Supported
F: Documentation/block/blkfilter.rst
+F: block/blk-filter.c
+F: include/linux/blk-filter.h
+F: include/uapi/linux/blk-filter.h
BLOCK LAYER
M: Jens Axboe <[email protected]>
diff --git a/block/Makefile b/block/Makefile
index 46ada9dc8bbf..041c54eb0240 100644
--- a/block/Makefile
+++ b/block/Makefile
@@ -9,7 +9,8 @@ obj-y := bdev.o fops.o bio.o elevator.o blk-core.o blk-sysfs.o \
blk-lib.o blk-mq.o blk-mq-tag.o blk-stat.o \
blk-mq-sysfs.o blk-mq-cpumap.o blk-mq-sched.o ioctl.o \
genhd.o ioprio.o badblocks.o partitions/ blk-rq-qos.o \
- disk-events.o blk-ia-ranges.o early-lookup.o
+ disk-events.o blk-ia-ranges.o early-lookup.o \
+ blk-filter.o
obj-$(CONFIG_BOUNCE) += bounce.o
obj-$(CONFIG_BLK_DEV_BSG_COMMON) += bsg.o
diff --git a/block/bdev.c b/block/bdev.c
index 5c46ff107706..369f73b6097a 100644
--- a/block/bdev.c
+++ b/block/bdev.c
@@ -429,6 +429,7 @@ struct block_device *bdev_alloc(struct gendisk *disk, u8 partno)
return NULL;
}
bdev->bd_disk = disk;
+ bdev->bd_filter = NULL;
return bdev;
}
diff --git a/block/blk-core.c b/block/blk-core.c
index 2ae22bebeb3e..ede04c6ad021 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -18,6 +18,7 @@
#include <linux/blkdev.h>
#include <linux/blk-pm.h>
#include <linux/blk-integrity.h>
+#include <linux/blk-filter.h>
#include <linux/highmem.h>
#include <linux/mm.h>
#include <linux/pagemap.h>
@@ -586,8 +587,24 @@ static inline blk_status_t blk_check_zone_append(struct request_queue *q,
return BLK_STS_OK;
}
+static bool submit_bio_filter(struct bio *bio)
+{
+ if (bio_flagged(bio, BIO_FILTERED))
+ return false;
+
+ bio_set_flag(bio, BIO_FILTERED);
+ return bio->bi_bdev->bd_filter->ops->submit_bio(bio);
+}
+
static void __submit_bio(struct bio *bio)
{
+ /*
+ * If there is a filter driver attached, check if the BIO needs to go to
+ * the filter driver first, which can then pass on the bio or consume it.
+ */
+ if (bio->bi_bdev->bd_filter && submit_bio_filter(bio))
+ return;
+
if (unlikely(!blk_crypto_bio_prep(&bio)))
return;
@@ -677,6 +694,15 @@ static void __submit_bio_noacct_mq(struct bio *bio)
current->bio_list = NULL;
}
+/**
+ * submit_bio_noacct_nocheck - re-submit a bio to the block device layer for
+ * I/O from a block device filter.
+ * @bio: The bio describing the location in memory and on the device.
+ *
+ * This is a version of submit_bio() that shall only be used for I/O that is
+ * resubmitted to lower level by block device filters. All file systems and
+ * other upper level users of the block layer should use submit_bio() instead.
+ */
void submit_bio_noacct_nocheck(struct bio *bio)
{
blk_cgroup_bio_start(bio);
@@ -704,6 +730,7 @@ void submit_bio_noacct_nocheck(struct bio *bio)
else
__submit_bio_noacct(bio);
}
+EXPORT_SYMBOL_GPL(submit_bio_noacct_nocheck);
/**
* submit_bio_noacct - re-submit a bio to the block device layer for I/O
diff --git a/block/blk-filter.c b/block/blk-filter.c
new file mode 100644
index 000000000000..bf31da6acf67
--- /dev/null
+++ b/block/blk-filter.c
@@ -0,0 +1,213 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/* Copyright (C) 2023 Veeam Software Group GmbH */
+#include <linux/blk-filter.h>
+#include <linux/blk-mq.h>
+#include <linux/module.h>
+
+#include "blk.h"
+
+static LIST_HEAD(blkfilters);
+static DEFINE_SPINLOCK(blkfilters_lock);
+
+static inline struct blkfilter_operations *__blkfilter_find(const char *name)
+{
+ struct blkfilter_operations *ops;
+
+ list_for_each_entry(ops, &blkfilters, link)
+ if (strncmp(ops->name, name, BLKFILTER_NAME_LENGTH) == 0)
+ return ops;
+
+ return NULL;
+}
+
+static inline struct blkfilter_operations *blkfilter_find_get(const char *name)
+{
+ struct blkfilter_operations *ops;
+
+ spin_lock(&blkfilters_lock);
+ ops = __blkfilter_find(name);
+ if (ops && !try_module_get(ops->owner))
+ ops = NULL;
+ spin_unlock(&blkfilters_lock);
+
+ return ops;
+}
+
+int blkfilter_ioctl_attach(struct block_device *bdev,
+ struct blkfilter_name __user *argp)
+{
+ struct blkfilter_name name;
+ struct blkfilter_operations *ops;
+ struct blkfilter *flt;
+ int ret;
+
+ if (copy_from_user(&name, argp, sizeof(name)))
+ return -EFAULT;
+
+ ops = blkfilter_find_get(name.name);
+ if (!ops)
+ return -ENOENT;
+
+ ret = freeze_bdev(bdev);
+ if (ret)
+ goto out_put_module;
+ blk_mq_freeze_queue(bdev->bd_queue);
+
+ if (bdev->bd_filter) {
+ if (bdev->bd_filter->ops == ops)
+ ret = -EALREADY;
+ else
+ ret = -EBUSY;
+ goto out_unfreeze;
+ }
+
+ flt = ops->attach(bdev);
+ if (IS_ERR(flt)) {
+ ret = PTR_ERR(flt);
+ goto out_unfreeze;
+ }
+
+ flt->ops = ops;
+ bdev->bd_filter = flt;
+
+out_unfreeze:
+ blk_mq_unfreeze_queue(bdev->bd_queue);
+ thaw_bdev(bdev);
+out_put_module:
+ if (ret)
+ module_put(ops->owner);
+ return ret;
+}
+
+static void __blkfilter_detach(struct block_device *bdev)
+{
+ struct blkfilter *flt = bdev->bd_filter;
+ const struct blkfilter_operations *ops = flt->ops;
+
+ bdev->bd_filter = NULL;
+ ops->detach(flt);
+ module_put(ops->owner);
+}
+
+void blkfilter_detach(struct block_device *bdev)
+{
+ if (bdev->bd_filter) {
+ blk_mq_freeze_queue(bdev->bd_queue);
+ __blkfilter_detach(bdev);
+ blk_mq_unfreeze_queue(bdev->bd_queue);
+ }
+}
+
+int blkfilter_ioctl_detach(struct block_device *bdev,
+ struct blkfilter_name __user *argp)
+{
+ struct blkfilter_name name;
+ int error = 0;
+
+ if (copy_from_user(&name, argp, sizeof(name)))
+ return -EFAULT;
+
+ blk_mq_freeze_queue(bdev->bd_queue);
+ if (!bdev->bd_filter) {
+ error = -ENOENT;
+ goto out_unfreeze;
+ }
+ if (strncmp(bdev->bd_filter->ops->name, name.name,
+ BLKFILTER_NAME_LENGTH)) {
+ error = -EINVAL;
+ goto out_unfreeze;
+ }
+
+ __blkfilter_detach(bdev);
+out_unfreeze:
+ blk_mq_unfreeze_queue(bdev->bd_queue);
+ return error;
+}
+
+int blkfilter_ioctl_ctl(struct block_device *bdev,
+ struct blkfilter_ctl __user *argp)
+{
+ struct blkfilter_ctl ctl;
+ struct blkfilter *flt;
+ int ret;
+
+ if (copy_from_user(&ctl, argp, sizeof(ctl)))
+ return -EFAULT;
+
+ ret = blk_queue_enter(bdev_get_queue(bdev), 0);
+ if (ret)
+ return ret;
+
+ flt = bdev->bd_filter;
+ if (!flt || strncmp(flt->ops->name, ctl.name, BLKFILTER_NAME_LENGTH)) {
+ ret = -ENOENT;
+ goto out_queue_exit;
+ }
+
+ if (!flt->ops->ctl) {
+ ret = -ENOTTY;
+ goto out_queue_exit;
+ }
+
+ ret = flt->ops->ctl(flt, ctl.cmd, u64_to_user_ptr(ctl.opt),
+ &ctl.optlen);
+out_queue_exit:
+ blk_queue_exit(bdev_get_queue(bdev));
+ return ret;
+}
+
+ssize_t blkfilter_show(struct block_device *bdev, char *buf)
+{
+ ssize_t ret = 0;
+
+ blk_mq_freeze_queue(bdev->bd_queue);
+ if (bdev->bd_filter)
+ ret = sprintf(buf, "%s\n", bdev->bd_filter->ops->name);
+ else
+ ret = sprintf(buf, "\n");
+ blk_mq_unfreeze_queue(bdev->bd_queue);
+
+ return ret;
+}
+
+/**
+ * blkfilter_register() - Register block device filter operations
+ * @ops: The operations to register.
+ *
+ * Return:
+ * 0 if succeeded,
+ * -EBUSY if a block device filter with the same name is already
+ * registered.
+ */
+int blkfilter_register(struct blkfilter_operations *ops)
+{
+ struct blkfilter_operations *found;
+ int ret = 0;
+
+ spin_lock(&blkfilters_lock);
+ found = __blkfilter_find(ops->name);
+ if (found)
+ ret = -EBUSY;
+ else
+ list_add_tail(&ops->link, &blkfilters);
+ spin_unlock(&blkfilters_lock);
+
+ return ret;
+}
+EXPORT_SYMBOL_GPL(blkfilter_register);
+
+/**
+ * blkfilter_unregister() - Unregister block device filter operations
+ * @ops: The operations to unregister.
+ *
+ * Important: before unloading, it is necessary to detach the filter from all
+ * block devices.
+ */
+void blkfilter_unregister(struct blkfilter_operations *ops)
+{
+ spin_lock(&blkfilters_lock);
+ list_del(&ops->link);
+ spin_unlock(&blkfilters_lock);
+}
+EXPORT_SYMBOL_GPL(blkfilter_unregister);
diff --git a/block/blk.h b/block/blk.h
index 9582fcd0df41..2c14fa938d8c 100644
--- a/block/blk.h
+++ b/block/blk.h
@@ -7,6 +7,8 @@
#include <xen/xen.h>
#include "blk-crypto-internal.h"
+struct blkfilter_ctl;
+struct blkfilter_name;
struct elevator_type;
/* Max future timer expiry for timeouts */
@@ -454,6 +456,15 @@ long compat_blkdev_ioctl(struct file *file, unsigned cmd, unsigned long arg);
extern const struct address_space_operations def_blk_aops;
+int blkfilter_ioctl_attach(struct block_device *bdev,
+ struct blkfilter_name __user *argp);
+int blkfilter_ioctl_detach(struct block_device *bdev,
+ struct blkfilter_name __user *argp);
+int blkfilter_ioctl_ctl(struct block_device *bdev,
+ struct blkfilter_ctl __user *argp);
+void blkfilter_detach(struct block_device *bdev);
+ssize_t blkfilter_show(struct block_device *bdev, char *buf);
+
int disk_register_independent_access_ranges(struct gendisk *disk);
void disk_unregister_independent_access_ranges(struct gendisk *disk);
diff --git a/block/genhd.c b/block/genhd.c
index 4e5fd6aaa883..d9aca0797886 100644
--- a/block/genhd.c
+++ b/block/genhd.c
@@ -25,6 +25,7 @@
#include <linux/pm_runtime.h>
#include <linux/badblocks.h>
#include <linux/part_stat.h>
+#include <linux/blk-filter.h>
#include "blk-throttle.h"
#include "blk.h"
@@ -648,6 +649,7 @@ void del_gendisk(struct gendisk *disk)
remove_inode_hash(part->bd_inode);
fsync_bdev(part);
__invalidate_device(part, true);
+ blkfilter_detach(part);
}
mutex_unlock(&disk->open_mutex);
@@ -1033,6 +1035,12 @@ static ssize_t diskseq_show(struct device *dev,
return sprintf(buf, "%llu\n", disk->diskseq);
}
+static ssize_t disk_filter_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ return blkfilter_show(dev_to_bdev(dev), buf);
+}
+
static DEVICE_ATTR(range, 0444, disk_range_show, NULL);
static DEVICE_ATTR(ext_range, 0444, disk_ext_range_show, NULL);
static DEVICE_ATTR(removable, 0444, disk_removable_show, NULL);
@@ -1046,6 +1054,7 @@ static DEVICE_ATTR(stat, 0444, part_stat_show, NULL);
static DEVICE_ATTR(inflight, 0444, part_inflight_show, NULL);
static DEVICE_ATTR(badblocks, 0644, disk_badblocks_show, disk_badblocks_store);
static DEVICE_ATTR(diskseq, 0444, diskseq_show, NULL);
+static DEVICE_ATTR(filter, 0444, disk_filter_show, NULL);
#ifdef CONFIG_FAIL_MAKE_REQUEST
ssize_t part_fail_show(struct device *dev,
@@ -1092,6 +1101,7 @@ static struct attribute *disk_attrs[] = {
&dev_attr_events_async.attr,
&dev_attr_events_poll_msecs.attr,
&dev_attr_diskseq.attr,
+ &dev_attr_filter.attr,
#ifdef CONFIG_FAIL_MAKE_REQUEST
&dev_attr_fail.attr,
#endif
diff --git a/block/ioctl.c b/block/ioctl.c
index c7d7d4345edb..170020d1ce0e 100644
--- a/block/ioctl.c
+++ b/block/ioctl.c
@@ -2,6 +2,7 @@
#include <linux/capability.h>
#include <linux/compat.h>
#include <linux/blkdev.h>
+#include <linux/blk-filter.h>
#include <linux/export.h>
#include <linux/gfp.h>
#include <linux/blkpg.h>
@@ -546,6 +547,12 @@ static int blkdev_common_ioctl(struct block_device *bdev, fmode_t mode,
return blkdev_pr_preempt(bdev, argp, true);
case IOC_PR_CLEAR:
return blkdev_pr_clear(bdev, argp);
+ case BLKFILTER_ATTACH:
+ return blkfilter_ioctl_attach(bdev, argp);
+ case BLKFILTER_DETACH:
+ return blkfilter_ioctl_detach(bdev, argp);
+ case BLKFILTER_CTL:
+ return blkfilter_ioctl_ctl(bdev, argp);
default:
return -ENOIOCTLCMD;
}
diff --git a/block/partitions/core.c b/block/partitions/core.c
index 87a21942d606..8e2834566c38 100644
--- a/block/partitions/core.c
+++ b/block/partitions/core.c
@@ -10,6 +10,7 @@
#include <linux/ctype.h>
#include <linux/vmalloc.h>
#include <linux/raid/detect.h>
+#include <linux/blk-filter.h>
#include "check.h"
static int (*const check_part[])(struct parsed_partitions *) = {
@@ -200,6 +201,12 @@ static ssize_t part_discard_alignment_show(struct device *dev,
return sprintf(buf, "%u\n", bdev_discard_alignment(dev_to_bdev(dev)));
}
+static ssize_t part_filter_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ return blkfilter_show(dev_to_bdev(dev), buf);
+}
+
static DEVICE_ATTR(partition, 0444, part_partition_show, NULL);
static DEVICE_ATTR(start, 0444, part_start_show, NULL);
static DEVICE_ATTR(size, 0444, part_size_show, NULL);
@@ -208,6 +215,7 @@ static DEVICE_ATTR(alignment_offset, 0444, part_alignment_offset_show, NULL);
static DEVICE_ATTR(discard_alignment, 0444, part_discard_alignment_show, NULL);
static DEVICE_ATTR(stat, 0444, part_stat_show, NULL);
static DEVICE_ATTR(inflight, 0444, part_inflight_show, NULL);
+static DEVICE_ATTR(filter, 0444, part_filter_show, NULL);
#ifdef CONFIG_FAIL_MAKE_REQUEST
static struct device_attribute dev_attr_fail =
__ATTR(make-it-fail, 0644, part_fail_show, part_fail_store);
@@ -222,6 +230,7 @@ static struct attribute *part_attrs[] = {
&dev_attr_discard_alignment.attr,
&dev_attr_stat.attr,
&dev_attr_inflight.attr,
+ &dev_attr_filter.attr,
#ifdef CONFIG_FAIL_MAKE_REQUEST
&dev_attr_fail.attr,
#endif
@@ -284,6 +293,7 @@ static void delete_partition(struct block_device *part)
fsync_bdev(part);
__invalidate_device(part, true);
+ blkfilter_detach(part);
drop_partition(part);
}
diff --git a/include/linux/blk-filter.h b/include/linux/blk-filter.h
new file mode 100644
index 000000000000..0afdb40f3bab
--- /dev/null
+++ b/include/linux/blk-filter.h
@@ -0,0 +1,51 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright (C) 2023 Veeam Software Group GmbH */
+#ifndef _LINUX_BLK_FILTER_H
+#define _LINUX_BLK_FILTER_H
+
+#include <uapi/linux/blk-filter.h>
+
+struct bio;
+struct block_device;
+struct blkfilter_operations;
+
+/**
+ * struct blkfilter - Block device filter.
+ *
+ * @ops: Block device filter operations.
+ *
+ * For each filtered block device, the filter creates a data structure
+ * associated with this device. The data in this structure is specific to
+ * the filter, but it must contain this struct blkfilter, which links the
+ * device to the filter operations.
+struct blkfilter {
+ const struct blkfilter_operations *ops;
+};
+
+/**
+ * struct blkfilter_operations - Block device filter operations.
+ *
+ * @link: Entry in the global list of filter drivers
+ * (must not be accessed by the driver).
+ * @owner: Module implementing the filter driver.
+ * @name: Name of the filter driver.
+ * @attach: Attach the filter driver to the block device.
+ * @detach: Detach the filter driver from the block device.
+ * @ctl: Send a control command to the filter driver.
+ * @submit_bio: Handle bio submissions to the filter driver.
+ */
+struct blkfilter_operations {
+ struct list_head link;
+ struct module *owner;
+ const char *name;
+ struct blkfilter *(*attach)(struct block_device *bdev);
+ void (*detach)(struct blkfilter *flt);
+ int (*ctl)(struct blkfilter *flt, const unsigned int cmd,
+ __u8 __user *buf, __u32 *plen);
+ bool (*submit_bio)(struct bio *bio);
+};
+
+int blkfilter_register(struct blkfilter_operations *ops);
+void blkfilter_unregister(struct blkfilter_operations *ops);
+
+#endif /* _LINUX_BLK_FILTER_H */
diff --git a/include/linux/blk_types.h b/include/linux/blk_types.h
index deb69eeab6bd..5ba313b3b11d 100644
--- a/include/linux/blk_types.h
+++ b/include/linux/blk_types.h
@@ -75,6 +75,7 @@ struct block_device {
* path
*/
struct device bd_device;
+ struct blkfilter *bd_filter;
} __randomize_layout;
#define bdev_whole(_bdev) \
@@ -341,6 +342,7 @@ enum {
BIO_QOS_MERGED, /* but went through rq_qos merge path */
BIO_REMAPPED,
BIO_ZONE_WRITE_LOCKED, /* Owns a zoned device zone write lock */
+ BIO_FILTERED, /* bio has already been filtered */
BIO_FLAG_LAST
};
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index f4c339d9dd03..e7a4a4866792 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -842,6 +842,7 @@ void blk_request_module(dev_t devt);
extern int blk_register_queue(struct gendisk *disk);
extern void blk_unregister_queue(struct gendisk *disk);
+void submit_bio_noacct_nocheck(struct bio *bio);
void submit_bio_noacct(struct bio *bio);
struct bio *bio_split_to_limits(struct bio *bio);
diff --git a/include/uapi/linux/blk-filter.h b/include/uapi/linux/blk-filter.h
new file mode 100644
index 000000000000..18885dc1b717
--- /dev/null
+++ b/include/uapi/linux/blk-filter.h
@@ -0,0 +1,35 @@
+/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
+/* Copyright (C) 2023 Veeam Software Group GmbH */
+#ifndef _UAPI_LINUX_BLK_FILTER_H
+#define _UAPI_LINUX_BLK_FILTER_H
+
+#include <linux/types.h>
+
+#define BLKFILTER_NAME_LENGTH 32
+
+/**
+ * struct blkfilter_name - parameter for BLKFILTER_ATTACH and BLKFILTER_DETACH
+ * ioctl.
+ *
+ * @name: Name of block device filter.
+ */
+struct blkfilter_name {
+ __u8 name[BLKFILTER_NAME_LENGTH];
+};
+
+/**
+ * struct blkfilter_ctl - parameter for BLKFILTER_CTL ioctl
+ *
+ * @name: Name of block device filter.
+ * @cmd: The filter-specific operation code of the command.
+ * @optlen: Size of data at @opt.
+ * @opt: Userspace buffer with options.
+ */
+struct blkfilter_ctl {
+ __u8 name[BLKFILTER_NAME_LENGTH];
+ __u32 cmd;
+ __u32 optlen;
+ __u64 opt;
+};
+
+#endif /* _UAPI_LINUX_BLK_FILTER_H */
diff --git a/include/uapi/linux/fs.h b/include/uapi/linux/fs.h
index b7b56871029c..7904f157b245 100644
--- a/include/uapi/linux/fs.h
+++ b/include/uapi/linux/fs.h
@@ -189,6 +189,9 @@ struct fsxattr {
* A jump here: 130-136 are reserved for zoned block devices
* (see uapi/linux/blkzoned.h)
*/
+#define BLKFILTER_ATTACH _IOWR(0x12, 140, struct blkfilter_name)
+#define BLKFILTER_DETACH _IOWR(0x12, 141, struct blkfilter_name)
+#define BLKFILTER_CTL _IOWR(0x12, 142, struct blkfilter_ctl)
#define BMAP_IOCTL 1 /* obsolete - kept for compatibility */
#define FIBMAP _IO(0x00,1) /* bmap access */
--
2.20.1
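As a usage sketch, a filter could be attached from user space by filling ``struct blkfilter_name`` and issuing ``BLKFILTER_ATTACH`` on the block device. The structure layout and ioctl number below mirror the header added by this patch; the helper names and the device path are illustrative assumptions:

```c
/* Sketch of attaching a block device filter by name via the proposed
 * BLKFILTER_ATTACH ioctl. The structure and ioctl number mirror the
 * uapi header from this patch; helper names are illustrative. */
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>

#define BLKFILTER_NAME_LENGTH 32

struct blkfilter_name {
	unsigned char name[BLKFILTER_NAME_LENGTH];	/* __u8 in the uapi header */
};

#define BLKFILTER_ATTACH _IOWR(0x12, 140, struct blkfilter_name)

/* Fill the ioctl argument; returns -1 if the name does not fit. */
static int fill_filter_name(struct blkfilter_name *arg, const char *name)
{
	if (strlen(name) >= BLKFILTER_NAME_LENGTH)
		return -1;
	memset(arg, 0, sizeof(*arg));
	memcpy(arg->name, name, strlen(name));
	return 0;
}

/* Attach the named filter to the block device at devpath. */
static int filter_attach(const char *devpath, const char *name)
{
	struct blkfilter_name arg;
	int fd, ret;

	if (fill_filter_name(&arg, name))
		return -1;
	fd = open(devpath, O_RDWR);
	if (fd < 0)
		return -1;
	ret = ioctl(fd, BLKFILTER_ATTACH, &arg);
	close(fd);
	return ret;
}
```

On a real system this would be called as ``filter_attach("/dev/sdX", "blksnap")`` with sufficient privileges.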
The document contains:
* a description of the purpose of the mechanism
* a description of the features
* a description of the algorithms
* recommendations for using the module from user space
* a reference to the module interface description
Reviewed-by: Bagas Sanjaya <[email protected]>
Signed-off-by: Sergei Shtepa <[email protected]>
---
Documentation/block/blksnap.rst | 345 ++++++++++++++++++++++++++++++++
Documentation/block/index.rst | 1 +
MAINTAINERS | 6 +
3 files changed, 352 insertions(+)
create mode 100644 Documentation/block/blksnap.rst
diff --git a/Documentation/block/blksnap.rst b/Documentation/block/blksnap.rst
new file mode 100644
index 000000000000..eb20d466d1ca
--- /dev/null
+++ b/Documentation/block/blksnap.rst
@@ -0,0 +1,345 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+========================================
+Block Devices Snapshots Module (blksnap)
+========================================
+
+Introduction
+============
+
+At first glance, there is no novelty in the idea of creating snapshots for
+block devices. The Linux kernel already has mechanisms for creating snapshots:
+Device Mapper includes dm-snap, which can create snapshots of block devices,
+and BTRFS supports snapshots at the file system level. However, both of these
+options have flaws that prevent their use as a universal tool for creating
+backups.
+
+The main properties that a backup tool should have are:
+
+- Simplicity and versatility of use
+- Reliability
+- Minimal consumption of system resources during backup
+- Minimal time required for recovery or replication of the entire system
+
+Taking the above properties into account, the blksnap module features:
+
+- Change tracker
+- Snapshots at the block device level
+- Dynamic allocation of space for storing differences
+- Snapshot overflow resistance
+- Coherent snapshot of multiple block devices
+
+Features
+========
+
+Change tracker
+--------------
+
+The change tracker makes it possible to determine which blocks were changed
+between the last snapshot and any of the previous snapshots. With a map of
+changes, it is enough to copy only the changed blocks; there is no need to
+reread the entire block device. The change tracker supports the logic of
+both incremental and differential backups. Incremental backup is critical
+for large file repositories, whose size can be hundreds of terabytes and
+whose full backup can take more than a day. On such servers, using backup
+tools without a change tracker is practically impossible.
+
+Snapshot at the block device level
+----------------------------------
+
+A snapshot at the block device level simplifies the backup algorithm and
+reduces the consumption of system resources. It also permits direct linear
+reading of disk space, which achieves the maximum read speed with minimal
+processor time. At the same time, snapshots can be created for any block
+device regardless of the file system located on it. The exceptions are
+BTRFS, ZFS and cluster file systems.
+
+Dynamic allocation of storage space for differences
+---------------------------------------------------
+
+To store differences, the module does not require a pre-reserved block
+device range. A range of sectors can be allocated immediately before
+creating a snapshot, in individual files on a file system. In addition,
+the difference storage can be expanded after the snapshot is created by
+adding new sector ranges on block devices. Sector ranges can be allocated
+on any block device in the system, including those from which the snapshot
+was created. A difference storage shared by all snapshot images optimizes
+the use of disk space.
+
+Snapshot overflow resistance
+----------------------------
+
+To create snapshot images of block devices, the module stores the blocks
+of the original block device that have been changed since the snapshot
+was taken. To do this, the module handles write requests and reads the
+blocks that are about to be overwritten. This algorithm guarantees the
+safety of the original block device's data in the event of a snapshot
+overflow, and even in the case of unpredictable critical errors. If a
+problem occurs during backup, the difference storage is released, the
+snapshot is closed and no backup is created, but the server continues
+to work.
+
+Coherent snapshot of multiple block devices
+-------------------------------------------
+
+A snapshot is created simultaneously for all block devices for which a backup
+is being created, ensuring their coherent state.
+
+
+Algorithms
+==========
+
+Overview
+--------
+
+The blksnap module is a block-level filter. It handles all write I/O units.
+The filter is attached to the block device when the snapshot is created
+for the first time. The change tracker marks all overwritten blocks.
+Information about the history of changes on the block device is available
+while holding the snapshot. The module reads the blocks that need to be
+overwritten and stores them in the difference storage. When reading from
+a snapshot image, reading is performed either from the original device or
+from the difference storage.
+
+Change tracking
+---------------
+
+A change tracker map is created for each block device. One byte
+of this map corresponds to one block. The block size is set by the
+``tracking_block_minimum_shift`` and ``tracking_block_maximum_count``
+module parameters. The ``tracking_block_minimum_shift`` parameter limits
+the minimum block size for tracking, while ``tracking_block_maximum_count``
+defines the maximum allowed number of blocks. The size of the change tracker
+block is chosen based on the size of the block device when the tracking
+device is added, that is, when the snapshot is taken for the first time.
+The block size must be a power of two. The ``tracking_block_maximum_shift``
+module parameter limits the maximum block size for tracking. If the block
+size reaches this limit, the number of blocks may exceed
+``tracking_block_maximum_count``.
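The size selection described above can be illustrated with a small calculation. This is an illustrative reconstruction, not the kernel code; the parameter names follow the module parameters:

```c
/* Illustrative sketch of choosing a tracking block size: grow the block
 * size (a power of two) until the block count fits the maximum count,
 * but never beyond the maximum shift. Not the literal kernel code. */
static unsigned long long tracking_block_size(
		unsigned long long capacity_bytes,
		unsigned int minimum_shift,
		unsigned int maximum_shift,
		unsigned long long maximum_count)
{
	unsigned int shift = minimum_shift;

	while (shift < maximum_shift) {
		/* number of blocks = ceil(capacity / block size) */
		unsigned long long count =
			(capacity_bytes + (1ULL << shift) - 1) >> shift;

		if (count <= maximum_count)
			break;
		shift++;	/* double the block size */
	}
	/* If the shift hit the limit, the count may exceed maximum_count. */
	return 1ULL << shift;
}
```

For a 1 TiB device with a 64 KiB minimum block and a limit of 2^21 blocks, the block size settles at 512 KiB; a sufficiently large device pins the size at the maximum shift.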
+
+Each byte of the change map stores a number from 0 to 255: the number of the
+snapshot since whose creation the block has changed. Each time a snapshot is
+created, the number of the current snapshot is increased by one. This number
+is written to the corresponding cell of the change map when the block is
+written. Thus, knowing the number of one of the previous snapshots and the
+number of the last one, it is possible to determine from the change map
+which blocks have been changed. When the current snapshot number reaches
+the maximum value of 255, the change map is reset to zero at the moment the
+next snapshot is created, and the current snapshot number is set to 1. The
+change tracker is reset, and a new UUID, a unique identifier of the
+snapshot generation, is generated. The snapshot generation identifier makes
+it possible to detect that a change tracking reset has been performed.
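As an illustration of reading such a map, a consumer might interpret the per-block snapshot numbers as follows. The helper names and the exact comparison convention are assumptions for illustration, not the module's API:

```c
/* Sketch of querying a change tracker map: a block is considered changed
 * since snapshot number 'since' if its cell holds that number or a later
 * one. A cell value of 0 means the block was never written. Hypothetical
 * helpers, not the module's API. */
#include <stddef.h>

static int block_changed_since(const unsigned char *cbt_map, size_t block,
			       unsigned char since)
{
	/* 0 means untouched; otherwise compare against the snapshot number */
	return cbt_map[block] && cbt_map[block] >= since;
}

static size_t count_changed_blocks(const unsigned char *cbt_map,
				   size_t nr_blocks, unsigned char since)
{
	size_t i, count = 0;

	for (i = 0; i < nr_blocks; i++)
		if (block_changed_since(cbt_map, i, since))
			count++;
	return count;
}
```

An incremental backup would then copy only the blocks for which ``block_changed_since()`` is true for the previous backup's snapshot number.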
+
+The change map has two copies. One copy is active, it tracks the current
+changes on the block device. The second copy is available for reading
+while the snapshot is being held, and contains the history up to the moment
+the snapshot is taken. Copies are synchronized at the moment of snapshot
+creation. After the snapshot is released, a second copy of the map is not
+needed, but it is not released, so as not to allocate memory for it again
+the next time the snapshot is created.
+
+Copy on write
+-------------
+
+Data is copied in blocks, or rather, in chunks. The term "chunk" is used to
+avoid confusion with change tracker blocks and I/O blocks. In addition,
+"chunk" in the blksnap module means roughly the same as "chunk" in the
+dm-snap module.
+
+The size of a chunk is determined by the ``chunk_minimum_shift`` and
+``chunk_maximum_count`` module parameters. The ``chunk_minimum_shift``
+parameter limits the minimum chunk size, while ``chunk_maximum_count``
+defines the maximum allowed number of chunks. The chunk size is chosen
+based on the size of the block device at the time the snapshot is taken,
+and it must be a power of two. The ``chunk_maximum_shift`` module parameter
+limits the maximum chunk size. If the chunk size reaches this limit, the
+number of chunks may exceed ``chunk_maximum_count``.
+
+One chunk is described by ``struct chunk``. An array of these structures is
+created for each block device. The structure contains all the information
+needed to copy a chunk's data from the original block device to the
+difference storage, and is sufficient to describe the snapshot image. The
+structure also contains a semaphore that synchronizes the threads accessing
+the chunk.
+
+The block layer has a peculiarity: if a read I/O unit is submitted and a
+write I/O unit is submitted after it, the write may be executed first and
+only then the read. Therefore, the copy-on-write algorithm is executed
+synchronously. When a write request is handled, the execution of this I/O
+unit is delayed until the overwritten chunks have been copied to the
+difference storage. However, if handling a write I/O unit reveals that the
+written range of sectors has already been copied to the difference storage,
+the I/O unit is simply passed through.
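The write-path decision can be sketched as a small state check. The chunk states and names here are illustrative, not the module's internals:

```c
/* Sketch of the synchronous copy-on-write decision per chunk. The chunk
 * states and helper names are illustrative, not the module's internals. */
#include <stdbool.h>

enum chunk_state {
	CHUNK_UNTOUCHED,	/* data exists only on the original device */
	CHUNK_IN_PROGRESS,	/* being copied to the difference storage */
	CHUNK_STORED,		/* already copied; writes may pass through */
};

/* Returns true if the write I/O must wait for the chunk to be copied. */
static bool cow_must_delay_write(enum chunk_state state)
{
	switch (state) {
	case CHUNK_UNTOUCHED:
	case CHUNK_IN_PROGRESS:
		return true;	/* copy the old data out first */
	case CHUNK_STORED:
		return false;	/* the range is already in the storage */
	}
	return true;
}
```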
+
+This algorithm makes it possible to efficiently back up systems running a
+round-robin database (RRD). Such databases can be overwritten several times
+during a system backup. Of course, the value of backing up RRD monitoring
+data can be questioned. However, it is a common task to back up the entire
+enterprise infrastructure in order to restore or replicate it entirely in
+case of problems.
+
+The algorithm also has a flaw: when even a single sector is overwritten, an
+entire chunk is copied. Thus, writing data to a block device in small
+portions in random order can fill the difference storage rapidly. This can
+happen when data on the file system is strongly fragmented. However, with
+such fragmentation, system performance usually degrades greatly anyway. In
+practice this problem does not occur on real servers, although it can
+easily be created by artificial tests.
+
+Difference storage
+------------------
+
+The difference storage is a pool of disk space areas shared by all block
+devices in the snapshot. Therefore, there is no need to partition the
+difference storage between block devices, and the difference storage itself
+can be located on different block devices.
+
+There is no need to allocate a large disk space immediately before creating
+a snapshot. Even while the snapshot is being held, the difference storage
+can be expanded. It is enough to have free space on the file system.
+
+Areas of disk space can be allocated on a file system using fallocate(),
+and the file location can be queried using the FIEMAP or FIBMAP ioctl.
+Unfortunately, not all file systems support these mechanisms, but the most
+common ones, XFS, EXT4 and BTRFS, do. BTRFS requires an additional
+conversion of virtual offsets to physical ones.
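A user-space sketch of preparing such an area follows; the file path is an example, error handling is minimal, and only the first extent is queried:

```c
/* Sketch of preparing a difference storage area from user space: allocate
 * file space with posix_fallocate() and query its physical location with
 * the FIEMAP ioctl. The file path is an example. */
#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/stat.h>
#include <sys/ioctl.h>
#include <linux/fs.h>
#include <linux/fiemap.h>

/* Returns the allocated file size, or -1 on error. */
static long long prepare_area(const char *path, long long size)
{
	struct stat st;
	int fd = open(path, O_RDWR | O_CREAT, 0600);

	if (fd < 0)
		return -1;
	if (posix_fallocate(fd, 0, size) || fstat(fd, &st)) {
		close(fd);
		return -1;
	}

	/* Ask the file system where the allocated extents actually are. */
	{
		char buf[sizeof(struct fiemap) + sizeof(struct fiemap_extent)];
		struct fiemap *fm = (struct fiemap *)buf;

		memset(buf, 0, sizeof(buf));
		fm->fm_length = (unsigned long long)size;
		fm->fm_extent_count = 1;
		if (ioctl(fd, FS_IOC_FIEMAP, fm) == 0 && fm->fm_mapped_extents)
			printf("first extent at physical offset %llu\n",
			       (unsigned long long)fm->fm_extents[0].fe_physical);
		else
			printf("FIEMAP not supported on this file system\n");
	}
	close(fd);
	return (long long)st.st_size;
}
```

The physical offsets reported by FIEMAP are what would be handed to the module as sector ranges for the difference storage.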
+
+While holding the snapshot, the user process can poll the status of the module.
+When free space in the difference storage is reduced to a threshold value, the
+module generates an event about it. The user process can prepare a new area
+and pass it to the module to expand the difference storage. The threshold
+value is determined as half of the value of the ``diff_storage_minimum``
+module parameter.
+
+If free space in the difference storage runs out, an event about the
+overflow of the snapshot is generated. Such a snapshot is considered
+corrupted, and read I/O units to the snapshot images fail with an error.
+The difference storage stores only the outdated data required for the
+snapshot images, so when the snapshot overflows, the backup process is
+interrupted, but the system keeps operating without data loss.
+
+Performing I/O for a snapshot image
+-----------------------------------
+
+When a snapshot is taken, block devices of the snapshot images are created,
+from which the snapshot data can be read. The snapshot image block devices
+also support write operations, which makes it possible to perform additional
+data preparation on the file system before creating a backup.
+
+To process an I/O unit, clones of it are created, which redirect the I/O
+either to the original block device or to the difference storage. When the
+processing of the cloned I/O units is completed, the original I/O unit is
+marked as completed too.
+
+An I/O unit can be partially processed without accessing the block devices
+if it refers to a chunk that is queued for storing to the difference
+storage. In this case, the data is read from or written to a buffer in
+memory.
+
+If, when processing a write I/O unit, it turns out that the data of the
+referred chunk has not yet been stored to the difference storage, or has
+not even been read from the original device, a read of that data from the
+original device is initiated first. After the read from the original device
+completes, the data from the write I/O unit partially overwrites the
+chunk's buffer in memory, and the chunk is scheduled to be saved to the
+difference storage.
+
+How to use
+==========
+
+Depending on the needs and the selected license, you can choose different
+options for managing the module:
+
+- Using ioctl directly
+- Using a static C++ library
+- Using the blksnap console tool
+
+Using BLKFILTER_CTL for a block device
+--------------------------------------
+
+BLKFILTER_CTL makes it possible to send a filter-specific command to the
+filter attached to a block device and get the result of its execution. The
+module provides the ``include/uapi/linux/blksnap.h`` header file with a
+description of the commands and their data structures.
+
+1. ``blkfilter_ctl_blksnap_cbtinfo`` returns information from the change
+   tracker.
+2. ``blkfilter_ctl_blksnap_cbtmap`` reads the change tracker table. If a
+   write operation was performed for the snapshot, the change tracker takes
+   this into account. Therefore, the tracker data must be read after the
+   write operations have been completed.
+3. ``blkfilter_ctl_blksnap_cbtdirty`` marks blocks as changed in the change
+   tracker table. This is necessary if post-processing that changes the
+   backed-up blocks is performed after the backup is created.
+4. ``blkfilter_ctl_blksnap_snapshotadd`` adds a block device to the snapshot.
+5. ``blkfilter_ctl_blksnap_snapshotinfo`` returns the name of the snapshot
+   image block device and the presence of an error.
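A request for one of these commands could be prepared as below. The structure mirrors the ``struct blkfilter_ctl`` added earlier in this series; the command code value and the option buffer are placeholders:

```c
/* Sketch of building a BLKFILTER_CTL request for a block device filter.
 * The structure mirrors the uapi header from this series; the command
 * code passed by the caller is filter-specific. */
#include <string.h>
#include <stdint.h>

#define BLKFILTER_NAME_LENGTH 32

struct blkfilter_ctl {
	unsigned char name[BLKFILTER_NAME_LENGTH];	/* __u8 in the uapi header */
	uint32_t cmd;		/* filter-specific operation code */
	uint32_t optlen;	/* size of data at @opt */
	uint64_t opt;		/* user space pointer to the option buffer */
};

/* Fill the control structure; returns -1 if the filter name does not fit. */
static int blkfilter_ctl_prepare(struct blkfilter_ctl *ctl, const char *name,
				 uint32_t cmd, void *opt, uint32_t optlen)
{
	if (strlen(name) >= BLKFILTER_NAME_LENGTH)
		return -1;
	memset(ctl, 0, sizeof(*ctl));
	memcpy(ctl->name, name, strlen(name));
	ctl->cmd = cmd;
	ctl->optlen = optlen;
	ctl->opt = (uintptr_t)opt;
	return 0;
}
```

The prepared structure would then be passed to the block device as ``ioctl(fd, BLKFILTER_CTL, &ctl)``.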
+
+Using ioctl
+-----------
+
+The BLKFILTER_CTL ioctl alone is not sufficient to fully manage the blksnap
+module. A control file, ``blksnap-control``, is created for managing
+snapshots. The control commands are also described in the
+``include/uapi/linux/blksnap.h`` file.
+
+1. ``blksnap_ioctl_version`` gets the version number.
+2. ``blk_snap_ioctl_snapshot_create`` initiates the snapshot creation
+   process.
+3. ``blk_snap_ioctl_snapshot_append_storage`` adds a range of blocks to the
+   difference storage.
+4. ``blk_snap_ioctl_snapshot_take`` creates the block devices of the block
+   device snapshot images.
+5. ``blk_snap_ioctl_snapshot_collect`` collects all created snapshots.
+6. ``blk_snap_ioctl_snapshot_wait_event`` makes it possible to track the
+   status of snapshots and receive events about the need to expand the
+   difference storage or about a snapshot overflow.
+7. ``blk_snap_ioctl_snapshot_destroy`` releases the snapshot.
+
+Static C++ library
+------------------
+
+The [#userspace_libs]_ library was created primarily to simplify the
+creation of tests in C++, and it is also a good example of using the module
+interface. When creating applications, direct use of the control calls is
+preferable. However, the library can be used in an application with a
+GPL-2+ license, or a library with an LGPL-2+ license can be created, with
+which even a proprietary application can be dynamically linked.
+
+blksnap console tool
+--------------------
+
+The blksnap [#userspace_tools]_ console tool makes it possible to control
+the module from the command line. The tool contains detailed built-in help.
+To get a list of commands with usage descriptions, run ``blksnap --help``.
+The ``blksnap <command name> --help`` command provides detailed information
+about the parameters of each command. The tool may be convenient when
+creating proprietary software, as it avoids compiling with the open source
+code. At the same time, the blksnap tool can be used in backup scripts. For
+example, rsync can be called to synchronize the files on the file system of
+the mounted snapshot image with the files in an archive on a file system
+that supports compression.
+
+Tests
+-----
+
+A set of tests was created for regression testing [#userspace_tests]_.
+Tests with simple algorithms that use the ``blksnap`` console tool to
+control the module are written in Bash. More complex testing algorithms
+are implemented in C++.
+
+References
+==========
+
+.. [#userspace_libs] https://github.com/veeam/blksnap/tree/stable-v2.0/lib
+
+.. [#userspace_tools] https://github.com/veeam/blksnap/tree/stable-v2.0/tools
+
+.. [#userspace_tests] https://github.com/veeam/blksnap/tree/stable-v2.0/tests
+
+Module interface description
+============================
+
+.. kernel-doc:: include/uapi/linux/blksnap.h
diff --git a/Documentation/block/index.rst b/Documentation/block/index.rst
index e9712f72cd6d..696ff150c6b7 100644
--- a/Documentation/block/index.rst
+++ b/Documentation/block/index.rst
@@ -11,6 +11,7 @@ Block
biovecs
blk-mq
blkfilter
+ blksnap
cmdline-partition
data-integrity
deadline-iosched
diff --git a/MAINTAINERS b/MAINTAINERS
index 8336b6143a71..c7dabe785cf1 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -3589,6 +3589,12 @@ F: block/blk-filter.c
F: include/linux/blk-filter.h
F: include/uapi/linux/blk-filter.h
+BLOCK DEVICE SNAPSHOTS MODULE
+M: Sergei Shtepa <[email protected]>
+L: [email protected]
+S: Supported
+F: Documentation/block/blksnap.rst
+
BLOCK LAYER
M: Jens Axboe <[email protected]>
L: [email protected]
--
2.20.1
Provides transmission of events from the difference storage to a user
process. Only two events are currently defined. The first reports that there
are few free regions left in the difference storage. The second reports that
a request for a free region for storing differences failed with an error
because there are no free regions left in the difference storage (the
snapshot overflow state).
Co-developed-by: Christoph Hellwig <[email protected]>
Signed-off-by: Christoph Hellwig <[email protected]>
Signed-off-by: Sergei Shtepa <[email protected]>
---
drivers/block/blksnap/event_queue.c | 87 +++++++++++++++++++++++++++++
drivers/block/blksnap/event_queue.h | 65 +++++++++++++++++++++
2 files changed, 152 insertions(+)
create mode 100644 drivers/block/blksnap/event_queue.c
create mode 100644 drivers/block/blksnap/event_queue.h
diff --git a/drivers/block/blksnap/event_queue.c b/drivers/block/blksnap/event_queue.c
new file mode 100644
index 000000000000..5d3bedd63c10
--- /dev/null
+++ b/drivers/block/blksnap/event_queue.c
@@ -0,0 +1,87 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (C) 2023 Veeam Software Group GmbH */
+#define pr_fmt(fmt) KBUILD_MODNAME "-event_queue: " fmt
+
+#include <linux/slab.h>
+#include <linux/sched.h>
+#include "event_queue.h"
+
+void event_queue_init(struct event_queue *event_queue)
+{
+ INIT_LIST_HEAD(&event_queue->list);
+ spin_lock_init(&event_queue->lock);
+ init_waitqueue_head(&event_queue->wq_head);
+}
+
+void event_queue_done(struct event_queue *event_queue)
+{
+ struct event *event;
+
+ spin_lock(&event_queue->lock);
+ while (!list_empty(&event_queue->list)) {
+ event = list_first_entry(&event_queue->list, struct event,
+ link);
+ list_del(&event->link);
+ event_free(event);
+ }
+ spin_unlock(&event_queue->lock);
+}
+
+int event_gen(struct event_queue *event_queue, gfp_t flags, int code,
+ const void *data, int data_size)
+{
+ struct event *event;
+
+ event = kzalloc(sizeof(struct event) + data_size + 1, flags);
+ if (!event)
+ return -ENOMEM;
+
+ event->time = ktime_get();
+ event->code = code;
+ event->data_size = data_size;
+ memcpy(event->data, data, data_size);
+
+ pr_debug("Generate event: time=%lld code=%d data_size=%d\n",
+ event->time, event->code, event->data_size);
+
+ spin_lock(&event_queue->lock);
+ list_add_tail(&event->link, &event_queue->list);
+ spin_unlock(&event_queue->lock);
+
+ wake_up(&event_queue->wq_head);
+ return 0;
+}
+
+struct event *event_wait(struct event_queue *event_queue,
+ unsigned long timeout_ms)
+{
+ int ret;
+
+ ret = wait_event_interruptible_timeout(event_queue->wq_head,
+ !list_empty(&event_queue->list), msecs_to_jiffies(timeout_ms));
+ if (ret >= 0) {
+ struct event *event = ERR_PTR(-ENOENT);
+
+ spin_lock(&event_queue->lock);
+ if (!list_empty(&event_queue->list)) {
+ event = list_first_entry(&event_queue->list,
+ struct event, link);
+ list_del(&event->link);
+ }
+ spin_unlock(&event_queue->lock);
+
+ if (IS_ERR(event))
+ pr_debug("Queue list is empty, timeout_ms=%lu\n", timeout_ms);
+ else
+ pr_debug("Event received: time=%lld code=%d\n",
+ event->time, event->code);
+ return event;
+ }
+ if (ret == -ERESTARTSYS) {
+ pr_debug("event waiting interrupted\n");
+ return ERR_PTR(-EINTR);
+ }
+
+ pr_err("Failed to wait event. errno=%d\n", abs(ret));
+ return ERR_PTR(ret);
+}
diff --git a/drivers/block/blksnap/event_queue.h b/drivers/block/blksnap/event_queue.h
new file mode 100644
index 000000000000..7f1209bbfc98
--- /dev/null
+++ b/drivers/block/blksnap/event_queue.h
@@ -0,0 +1,65 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright (C) 2023 Veeam Software Group GmbH */
+#ifndef __BLKSNAP_EVENT_QUEUE_H
+#define __BLKSNAP_EVENT_QUEUE_H
+
+#include <linux/types.h>
+#include <linux/ktime.h>
+#include <linux/list.h>
+#include <linux/spinlock.h>
+#include <linux/wait.h>
+
+/**
+ * struct event - An event to be passed to the user space.
+ * @link:
+ * The list header allows to combine events from the queue.
+ * @time:
+ * A timestamp indicates when an event occurred.
+ * @code:
+ * Event code.
+ * @data_size:
+ * The number of bytes in the event data array.
+ * @data:
+ * An array of event data. The allowed size of the array is limited
+ * so that the size of the entire structure does not exceed PAGE_SIZE.
+ *
+ * Events can be different, so they carry different data. The size of the
+ * data array is not fixed, but it is limited: the size of the entire
+ * structure must not exceed PAGE_SIZE.
+ */
+struct event {
+ struct list_head link;
+ ktime_t time;
+ int code;
+ int data_size;
+ char data[];
+};
+
+/**
+ * struct event_queue - A queue of &struct event.
+ * @list:
+ * Linked list for storing events.
+ * @lock:
+ * Spinlock allows to guarantee safety of the linked list.
+ * @wq_head:
+ * A wait queue allows to put a user thread in a waiting state until
+ * an event appears in the linked list.
+ */
+struct event_queue {
+ struct list_head list;
+ spinlock_t lock;
+ struct wait_queue_head wq_head;
+};
+
+void event_queue_init(struct event_queue *event_queue);
+void event_queue_done(struct event_queue *event_queue);
+
+int event_gen(struct event_queue *event_queue, gfp_t flags, int code,
+ const void *data, int data_size);
+struct event *event_wait(struct event_queue *event_queue,
+ unsigned long timeout_ms);
+static inline void event_free(struct event *event)
+{
+ kfree(event);
+}
+#endif /* __BLKSNAP_EVENT_QUEUE_H */
--
2.20.1
The struct snapshot combines the block devices for which a snapshot is
created, the block devices of their snapshot images, and a difference
storage.
There may be several snapshots at the same time, but they must not contain
common block devices. This is useful when, for example, a backup is
scheduled once an hour for some block devices, once a day for others, and
once a week for yet others. In this case, up to three snapshots can be in
use at the same time.
Snapshot images of block devices provide read and write operations. They
redirect I/O units to the original block device or to the difference
storage devices.
Co-developed-by: Christoph Hellwig <[email protected]>
Signed-off-by: Christoph Hellwig <[email protected]>
Signed-off-by: Sergei Shtepa <[email protected]>
---
drivers/block/blksnap/snapimage.c | 124 +++++++++
drivers/block/blksnap/snapimage.h | 10 +
drivers/block/blksnap/snapshot.c | 443 ++++++++++++++++++++++++++++++
drivers/block/blksnap/snapshot.h | 68 +++++
4 files changed, 645 insertions(+)
create mode 100644 drivers/block/blksnap/snapimage.c
create mode 100644 drivers/block/blksnap/snapimage.h
create mode 100644 drivers/block/blksnap/snapshot.c
create mode 100644 drivers/block/blksnap/snapshot.h
diff --git a/drivers/block/blksnap/snapimage.c b/drivers/block/blksnap/snapimage.c
new file mode 100644
index 000000000000..3ed46d5e8d92
--- /dev/null
+++ b/drivers/block/blksnap/snapimage.c
@@ -0,0 +1,124 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (C) 2023 Veeam Software Group GmbH */
+/*
+ * Present the snapshot image as a block device.
+ */
+#define pr_fmt(fmt) KBUILD_MODNAME "-image: " fmt
+#include <linux/slab.h>
+#include <linux/cdrom.h>
+#include <linux/blk-mq.h>
+#include <linux/build_bug.h>
+#include <uapi/linux/blksnap.h>
+#include "snapimage.h"
+#include "tracker.h"
+#include "chunk.h"
+#include "cbt_map.h"
+
+/*
+ * The snapshot supports write operations. This makes it possible, for
+ * example, to delete some files from the file system before backing up
+ * the volume. The written data can be stored only in the difference
+ * storage. Therefore, before such data is partially overwritten, it must
+ * be read from the original block device.
+ */
+static void snapimage_submit_bio(struct bio *bio)
+{
+ struct tracker *tracker = bio->bi_bdev->bd_disk->private_data;
+ struct diff_area *diff_area = tracker->diff_area;
+
+ /*
+ * The diff_area is not blocked from releasing now, because
+ * snapimage_free() is calling before diff_area_put() in
+ * tracker_release_snapshot().
+ */
+ if (diff_area_is_corrupted(diff_area)) {
+ bio_io_error(bio);
+ return;
+ }
+
+ /*
+ * The change tracking table should represent that the snapshot data
+ * has been changed.
+ */
+ if (op_is_write(bio_op(bio)))
+ cbt_map_set_both(tracker->cbt_map, bio->bi_iter.bi_sector,
+ bio_sectors(bio));
+
+ while (bio->bi_iter.bi_size) {
+ if (!diff_area_submit_chunk(diff_area, bio)) {
+ bio_io_error(bio);
+ return;
+ }
+ }
+ bio_endio(bio);
+}
+
+static const struct block_device_operations bd_ops = {
+ .owner = THIS_MODULE,
+ .submit_bio = snapimage_submit_bio,
+};
+
+void snapimage_free(struct tracker *tracker)
+{
+ struct gendisk *disk = tracker->snap_disk;
+
+ if (!disk)
+ return;
+
+ pr_debug("Snapshot image disk %s delete\n", disk->disk_name);
+ del_gendisk(disk);
+ put_disk(disk);
+
+ tracker->snap_disk = NULL;
+}
+
+int snapimage_create(struct tracker *tracker)
+{
+ int ret = 0;
+ dev_t dev_id = tracker->dev_id;
+ struct gendisk *disk;
+
+ pr_info("Create snapshot image device for original device [%u:%u]\n",
+ MAJOR(dev_id), MINOR(dev_id));
+
+ disk = blk_alloc_disk(NUMA_NO_NODE);
+ if (!disk) {
+ pr_err("Failed to allocate disk\n");
+ return -ENOMEM;
+ }
+
+ disk->flags = GENHD_FL_NO_PART;
+ disk->fops = &bd_ops;
+ disk->private_data = tracker;
+ set_capacity(disk, tracker->cbt_map->device_capacity);
+ ret = snprintf(disk->disk_name, DISK_NAME_LEN, "%s_%d:%d",
+ BLKSNAP_IMAGE_NAME, MAJOR(dev_id), MINOR(dev_id));
+ if (ret < 0) {
+ pr_err("Unable to set disk name for snapshot image device: invalid device id [%d:%d]\n",
+ MAJOR(dev_id), MINOR(dev_id));
+ ret = -EINVAL;
+ goto fail_cleanup_disk;
+ }
+ pr_debug("Snapshot image disk name [%s]\n", disk->disk_name);
+
+ blk_queue_physical_block_size(disk->queue,
+ tracker->diff_area->physical_blksz);
+ blk_queue_logical_block_size(disk->queue,
+ tracker->diff_area->logical_blksz);
+
+ ret = add_disk(disk);
+ if (ret) {
+ pr_err("Failed to add disk [%s] for snapshot image device\n",
+ disk->disk_name);
+ goto fail_cleanup_disk;
+ }
+ tracker->snap_disk = disk;
+
+ pr_debug("Image block device [%d:%d] has been created\n",
+ disk->major, disk->first_minor);
+
+ return 0;
+
+fail_cleanup_disk:
+ put_disk(disk);
+ return ret;
+}
diff --git a/drivers/block/blksnap/snapimage.h b/drivers/block/blksnap/snapimage.h
new file mode 100644
index 000000000000..cb2df7019eb8
--- /dev/null
+++ b/drivers/block/blksnap/snapimage.h
@@ -0,0 +1,10 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright (C) 2023 Veeam Software Group GmbH */
+#ifndef __BLKSNAP_SNAPIMAGE_H
+#define __BLKSNAP_SNAPIMAGE_H
+
+struct tracker;
+
+void snapimage_free(struct tracker *tracker);
+int snapimage_create(struct tracker *tracker);
+#endif /* __BLKSNAP_SNAPIMAGE_H */
diff --git a/drivers/block/blksnap/snapshot.c b/drivers/block/blksnap/snapshot.c
new file mode 100644
index 000000000000..4d63f38f1ec4
--- /dev/null
+++ b/drivers/block/blksnap/snapshot.c
@@ -0,0 +1,443 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (C) 2023 Veeam Software Group GmbH */
+#define pr_fmt(fmt) KBUILD_MODNAME "-snapshot: " fmt
+
+#include <linux/slab.h>
+#include <linux/sched/mm.h>
+#include <linux/build_bug.h>
+#include <uapi/linux/blksnap.h>
+#include "snapshot.h"
+#include "tracker.h"
+#include "diff_storage.h"
+#include "diff_area.h"
+#include "snapimage.h"
+#include "cbt_map.h"
+
+static LIST_HEAD(snapshots);
+static DECLARE_RWSEM(snapshots_lock);
+
+static void snapshot_free(struct kref *kref)
+{
+ struct snapshot *snapshot = container_of(kref, struct snapshot, kref);
+
+ pr_info("Release snapshot %pUb\n", &snapshot->id);
+
+ while (!list_empty(&snapshot->trackers)) {
+ struct tracker *tracker;
+
+ tracker = list_first_entry(&snapshot->trackers, struct tracker,
+ link);
+ list_del_init(&tracker->link);
+ tracker_release_snapshot(tracker);
+ tracker_put(tracker);
+ }
+
+ diff_storage_put(snapshot->diff_storage);
+ snapshot->diff_storage = NULL;
+
+ kfree(snapshot);
+}
+
+static inline void snapshot_get(struct snapshot *snapshot)
+{
+ kref_get(&snapshot->kref);
+}
+static inline void snapshot_put(struct snapshot *snapshot)
+{
+ if (likely(snapshot))
+ kref_put(&snapshot->kref, snapshot_free);
+}
+
+static struct snapshot *snapshot_new(void)
+{
+ int ret;
+ struct snapshot *snapshot = NULL;
+
+ snapshot = kzalloc(sizeof(struct snapshot), GFP_KERNEL);
+ if (!snapshot)
+ return ERR_PTR(-ENOMEM);
+
+ snapshot->diff_storage = diff_storage_new();
+ if (!snapshot->diff_storage) {
+ ret = -ENOMEM;
+ goto fail_free_snapshot;
+ }
+
+ INIT_LIST_HEAD(&snapshot->link);
+ kref_init(&snapshot->kref);
+ uuid_gen(&snapshot->id);
+ init_rwsem(&snapshot->rw_lock);
+ snapshot->is_taken = false;
+ INIT_LIST_HEAD(&snapshot->trackers);
+
+ return snapshot;
+
+fail_free_snapshot:
+ kfree(snapshot);
+
+ return ERR_PTR(ret);
+}
+
+void __exit snapshot_done(void)
+{
+ struct snapshot *snapshot;
+
+ pr_debug("Cleanup snapshots\n");
+ do {
+ down_write(&snapshots_lock);
+ snapshot = list_first_entry_or_null(&snapshots, struct snapshot,
+ link);
+ if (snapshot)
+ list_del(&snapshot->link);
+ up_write(&snapshots_lock);
+
+ snapshot_put(snapshot);
+ } while (snapshot);
+}
+
+int snapshot_create(uuid_t *id)
+{
+ struct snapshot *snapshot = NULL;
+
+ snapshot = snapshot_new();
+ if (IS_ERR(snapshot)) {
+ pr_err("Unable to create snapshot: failed to allocate snapshot structure\n");
+ return PTR_ERR(snapshot);
+ }
+
+ uuid_copy(id, &snapshot->id);
+
+ down_write(&snapshots_lock);
+ list_add_tail(&snapshot->link, &snapshots);
+ up_write(&snapshots_lock);
+
+ pr_info("Snapshot %pUb was created\n", id);
+ return 0;
+}
+
+static struct snapshot *snapshot_get_by_id(const uuid_t *id)
+{
+ struct snapshot *snapshot = NULL;
+ struct snapshot *s;
+
+ down_read(&snapshots_lock);
+ if (list_empty(&snapshots))
+ goto out;
+
+ list_for_each_entry(s, &snapshots, link) {
+ if (uuid_equal(&s->id, id)) {
+ snapshot = s;
+ snapshot_get(snapshot);
+ break;
+ }
+ }
+out:
+ up_read(&snapshots_lock);
+ return snapshot;
+}
+
+int snapshot_add_device(const uuid_t *id, struct tracker *tracker)
+{
+ int ret = 0;
+ struct snapshot *snapshot = NULL;
+
+ snapshot = snapshot_get_by_id(id);
+ if (!snapshot)
+ return -ESRCH;
+
+ down_write(&snapshot->rw_lock);
+ if (!list_empty(&snapshot->trackers)) {
+ struct tracker *tr;
+
+ list_for_each_entry(tr, &snapshot->trackers, link) {
+ if ((tr == tracker) ||
+ (tr->dev_id == tracker->dev_id)) {
+ ret = -EALREADY;
+ break;
+ }
+ }
+ }
+ if (!ret) {
+ if (list_empty(&tracker->link)) {
+ tracker_get(tracker);
+ list_add_tail(&tracker->link, &snapshot->trackers);
+ } else
+ ret = -EBUSY;
+ }
+ up_write(&snapshot->rw_lock);
+
+ snapshot_put(snapshot);
+
+ return ret;
+}
+
+int snapshot_destroy(const uuid_t *id)
+{
+ struct snapshot *snapshot = NULL;
+
+ pr_info("Destroy snapshot %pUb\n", id);
+ down_write(&snapshots_lock);
+ if (!list_empty(&snapshots)) {
+ struct snapshot *s = NULL;
+
+ list_for_each_entry(s, &snapshots, link) {
+ if (uuid_equal(&s->id, id)) {
+ snapshot = s;
+ list_del(&snapshot->link);
+ break;
+ }
+ }
+ }
+ up_write(&snapshots_lock);
+
+ if (!snapshot) {
+ pr_err("Unable to destroy snapshot: cannot find snapshot by id %pUb\n",
+ id);
+ return -ENODEV;
+ }
+ snapshot_put(snapshot);
+
+ return 0;
+}
+
+int snapshot_append_storage(const uuid_t *id, const char *bdev_path,
+ struct blksnap_sectors __user *ranges,
+ unsigned int range_count)
+{
+ int ret = 0;
+ struct snapshot *snapshot;
+
+ snapshot = snapshot_get_by_id(id);
+ if (!snapshot)
+ return -ESRCH;
+
+ ret = diff_storage_append_block(snapshot->diff_storage, bdev_path,
+ ranges, range_count);
+ snapshot_put(snapshot);
+ return ret;
+}
+
+static int snapshot_take_trackers(struct snapshot *snapshot)
+{
+ int ret = 0;
+ struct tracker *tracker;
+
+ down_write(&snapshot->rw_lock);
+
+ if (list_empty(&snapshot->trackers)) {
+ ret = -ENODEV;
+ goto fail;
+ }
+
+ list_for_each_entry(tracker, &snapshot->trackers, link) {
+ struct diff_area *diff_area =
+ diff_area_new(tracker->dev_id, snapshot->diff_storage);
+
+ if (IS_ERR(diff_area)) {
+ ret = PTR_ERR(diff_area);
+ break;
+ }
+ tracker->diff_area = diff_area;
+ }
+ if (ret)
+ goto fail;
+
+ /* Try to flush and freeze file system on each original block device. */
+ list_for_each_entry(tracker, &snapshot->trackers, link) {
+ if (freeze_bdev(tracker->diff_area->orig_bdev))
+ pr_warn("Failed to freeze device [%u:%u]\n",
+ MAJOR(tracker->dev_id), MINOR(tracker->dev_id));
+ else
+ pr_debug("Device [%u:%u] was frozen\n",
+ MAJOR(tracker->dev_id), MINOR(tracker->dev_id));
+ }
+
+ /*
+ * Take snapshot - switch CBT tables and enable COW logic
+ * for each tracker.
+ */
+ list_for_each_entry(tracker, &snapshot->trackers, link) {
+ ret = tracker_take_snapshot(tracker);
+ if (ret) {
+ pr_err("Unable to take snapshot: failed to capture snapshot %pUb\n",
+ &snapshot->id);
+ break;
+ }
+ }
+
+ if (!ret)
+ snapshot->is_taken = true;
+
+ /* Thaw file systems on original block devices. */
+ list_for_each_entry(tracker, &snapshot->trackers, link) {
+ if (thaw_bdev(tracker->diff_area->orig_bdev))
+ pr_warn("Failed to thaw device [%u:%u]\n",
+ MAJOR(tracker->dev_id), MINOR(tracker->dev_id));
+ else
+ pr_debug("Device [%u:%u] was unfrozen\n",
+ MAJOR(tracker->dev_id), MINOR(tracker->dev_id));
+ }
+fail:
+ if (ret) {
+ list_for_each_entry(tracker, &snapshot->trackers, link) {
+ if (tracker->diff_area) {
+ diff_area_put(tracker->diff_area);
+ tracker->diff_area = NULL;
+ }
+ }
+ }
+ up_write(&snapshot->rw_lock);
+ return ret;
+}
+
+/*
+ * Sometimes a snapshot can be in a corrupted state right after it has
+ * been taken.
+ */
+static int snapshot_check_trackers(struct snapshot *snapshot)
+{
+ int ret = 0;
+ struct tracker *tracker;
+
+ down_read(&snapshot->rw_lock);
+
+ list_for_each_entry(tracker, &snapshot->trackers, link) {
+ if (unlikely(diff_area_is_corrupted(tracker->diff_area))) {
+ pr_err("Unable to create snapshot for device [%u:%u]: diff area is corrupted\n",
+ MAJOR(tracker->dev_id), MINOR(tracker->dev_id));
+ ret = -EFAULT;
+ break;
+ }
+ }
+
+ up_read(&snapshot->rw_lock);
+
+ return ret;
+}
+
+/*
+ * Create all image block devices.
+ */
+static int snapshot_take_images(struct snapshot *snapshot)
+{
+ int ret = 0;
+ struct tracker *tracker;
+
+ down_write(&snapshot->rw_lock);
+
+ list_for_each_entry(tracker, &snapshot->trackers, link) {
+ ret = snapimage_create(tracker);
+
+ if (ret) {
+ pr_err("Failed to create snapshot image for device [%u:%u] with error=%d\n",
+ MAJOR(tracker->dev_id), MINOR(tracker->dev_id),
+ ret);
+ break;
+ }
+ }
+
+ up_write(&snapshot->rw_lock);
+ return ret;
+}
+
+static int snapshot_release_trackers(struct snapshot *snapshot)
+{
+ int ret = 0;
+ struct tracker *tracker;
+
+ down_write(&snapshot->rw_lock);
+
+ list_for_each_entry(tracker, &snapshot->trackers, link)
+ tracker_release_snapshot(tracker);
+
+ up_write(&snapshot->rw_lock);
+ return ret;
+}
+
+int snapshot_take(const uuid_t *id)
+{
+ int ret = 0;
+ struct snapshot *snapshot;
+
+ snapshot = snapshot_get_by_id(id);
+ if (!snapshot)
+ return -ESRCH;
+
+ if (!snapshot->is_taken) {
+ ret = snapshot_take_trackers(snapshot);
+ if (!ret) {
+ ret = snapshot_check_trackers(snapshot);
+ if (!ret)
+ ret = snapshot_take_images(snapshot);
+ }
+
+ if (ret)
+ snapshot_release_trackers(snapshot);
+ } else
+ ret = -EALREADY;
+
+ if (ret)
+ pr_err("Unable to take snapshot %pUb\n", &snapshot->id);
+ else
+ pr_info("Snapshot %pUb was taken successfully\n",
+ &snapshot->id);
+
+ snapshot_put(snapshot);
+ return ret;
+}
+
+int snapshot_collect(unsigned int *pcount,
+ struct blksnap_uuid __user *id_array)
+{
+ int ret = 0;
+ int inx = 0;
+ struct snapshot *s;
+
+ pr_debug("Collect snapshots\n");
+
+ down_read(&snapshots_lock);
+ if (list_empty(&snapshots))
+ goto out;
+
+ if (!id_array) {
+ list_for_each_entry(s, &snapshots, link)
+ inx++;
+ goto out;
+ }
+
+ list_for_each_entry(s, &snapshots, link) {
+ if (inx >= *pcount) {
+ ret = -ENODATA;
+ goto out;
+ }
+
+ if (copy_to_user(id_array[inx].b, &s->id.b, sizeof(uuid_t))) {
+ pr_err("Unable to collect snapshots: failed to copy data to user buffer\n");
+ ret = -EFAULT;
+ goto out;
+ }
+
+ inx++;
+ }
+out:
+ up_read(&snapshots_lock);
+ *pcount = inx;
+ return ret;
+}
+
+struct event *snapshot_wait_event(const uuid_t *id, unsigned long timeout_ms)
+{
+ struct snapshot *snapshot;
+ struct event *event;
+
+ snapshot = snapshot_get_by_id(id);
+ if (!snapshot)
+ return ERR_PTR(-ESRCH);
+
+ event = event_wait(&snapshot->diff_storage->event_queue, timeout_ms);
+
+ snapshot_put(snapshot);
+ return event;
+}
diff --git a/drivers/block/blksnap/snapshot.h b/drivers/block/blksnap/snapshot.h
new file mode 100644
index 000000000000..bfc3139aa89e
--- /dev/null
+++ b/drivers/block/blksnap/snapshot.h
@@ -0,0 +1,68 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright (C) 2023 Veeam Software Group GmbH */
+#ifndef __BLKSNAP_SNAPSHOT_H
+#define __BLKSNAP_SNAPSHOT_H
+
+#include <linux/types.h>
+#include <linux/list.h>
+#include <linux/mm.h>
+#include <linux/kref.h>
+#include <linux/uuid.h>
+#include <linux/spinlock.h>
+#include <linux/rwsem.h>
+#include <linux/fs.h>
+#include "event_queue.h"
+
+struct tracker;
+struct diff_storage;
+/**
+ * struct snapshot - Snapshot structure.
+ * @link:
+ * The list head allows snapshots to be stored in a linked list.
+ * @kref:
+ * Protects the structure from being released during the processing of
+ * an ioctl.
+ * @id:
+ * UUID of snapshot.
+ * @rw_lock:
+ * Protects the structure from being modified by different threads.
+ * @is_taken:
+ * Flag indicating that the snapshot has been taken.
+ * @diff_storage:
+ * A pointer to the difference storage of this snapshot.
+ * @trackers:
+ * List of block device trackers.
+ *
+ * A snapshot corresponds to a single backup session and provides snapshot
+ * images for multiple block devices. Several backup sessions can be
+ * performed simultaneously, which means that several snapshots can exist
+ * at the same time. However, an original block device can belong to only
+ * one snapshot. Creating multiple snapshots from the same block device is
+ * not allowed.
+ */
+struct snapshot {
+ struct list_head link;
+ struct kref kref;
+ uuid_t id;
+
+ struct rw_semaphore rw_lock;
+
+ bool is_taken;
+ struct diff_storage *diff_storage;
+ struct list_head trackers;
+};
+
+void __exit snapshot_done(void);
+
+int snapshot_create(uuid_t *id);
+int snapshot_destroy(const uuid_t *id);
+int snapshot_add_device(const uuid_t *id, struct tracker *tracker);
+int snapshot_append_storage(const uuid_t *id, const char *bdev_path,
+ struct blksnap_sectors __user *ranges,
+ unsigned int range_count);
+int snapshot_take(const uuid_t *id);
+int snapshot_collect(unsigned int *pcount,
+ struct blksnap_uuid __user *id_array);
+struct event *snapshot_wait_event(const uuid_t *id, unsigned long timeout_ms);
+
+#endif /* __BLKSNAP_SNAPSHOT_H */
--
2.20.1
The struct chunk describes the minimum data storage unit of the original
block device. The functions operating on these minimal blocks implement
the algorithms for reading and writing blocks.
Co-developed-by: Christoph Hellwig <[email protected]>
Signed-off-by: Christoph Hellwig <[email protected]>
Signed-off-by: Sergei Shtepa <[email protected]>
---
drivers/block/blksnap/chunk.c | 454 ++++++++++++++++++++++++++++++++++
drivers/block/blksnap/chunk.h | 114 +++++++++
2 files changed, 568 insertions(+)
create mode 100644 drivers/block/blksnap/chunk.c
create mode 100644 drivers/block/blksnap/chunk.h
diff --git a/drivers/block/blksnap/chunk.c b/drivers/block/blksnap/chunk.c
new file mode 100644
index 000000000000..fe1e9b0e3323
--- /dev/null
+++ b/drivers/block/blksnap/chunk.c
@@ -0,0 +1,454 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (C) 2023 Veeam Software Group GmbH */
+#define pr_fmt(fmt) KBUILD_MODNAME "-chunk: " fmt
+
+#include <linux/blkdev.h>
+#include <linux/slab.h>
+#include "chunk.h"
+#include "diff_buffer.h"
+#include "diff_storage.h"
+#include "params.h"
+
+struct chunk_bio {
+ struct work_struct work;
+ struct list_head chunks;
+ struct bio *orig_bio;
+ struct bvec_iter orig_iter;
+ struct bio bio;
+};
+
+static struct bio_set chunk_io_bioset;
+static struct bio_set chunk_clone_bioset;
+
+static inline sector_t chunk_sector(struct chunk *chunk)
+{
+ return (sector_t)(chunk->number)
+ << (chunk->diff_area->chunk_shift - SECTOR_SHIFT);
+}
+
+void chunk_store_failed(struct chunk *chunk, int error)
+{
+ struct diff_area *diff_area = diff_area_get(chunk->diff_area);
+
+ WARN_ON_ONCE(chunk->state != CHUNK_ST_NEW &&
+ chunk->state != CHUNK_ST_IN_MEMORY);
+ chunk->state = CHUNK_ST_FAILED;
+
+ if (likely(chunk->diff_buffer)) {
+ diff_buffer_release(diff_area, chunk->diff_buffer);
+ chunk->diff_buffer = NULL;
+ }
+ diff_storage_free_region(chunk->diff_region);
+ chunk->diff_region = NULL;
+
+ chunk_up(chunk);
+ if (error)
+ diff_area_set_corrupted(diff_area, error);
+ diff_area_put(diff_area);
+}
+
+static void chunk_schedule_storing(struct chunk *chunk)
+{
+ struct diff_area *diff_area = diff_area_get(chunk->diff_area);
+ int queue_count;
+
+ WARN_ON_ONCE(chunk->state != CHUNK_ST_NEW &&
+ chunk->state != CHUNK_ST_STORED);
+ chunk->state = CHUNK_ST_IN_MEMORY;
+
+ spin_lock(&diff_area->store_queue_lock);
+ list_add_tail(&chunk->link, &diff_area->store_queue);
+ queue_count = atomic_inc_return(&diff_area->store_queue_count);
+ spin_unlock(&diff_area->store_queue_lock);
+
+ chunk_up(chunk);
+
+ /* Initiate the queue clearing process */
+ if (queue_count > get_chunk_maximum_in_queue())
+ queue_work(system_wq, &diff_area->store_queue_work);
+ diff_area_put(diff_area);
+}
+
+void chunk_copy_bio(struct chunk *chunk, struct bio *bio,
+ struct bvec_iter *iter)
+{
+ unsigned int chunk_ofs, chunk_left;
+
+ chunk_ofs = (iter->bi_sector - chunk_sector(chunk)) << SECTOR_SHIFT;
+ chunk_left = chunk->diff_buffer->size - chunk_ofs;
+ while (chunk_left && iter->bi_size) {
+ struct bio_vec bvec = bio_iter_iovec(bio, *iter);
+ unsigned int page_ofs = offset_in_page(chunk_ofs);
+ struct page *page;
+ unsigned int len;
+
+ page = chunk->diff_buffer->pages[chunk_ofs >> PAGE_SHIFT];
+ len = min3(bvec.bv_len,
+ chunk_left,
+ (unsigned int)PAGE_SIZE - page_ofs);
+
+ if (op_is_write(bio_op(bio))) {
+ /* from bio to buffer */
+ memcpy_page(page, page_ofs,
+ bvec.bv_page, bvec.bv_offset,
+ len);
+ } else {
+ /* from buffer to bio */
+ memcpy_page(bvec.bv_page, bvec.bv_offset,
+ page, page_ofs,
+ len);
+ }
+
+ chunk_ofs += len;
+ chunk_left -= len;
+ bio_advance_iter_single(bio, iter, len);
+ }
+}
+
+static void chunk_clone_endio(struct bio *bio)
+{
+ struct bio *orig_bio = bio->bi_private;
+
+ if (unlikely(bio->bi_status != BLK_STS_OK))
+ bio_io_error(orig_bio);
+ else
+ bio_endio(orig_bio);
+}
+
+static inline sector_t chunk_offset(struct chunk *chunk, struct bio *bio)
+{
+ return bio->bi_iter.bi_sector - chunk_sector(chunk);
+}
+
+static inline void chunk_limit_iter(struct chunk *chunk, struct bio *bio,
+ sector_t sector, struct bvec_iter *iter)
+{
+ sector_t chunk_ofs = chunk_offset(chunk, bio);
+
+ iter->bi_sector = sector + chunk_ofs;
+ iter->bi_size = min_t(unsigned int,
+ bio->bi_iter.bi_size,
+ (chunk->sector_count - chunk_ofs) << SECTOR_SHIFT);
+}
+
+static inline unsigned int chunk_limit(struct chunk *chunk, struct bio *bio)
+{
+ unsigned int chunk_ofs, chunk_left;
+
+ chunk_ofs = (unsigned int)chunk_offset(chunk, bio) << SECTOR_SHIFT;
+ chunk_left = chunk->diff_buffer->size - chunk_ofs;
+
+ return min(bio->bi_iter.bi_size, chunk_left);
+}
+
+struct bio *chunk_alloc_clone(struct block_device *bdev, struct bio *bio)
+{
+ return bio_alloc_clone(bdev, bio, GFP_NOIO, &chunk_clone_bioset);
+}
+
+void chunk_clone_bio(struct chunk *chunk, struct bio *bio)
+{
+ struct bio *new_bio;
+ struct block_device *bdev;
+ sector_t sector;
+
+ if (chunk->state == CHUNK_ST_STORED) {
+ bdev = chunk->diff_region->bdev;
+ sector = chunk->diff_region->sector;
+ } else {
+ bdev = chunk->diff_area->orig_bdev;
+ sector = chunk_sector(chunk);
+ }
+
+ new_bio = chunk_alloc_clone(bdev, bio);
+ WARN_ON(!new_bio);
+
+ chunk_limit_iter(chunk, bio, sector, &new_bio->bi_iter);
+ bio_set_flag(new_bio, BIO_FILTERED);
+ new_bio->bi_end_io = chunk_clone_endio;
+ new_bio->bi_private = bio;
+
+ bio_advance(bio, new_bio->bi_iter.bi_size);
+ bio_inc_remaining(bio);
+
+ submit_bio_noacct(new_bio);
+}
+
+static inline struct chunk *get_chunk_from_cbio(struct chunk_bio *cbio)
+{
+ struct chunk *chunk = list_first_entry_or_null(&cbio->chunks,
+ struct chunk, link);
+
+ if (chunk)
+ list_del_init(&chunk->link);
+ return chunk;
+}
+
+static void notify_load_and_schedule_io(struct work_struct *work)
+{
+ struct chunk_bio *cbio = container_of(work, struct chunk_bio, work);
+ struct chunk *chunk;
+
+ while ((chunk = get_chunk_from_cbio(cbio))) {
+ if (unlikely(cbio->bio.bi_status != BLK_STS_OK)) {
+ chunk_store_failed(chunk, -EIO);
+ continue;
+ }
+ if (chunk->state == CHUNK_ST_FAILED) {
+ chunk_up(chunk);
+ continue;
+ }
+
+ chunk_copy_bio(chunk, cbio->orig_bio, &cbio->orig_iter);
+ bio_endio(cbio->orig_bio);
+
+ chunk_schedule_storing(chunk);
+ }
+
+ bio_put(&cbio->bio);
+}
+
+static void notify_load_and_postpone_io(struct work_struct *work)
+{
+ struct chunk_bio *cbio = container_of(work, struct chunk_bio, work);
+ struct chunk *chunk;
+
+ while ((chunk = get_chunk_from_cbio(cbio))) {
+ if (unlikely(cbio->bio.bi_status != BLK_STS_OK)) {
+ chunk_store_failed(chunk, -EIO);
+ continue;
+ }
+ if (chunk->state == CHUNK_ST_FAILED) {
+ chunk_up(chunk);
+ continue;
+ }
+
+ chunk_schedule_storing(chunk);
+ }
+
+ /* submit the original bio fed into the tracker */
+ submit_bio_noacct_nocheck(cbio->orig_bio);
+ bio_put(&cbio->bio);
+}
+
+static void chunk_notify_store(struct work_struct *work)
+{
+ struct chunk_bio *cbio = container_of(work, struct chunk_bio, work);
+ struct chunk *chunk;
+
+ while ((chunk = get_chunk_from_cbio(cbio))) {
+ if (unlikely(cbio->bio.bi_status != BLK_STS_OK)) {
+ chunk_store_failed(chunk, -EIO);
+ continue;
+ }
+
+ WARN_ON_ONCE(chunk->state != CHUNK_ST_IN_MEMORY);
+ chunk->state = CHUNK_ST_STORED;
+
+ if (chunk->diff_buffer) {
+ diff_buffer_release(chunk->diff_area,
+ chunk->diff_buffer);
+ chunk->diff_buffer = NULL;
+ }
+ chunk_up(chunk);
+ }
+
+ bio_put(&cbio->bio);
+}
+
+static void chunk_io_endio(struct bio *bio)
+{
+ struct chunk_bio *cbio = container_of(bio, struct chunk_bio, bio);
+
+ queue_work(system_wq, &cbio->work);
+}
+
+static void chunk_submit_bio(struct bio *bio)
+{
+ bio->bi_end_io = chunk_io_endio;
+ submit_bio_noacct(bio);
+}
+
+static inline unsigned short calc_max_vecs(sector_t left)
+{
+ return bio_max_segs(round_up(left, PAGE_SECTORS) / PAGE_SECTORS);
+}
+
+void chunk_store(struct chunk *chunk)
+{
+ struct block_device *bdev = chunk->diff_region->bdev;
+ sector_t sector = chunk->diff_region->sector;
+ sector_t count = chunk->diff_region->count;
+ unsigned int page_idx = 0;
+ struct bio *bio;
+ struct chunk_bio *cbio;
+
+ bio = bio_alloc_bioset(bdev, calc_max_vecs(count),
+ REQ_OP_WRITE | REQ_SYNC | REQ_FUA, GFP_NOIO,
+ &chunk_io_bioset);
+ bio->bi_iter.bi_sector = sector;
+ bio_set_flag(bio, BIO_FILTERED);
+
+ while (count) {
+ struct bio *next;
+ sector_t portion = min_t(sector_t, count, PAGE_SECTORS);
+ unsigned int bytes = portion << SECTOR_SHIFT;
+
+ if (bio_add_page(bio, chunk->diff_buffer->pages[page_idx],
+ bytes, 0) == bytes) {
+ page_idx++;
+ count -= portion;
+ continue;
+ }
+
+ /* Create next bio */
+ next = bio_alloc_bioset(bdev, calc_max_vecs(count),
+ REQ_OP_WRITE | REQ_SYNC | REQ_FUA,
+ GFP_NOIO, &chunk_io_bioset);
+ next->bi_iter.bi_sector = bio_end_sector(bio);
+ bio_set_flag(next, BIO_FILTERED);
+ bio_chain(bio, next);
+ submit_bio_noacct(bio);
+ bio = next;
+ }
+
+ cbio = container_of(bio, struct chunk_bio, bio);
+
+ INIT_WORK(&cbio->work, chunk_notify_store);
+ INIT_LIST_HEAD(&cbio->chunks);
+ list_add_tail(&chunk->link, &cbio->chunks);
+ cbio->orig_bio = NULL;
+ chunk_submit_bio(bio);
+}
+
+static struct bio *__chunk_load(struct chunk *chunk)
+{
+ struct diff_buffer *diff_buffer;
+ unsigned int page_idx = 0;
+ struct bio *bio;
+ struct block_device *bdev;
+ sector_t sector, count;
+
+ diff_buffer = diff_buffer_take(chunk->diff_area);
+ if (IS_ERR(diff_buffer))
+ return ERR_CAST(diff_buffer);
+ chunk->diff_buffer = diff_buffer;
+
+ if (chunk->state == CHUNK_ST_STORED) {
+ bdev = chunk->diff_region->bdev;
+ sector = chunk->diff_region->sector;
+ count = chunk->diff_region->count;
+ } else {
+ bdev = chunk->diff_area->orig_bdev;
+ sector = chunk_sector(chunk);
+ count = chunk->sector_count;
+ }
+
+ bio = bio_alloc_bioset(bdev, calc_max_vecs(count),
+ REQ_OP_READ, GFP_NOIO, &chunk_io_bioset);
+ bio->bi_iter.bi_sector = sector;
+ bio_set_flag(bio, BIO_FILTERED);
+
+ while (count) {
+ struct bio *next;
+ sector_t portion = min_t(sector_t, count, PAGE_SECTORS);
+ unsigned int bytes = portion << SECTOR_SHIFT;
+
+ if (bio_add_page(bio, chunk->diff_buffer->pages[page_idx],
+ bytes, 0) == bytes) {
+ page_idx++;
+ count -= portion;
+ continue;
+ }
+
+ /* Create next bio */
+ next = bio_alloc_bioset(bdev, calc_max_vecs(count),
+ REQ_OP_READ, GFP_NOIO,
+ &chunk_io_bioset);
+ next->bi_iter.bi_sector = bio_end_sector(bio);
+ bio_set_flag(next, BIO_FILTERED);
+ bio_chain(bio, next);
+ submit_bio_noacct(bio);
+ bio = next;
+ }
+ return bio;
+}
+
+int chunk_load_and_postpone_io(struct chunk *chunk, struct bio **chunk_bio)
+{
+ struct bio *prev = *chunk_bio, *bio;
+
+ bio = __chunk_load(chunk);
+ if (IS_ERR(bio))
+ return PTR_ERR(bio);
+
+ if (prev) {
+ bio_chain(prev, bio);
+ submit_bio_noacct(prev);
+ }
+
+ *chunk_bio = bio;
+ return 0;
+}
+
+void chunk_load_and_postpone_io_finish(struct list_head *chunks,
+ struct bio *chunk_bio, struct bio *orig_bio)
+{
+ struct chunk_bio *cbio;
+
+ cbio = container_of(chunk_bio, struct chunk_bio, bio);
+ INIT_LIST_HEAD(&cbio->chunks);
+ while (!list_empty(chunks)) {
+ struct chunk *it;
+
+ it = list_first_entry(chunks, struct chunk, link);
+ list_del_init(&it->link);
+
+ list_add_tail(&it->link, &cbio->chunks);
+ }
+ INIT_WORK(&cbio->work, notify_load_and_postpone_io);
+ cbio->orig_bio = orig_bio;
+ chunk_submit_bio(chunk_bio);
+}
+
+int chunk_load_and_schedule_io(struct chunk *chunk, struct bio *orig_bio)
+{
+ struct chunk_bio *cbio;
+ struct bio *bio;
+
+ bio = __chunk_load(chunk);
+ if (IS_ERR(bio))
+ return PTR_ERR(bio);
+
+ cbio = container_of(bio, struct chunk_bio, bio);
+ INIT_LIST_HEAD(&cbio->chunks);
+ list_add_tail(&chunk->link, &cbio->chunks);
+ INIT_WORK(&cbio->work, notify_load_and_schedule_io);
+ cbio->orig_bio = orig_bio;
+ cbio->orig_iter = orig_bio->bi_iter;
+ bio_advance_iter_single(orig_bio, &orig_bio->bi_iter,
+ chunk_limit(chunk, orig_bio));
+ bio_inc_remaining(orig_bio);
+
+ chunk_submit_bio(bio);
+ return 0;
+}
+
+int __init chunk_init(void)
+{
+ int ret;
+
+ ret = bioset_init(&chunk_io_bioset, 64,
+ offsetof(struct chunk_bio, bio),
+ BIOSET_NEED_BVECS | BIOSET_NEED_RESCUER);
+ if (!ret)
+ ret = bioset_init(&chunk_clone_bioset, 64, 0,
+ BIOSET_NEED_BVECS | BIOSET_NEED_RESCUER);
+ return ret;
+}
+
+void chunk_done(void)
+{
+ bioset_exit(&chunk_io_bioset);
+ bioset_exit(&chunk_clone_bioset);
+}
diff --git a/drivers/block/blksnap/chunk.h b/drivers/block/blksnap/chunk.h
new file mode 100644
index 000000000000..cd119ac729df
--- /dev/null
+++ b/drivers/block/blksnap/chunk.h
@@ -0,0 +1,114 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright (C) 2023 Veeam Software Group GmbH */
+#ifndef __BLKSNAP_CHUNK_H
+#define __BLKSNAP_CHUNK_H
+
+#include <linux/blk_types.h>
+#include <linux/blkdev.h>
+#include <linux/rwsem.h>
+#include <linux/atomic.h>
+#include "diff_area.h"
+
+struct diff_area;
+struct diff_region;
+
+/**
+ * enum chunk_st - Possible states for a chunk.
+ *
+ * @CHUNK_ST_NEW:
+ * No data is associated with the chunk.
+ * @CHUNK_ST_IN_MEMORY:
+ * The chunk's data is ready to be read from the RAM buffer. The chunk
+ * leaves this state when it is removed from the store queue and its
+ * buffer is released.
+ * @CHUNK_ST_STORED:
+ * The data of the chunk has been written to the difference storage.
+ * @CHUNK_ST_FAILED:
+ * An error occurred while processing the chunk data.
+ *
+ * Chunk life cycle:
+ * CHUNK_ST_NEW -> CHUNK_ST_IN_MEMORY <-> CHUNK_ST_STORED
+ */
+enum chunk_st {
+ CHUNK_ST_NEW,
+ CHUNK_ST_IN_MEMORY,
+ CHUNK_ST_STORED,
+ CHUNK_ST_FAILED,
+};
+
+/**
+ * struct chunk - Minimum data storage unit.
+ *
+ * @link:
+ * The list head allows chunks to be queued.
+ * @number:
+ * Sequential number of the chunk.
+ * @sector_count:
+ * Number of sectors in the current chunk. This matters for the last
+ * chunk, which may be shorter than the others.
+ * @lock:
+ * Binary semaphore. Synchronizes access to the chunk's fields: state,
+ * diff_buffer and diff_region.
+ * @diff_area:
+ * Pointer to the difference area: the difference storage area for a
+ * specific device. This field is only available while the chunk is
+ * locked, which protects the difference area from early release.
+ * @state:
+ * Defines the state of a chunk.
+ * @diff_buffer:
+ * Pointer to &struct diff_buffer. Describes a buffer in the memory
+ * for storing the chunk data.
+ * @diff_region:
+ * Pointer to &struct diff_region. Describes a copy of the chunk data
+ * on the difference storage.
+ *
+ * This structure describes the block of data that the module operates
+ * with when executing the copy-on-write algorithm and when performing I/O
+ * to snapshot images.
+ *
+ * If the data of the chunk has been changed or has just been read, the
+ * chunk is placed on the store queue.
+ *
+ * The semaphore is locked while there is no actual data in the buffer,
+ * that is, while a block of data is being read from the original device
+ * or from the difference storage. The semaphore must also be held while
+ * data is being read from or written to the diff_buffer.
+ */
+struct chunk {
+ struct list_head link;
+ unsigned long number;
+ sector_t sector_count;
+
+ struct semaphore lock;
+ struct diff_area *diff_area;
+
+ enum chunk_st state;
+ struct diff_buffer *diff_buffer;
+ struct diff_region *diff_region;
+};
+
+static inline void chunk_up(struct chunk *chunk)
+{
+ struct diff_area *diff_area = chunk->diff_area;
+
+ chunk->diff_area = NULL;
+ up(&chunk->lock);
+ diff_area_put(diff_area);
+}
+
+void chunk_store_failed(struct chunk *chunk, int error);
+struct bio *chunk_alloc_clone(struct block_device *bdev, struct bio *bio);
+
+void chunk_copy_bio(struct chunk *chunk, struct bio *bio,
+ struct bvec_iter *iter);
+void chunk_clone_bio(struct chunk *chunk, struct bio *bio);
+void chunk_store(struct chunk *chunk);
+int chunk_load_and_schedule_io(struct chunk *chunk, struct bio *orig_bio);
+int chunk_load_and_postpone_io(struct chunk *chunk, struct bio **chunk_bio);
+void chunk_load_and_postpone_io_finish(struct list_head *chunks,
+ struct bio *chunk_bio, struct bio *orig_bio);
+
+int __init chunk_init(void);
+void chunk_done(void);
+#endif /* __BLKSNAP_CHUNK_H */
--
2.20.1
The document contains:
* a description of the purpose of the mechanism
* a little historical background on the capabilities for handling I/O
  units in the Linux kernel
* a brief description of the design
* a reference to the interface description
Reviewed-by: Bagas Sanjaya <[email protected]>
Signed-off-by: Sergei Shtepa <[email protected]>
---
Documentation/block/blkfilter.rst | 64 +++++++++++++++++++++++++++++++
Documentation/block/index.rst | 1 +
MAINTAINERS | 6 +++
3 files changed, 71 insertions(+)
create mode 100644 Documentation/block/blkfilter.rst
diff --git a/Documentation/block/blkfilter.rst b/Documentation/block/blkfilter.rst
new file mode 100644
index 000000000000..555625789244
--- /dev/null
+++ b/Documentation/block/blkfilter.rst
@@ -0,0 +1,64 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+================================
+Block Device Filtering Mechanism
+================================
+
+The block device filtering mechanism is an API that allows attaching block
+device filters. Block device filters allow additional processing to be
+performed on I/O units.
+
+Introduction
+============
+
+The idea of handling I/O units on block devices is not new. Back in the
+2.6 kernel, there was an undocumented ability to handle I/O units by
+substituting the make_request_fn() function, which belonged to the
+request_queue structure. However, no in-tree kernel modules used this
+feature, and it was removed in the 5.10 kernel.
+
+The block device filtering mechanism brings back the ability to handle I/O
+units. It allows a filter to be safely attached to a block device "on the
+fly", without changing the structure of the block device stack.
+
+Only one filter can be attached to a block device, because there is
+currently only one filter implementation in the kernel.
+See Documentation/block/blksnap.rst.
+
+Design
+======
+
+The block device filtering mechanism provides registration and unregistration
+of filter operations. The struct blkfilter_operations contains pointers to
+the callback functions of the filter. After the filter operations have been
+registered, the filter can be managed using the block device ioctls
+BLKFILTER_ATTACH, BLKFILTER_DETACH and BLKFILTER_CTL.
+
+While the filter is attached, its callback function is called for each I/O
+unit on the block device, providing I/O unit filtering. Depending on the
+result of filtering, the I/O unit is either passed on for subsequent
+processing by the block layer or skipped.
+
+The filter can be implemented as a loadable module. In this case, the filter
+module cannot be unloaded while the filter is attached to at least one of the
+block devices.
+
+Interface description
+=====================
+
+The ioctls BLKFILTER_ATTACH and BLKFILTER_DETACH use the struct
+blkfilter_name. They allow a filter to be attached to a block device or
+detached from it.
+
+The ioctl BLKFILTER_CTL uses the struct blkfilter_ctl. It allows a
+filter-specific command to be sent.
+
+.. kernel-doc:: include/uapi/linux/blk-filter.h
+
+To register with the system, the filter creates its own account, which
+contains the callback functions, a unique filter name and the module owner.
+This filter account is used by the registration functions.
+
+.. kernel-doc:: include/linux/blk-filter.h
+
+.. kernel-doc:: block/blk-filter.c
+ :export:
diff --git a/Documentation/block/index.rst b/Documentation/block/index.rst
index 9fea696f9daa..e9712f72cd6d 100644
--- a/Documentation/block/index.rst
+++ b/Documentation/block/index.rst
@@ -10,6 +10,7 @@ Block
bfq-iosched
biovecs
blk-mq
+ blkfilter
cmdline-partition
data-integrity
deadline-iosched
diff --git a/MAINTAINERS b/MAINTAINERS
index e0ad886d3163..d801b8985b43 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -3580,6 +3580,12 @@ M: Jan-Simon Moeller <[email protected]>
S: Maintained
F: drivers/leds/leds-blinkm.c
+BLOCK DEVICE FILTERING MECHANISM
+M: Sergei Shtepa <[email protected]>
+L: [email protected]
+S: Supported
+F: Documentation/block/blkfilter.rst
+
BLOCK LAYER
M: Jens Axboe <[email protected]>
L: [email protected]
--
2.20.1
The header file contains a set of declarations, structures and control
requests (ioctls) that allow the module to be managed from user space.
Co-developed-by: Christoph Hellwig <[email protected]>
Signed-off-by: Christoph Hellwig <[email protected]>
Tested-by: Donald Buczek <[email protected]>
Signed-off-by: Sergei Shtepa <[email protected]>
---
MAINTAINERS | 1 +
include/uapi/linux/blksnap.h | 421 +++++++++++++++++++++++++++++++++++
2 files changed, 422 insertions(+)
create mode 100644 include/uapi/linux/blksnap.h
diff --git a/MAINTAINERS b/MAINTAINERS
index c7dabe785cf1..76b14ad604dc 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -3594,6 +3594,7 @@ M: Sergei Shtepa <[email protected]>
L: [email protected]
S: Supported
F: Documentation/block/blksnap.rst
+F: include/uapi/linux/blksnap.h
BLOCK LAYER
M: Jens Axboe <[email protected]>
diff --git a/include/uapi/linux/blksnap.h b/include/uapi/linux/blksnap.h
new file mode 100644
index 000000000000..2fe3f2a43bc5
--- /dev/null
+++ b/include/uapi/linux/blksnap.h
@@ -0,0 +1,421 @@
+/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
+/* Copyright (C) 2023 Veeam Software Group GmbH */
+#ifndef _UAPI_LINUX_BLKSNAP_H
+#define _UAPI_LINUX_BLKSNAP_H
+
+#include <linux/types.h>
+
+#define BLKSNAP_CTL "blksnap-control"
+#define BLKSNAP_IMAGE_NAME "blksnap-image"
+#define BLKSNAP 'V'
+
+/**
+ * DOC: Block device filter interface.
+ *
+ * Control commands that are transmitted through the block device filter
+ * interface.
+ */
+
+/**
+ * enum blkfilter_ctl_blksnap - List of commands for BLKFILTER_CTL ioctl
+ *
+ * @blkfilter_ctl_blksnap_cbtinfo:
+ * Get CBT information.
+ * The result of executing the command is a &struct blksnap_cbtinfo.
+ * Return 0 if succeeded, negative errno otherwise.
+ * @blkfilter_ctl_blksnap_cbtmap:
+ * Read the CBT map.
+ * The option passes the &struct blksnap_cbtmap.
+ * The size of the table can be quite large. Thus, the table is read in
+ *	a loop, in each iteration of which the next offset is set to
+ *	&blksnap_cbtmap.offset.
+ * Return a count of bytes read if succeeded, negative errno otherwise.
+ * @blkfilter_ctl_blksnap_cbtdirty:
+ * Set dirty blocks in the CBT map.
+ * The option passes the &struct blksnap_cbtdirty.
+ *	There are cases when some blocks need to be marked as changed.
+ *	This command makes that possible.
+ * Return 0 if succeeded, negative errno otherwise.
+ * @blkfilter_ctl_blksnap_snapshotadd:
+ * Add device to snapshot.
+ * The option passes the &struct blksnap_snapshotadd.
+ * Return 0 if succeeded, negative errno otherwise.
+ * @blkfilter_ctl_blksnap_snapshotinfo:
+ * Get information about snapshot.
+ * The result of executing the command is a &struct blksnap_snapshotinfo.
+ * Return 0 if succeeded, negative errno otherwise.
+ */
+enum blkfilter_ctl_blksnap {
+ blkfilter_ctl_blksnap_cbtinfo,
+ blkfilter_ctl_blksnap_cbtmap,
+ blkfilter_ctl_blksnap_cbtdirty,
+ blkfilter_ctl_blksnap_snapshotadd,
+ blkfilter_ctl_blksnap_snapshotinfo,
+};
+
+#ifndef UUID_SIZE
+#define UUID_SIZE 16
+#endif
+
+/**
+ * struct blksnap_uuid - Unique 16-byte identifier.
+ *
+ * @b:
+ * An array of 16 bytes.
+ */
+struct blksnap_uuid {
+ __u8 b[UUID_SIZE];
+};
+
+/**
+ * struct blksnap_cbtinfo - Result for the command
+ * &blkfilter_ctl_blksnap.blkfilter_ctl_blksnap_cbtinfo.
+ *
+ * @device_capacity:
+ * Device capacity in bytes.
+ * @block_size:
+ * Block size in bytes.
+ * @block_count:
+ * Number of blocks.
+ * @generation_id:
+ * Unique identifier of change tracking generation.
+ * @changes_number:
+ * Current changes number.
+ */
+struct blksnap_cbtinfo {
+ __u64 device_capacity;
+ __u32 block_size;
+ __u32 block_count;
+ struct blksnap_uuid generation_id;
+ __u8 changes_number;
+};
+
+/**
+ * struct blksnap_cbtmap - Option for the command
+ * &blkfilter_ctl_blksnap.blkfilter_ctl_blksnap_cbtmap.
+ *
+ * @offset:
+ * Offset from the beginning of the CBT bitmap in bytes.
+ * @length:
+ * Size of @buffer in bytes.
+ * @buffer:
+ * Pointer to the buffer for output.
+ */
+struct blksnap_cbtmap {
+ __u32 offset;
+ __u32 length;
+ __u64 buffer;
+};
+
+/**
+ * struct blksnap_sectors - Description of the block device region.
+ *
+ * @offset:
+ * Offset from the beginning of the disk in sectors.
+ * @count:
+ * Count of sectors.
+ */
+struct blksnap_sectors {
+ __u64 offset;
+ __u64 count;
+};
+
+/**
+ * struct blksnap_cbtdirty - Option for the command
+ * &blkfilter_ctl_blksnap.blkfilter_ctl_blksnap_cbtdirty.
+ *
+ * @count:
+ * Count of elements in the @dirty_sectors.
+ * @dirty_sectors:
+ * Pointer to the array of &struct blksnap_sectors.
+ */
+struct blksnap_cbtdirty {
+ __u32 count;
+ __u64 dirty_sectors;
+};
+
+/**
+ * struct blksnap_snapshotadd - Option for the command
+ * &blkfilter_ctl_blksnap.blkfilter_ctl_blksnap_snapshotadd.
+ *
+ * @id:
+ * ID of the snapshot to which the block device should be added.
+ */
+struct blksnap_snapshotadd {
+ struct blksnap_uuid id;
+};
+
+#define IMAGE_DISK_NAME_LEN 32
+
+/**
+ * struct blksnap_snapshotinfo - Result for the command
+ * &blkfilter_ctl_blksnap.blkfilter_ctl_blksnap_snapshotinfo.
+ *
+ * @error_code:
+ * Zero if there were no errors while holding the snapshot.
+ * The error code -ENOSPC means that while holding the snapshot, a snapshot
+ * overflow situation has occurred. Other error codes mean other reasons
+ * for failure.
+ * The error code is reset when the device is added to a new snapshot.
+ * @image:
+ *	If the snapshot was taken, it stores the block device name of the
+ *	image, or an empty string otherwise.
+ */
+struct blksnap_snapshotinfo {
+ __s32 error_code;
+ __u8 image[IMAGE_DISK_NAME_LEN];
+};
+
+/**
+ * DOC: Interface for managing snapshots
+ *
+ * Control commands that are transmitted through the blksnap module interface.
+ */
+enum blksnap_ioctl {
+ blksnap_ioctl_version,
+ blksnap_ioctl_snapshot_create,
+ blksnap_ioctl_snapshot_destroy,
+ blksnap_ioctl_snapshot_append_storage,
+ blksnap_ioctl_snapshot_take,
+ blksnap_ioctl_snapshot_collect,
+ blksnap_ioctl_snapshot_wait_event,
+};
+
+/**
+ * struct blksnap_version - Module version.
+ *
+ * @major:
+ * Version major part.
+ * @minor:
+ * Version minor part.
+ * @revision:
+ * Revision number.
+ * @build:
+ * Build number. Should be zero.
+ */
+struct blksnap_version {
+ __u16 major;
+ __u16 minor;
+ __u16 revision;
+ __u16 build;
+};
+
+/**
+ * define IOCTL_BLKSNAP_VERSION - Get module version.
+ *
+ * The version may increase when the API changes, but linking user space
+ * behavior to the version code is not a good idea. To ensure backward
+ * compatibility, API changes should be made by adding new ioctls without
+ * changing the behavior of existing ones. The version should only be used
+ * for logging.
+ *
+ * Return: 0 if succeeded, negative errno otherwise.
+ */
+#define IOCTL_BLKSNAP_VERSION \
+ _IOW(BLKSNAP, blksnap_ioctl_version, struct blksnap_version)
+
+
+/**
+ * define IOCTL_BLKSNAP_SNAPSHOT_CREATE - Create snapshot.
+ *
+ * Creates a snapshot structure in the memory and allocates an identifier for
+ * it. Further interaction with the snapshot is possible by this identifier.
+ * A snapshot is created for several block devices at once.
+ * Several snapshots can be created at the same time, but with the condition
+ * that one block device can only be included in one snapshot.
+ *
+ * Return: 0 if succeeded, negative errno otherwise.
+ */
+#define IOCTL_BLKSNAP_SNAPSHOT_CREATE \
+ _IOW(BLKSNAP, blksnap_ioctl_snapshot_create, \
+ struct blksnap_uuid)
+
+
+/**
+ * define IOCTL_BLKSNAP_SNAPSHOT_DESTROY - Release and destroy the snapshot.
+ *
+ * Destroys snapshot with &blksnap_snapshot_destroy.id. This leads to the
+ * deletion of all block device images of the snapshot. The difference storage
+ * is being released. But the change tracker keeps tracking.
+ *
+ * Return: 0 if succeeded, negative errno otherwise.
+ */
+#define IOCTL_BLKSNAP_SNAPSHOT_DESTROY \
+ _IOR(BLKSNAP, blksnap_ioctl_snapshot_destroy, \
+ struct blksnap_uuid)
+
+/**
+ * struct blksnap_snapshot_append_storage - Argument for the
+ * &IOCTL_BLKSNAP_SNAPSHOT_APPEND_STORAGE control.
+ *
+ * @id:
+ * Snapshot ID.
+ * @bdev_path:
+ * Device path string buffer.
+ * @bdev_path_size:
+ * Device path string buffer size.
+ * @count:
+ * Size of @ranges in the number of &struct blksnap_sectors.
+ * @ranges:
+ * Pointer to the array of &struct blksnap_sectors.
+ */
+struct blksnap_snapshot_append_storage {
+ struct blksnap_uuid id;
+ __u64 bdev_path;
+ __u32 bdev_path_size;
+ __u32 count;
+ __u64 ranges;
+};
+
+/**
+ * define IOCTL_BLKSNAP_SNAPSHOT_APPEND_STORAGE - Append storage to the
+ * difference storage of the snapshot.
+ *
+ * The snapshot difference storage can be set either before or after creating
+ * the snapshot images. This allows the difference storage to be expanded
+ * dynamically while holding the snapshot.
+ *
+ * Return: 0 if succeeded, negative errno otherwise.
+ */
+#define IOCTL_BLKSNAP_SNAPSHOT_APPEND_STORAGE \
+ _IOW(BLKSNAP, blksnap_ioctl_snapshot_append_storage, \
+ struct blksnap_snapshot_append_storage)
+
+/**
+ * define IOCTL_BLKSNAP_SNAPSHOT_TAKE - Take snapshot.
+ *
+ * Creates snapshot images of block devices and switches change trackers tables.
+ * The snapshot must be created before this call, and the areas of block
+ * devices should be added to the difference storage.
+ *
+ * Return: 0 if succeeded, negative errno otherwise.
+ */
+#define IOCTL_BLKSNAP_SNAPSHOT_TAKE \
+ _IOR(BLKSNAP, blksnap_ioctl_snapshot_take, \
+ struct blksnap_uuid)
+
+/**
+ * struct blksnap_snapshot_collect - Argument for the
+ * &IOCTL_BLKSNAP_SNAPSHOT_COLLECT control.
+ *
+ * @count:
+ *	Size of &blksnap_snapshot_collect.ids in the number of 16-byte UUIDs.
+ * @ids:
+ * Pointer to the array of struct blksnap_uuid for output.
+ */
+struct blksnap_snapshot_collect {
+ __u32 count;
+ __u64 ids;
+};
+
+/**
+ * define IOCTL_BLKSNAP_SNAPSHOT_COLLECT - Get collection of created snapshots.
+ *
+ * Multiple snapshots can be created at the same time. This allows one
+ * system to create backups of different data on independent schedules.
+ *
+ * If &blksnap_snapshot_collect.count is less than required to store the
+ * &blksnap_snapshot_collect.ids, the array is not filled, and the ioctl
+ * returns the required count for &blksnap_snapshot_collect.ids.
+ *
+ * So, it is recommended to call the ioctl twice. The first call, with a NULL
+ * pointer in &blksnap_snapshot_collect.ids and a zero value in
+ * &blksnap_snapshot_collect.count, sets the required array size in
+ * &blksnap_snapshot_collect.count. The second call, with
+ * &blksnap_snapshot_collect.ids pointing to an array of the required size,
+ * retrieves the collection of active snapshots.
+ *
+ * Return: 0 if succeeded, -ENODATA if there is not enough space in the array
+ * to store the collection of active snapshots, or negative errno otherwise.
+ */
+#define IOCTL_BLKSNAP_SNAPSHOT_COLLECT \
+ _IOW(BLKSNAP, blksnap_ioctl_snapshot_collect, \
+ struct blksnap_snapshot_collect)
+
+/**
+ * enum blksnap_event_codes - Variants of event codes.
+ *
+ * @blksnap_event_code_low_free_space:
+ * Low free space in difference storage event.
+ * If the free space in the difference storage is reduced to the specified
+ * limit, the module generates this event.
+ * @blksnap_event_code_corrupted:
+ * Snapshot image is corrupted event.
+ * If a chunk could not be allocated when trying to save data to the
+ * difference storage, this event is generated. However, this does not mean
+ * that the backup process was interrupted with an error. If the snapshot
+ * image has been read to the end by this time, the backup process is
+ * considered successful.
+ */
+enum blksnap_event_codes {
+ blksnap_event_code_low_free_space,
+ blksnap_event_code_corrupted,
+};
+
+/**
+ * struct blksnap_snapshot_event - Argument for the
+ * &IOCTL_BLKSNAP_SNAPSHOT_WAIT_EVENT control.
+ *
+ * @id:
+ * Snapshot ID.
+ * @timeout_ms:
+ * Timeout for waiting in milliseconds.
+ * @code:
+ * Code of the received event &enum blksnap_event_codes.
+ * @time_label:
+ * Timestamp of the received event.
+ * @data:
+ * The received event body.
+ */
+struct blksnap_snapshot_event {
+ struct blksnap_uuid id;
+ __u32 timeout_ms;
+ __u32 code;
+ __s64 time_label;
+ __u8 data[4096 - 32];
+};
+
+/**
+ * define IOCTL_BLKSNAP_SNAPSHOT_WAIT_EVENT - Wait and get the event from the
+ * snapshot.
+ *
+ * While holding the snapshot, the kernel module can transmit information about
+ * changes in its state in the form of events to the user level.
+ * It is very important to receive these events as quickly as possible, so the
+ * user's thread is put into the state of interruptible sleep.
+ *
+ * Return: 0 if succeeded, negative errno otherwise.
+ */
+#define IOCTL_BLKSNAP_SNAPSHOT_WAIT_EVENT \
+ _IOW(BLKSNAP, blksnap_ioctl_snapshot_wait_event, \
+ struct blksnap_snapshot_event)
+
+/**
+ * struct blksnap_event_low_free_space - Data for the
+ * &blksnap_event_code_low_free_space event.
+ *
+ * @requested_nr_sect:
+ * The required number of sectors.
+ */
+struct blksnap_event_low_free_space {
+ __u64 requested_nr_sect;
+};
+
+/**
+ * struct blksnap_event_corrupted - Data for the
+ * &blksnap_event_code_corrupted event.
+ *
+ * @dev_id_mj:
+ * Major part of original device ID.
+ * @dev_id_mn:
+ * Minor part of original device ID.
+ * @err_code:
+ * Error code.
+ */
+struct blksnap_event_corrupted {
+ __u32 dev_id_mj;
+ __u32 dev_id_mn;
+ __s32 err_code;
+};
+
+#endif /* _UAPI_LINUX_BLKSNAP_H */
--
2.20.1
The struct tracker contains callback functions for handling the I/O units
of a block device. When a write request is handled, the change block
tracking (CBT) map functions are called, and the process of copying data
from the original block device to the difference storage is initiated.
Registering and unregistering the tracker is provided by the functions
blkfilter_register() and blkfilter_unregister().
The struct cbt_map allows the history of block device changes to be stored.
Co-developed-by: Christoph Hellwig <[email protected]>
Signed-off-by: Christoph Hellwig <[email protected]>
Signed-off-by: Sergei Shtepa <[email protected]>
---
drivers/block/blksnap/cbt_map.c | 227 +++++++++++++++++++++
drivers/block/blksnap/cbt_map.h | 90 +++++++++
drivers/block/blksnap/tracker.c | 339 ++++++++++++++++++++++++++++++++
drivers/block/blksnap/tracker.h | 75 +++++++
4 files changed, 731 insertions(+)
create mode 100644 drivers/block/blksnap/cbt_map.c
create mode 100644 drivers/block/blksnap/cbt_map.h
create mode 100644 drivers/block/blksnap/tracker.c
create mode 100644 drivers/block/blksnap/tracker.h
diff --git a/drivers/block/blksnap/cbt_map.c b/drivers/block/blksnap/cbt_map.c
new file mode 100644
index 000000000000..a0aeef8c2e94
--- /dev/null
+++ b/drivers/block/blksnap/cbt_map.c
@@ -0,0 +1,227 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (C) 2023 Veeam Software Group GmbH */
+#define pr_fmt(fmt) KBUILD_MODNAME "-cbt_map: " fmt
+
+#include <linux/slab.h>
+#include <linux/vmalloc.h>
+#include <uapi/linux/blksnap.h>
+#include "cbt_map.h"
+#include "params.h"
+
+static inline unsigned long long count_by_shift(sector_t capacity,
+ unsigned long long shift)
+{
+ sector_t blk_size = 1ull << (shift - SECTOR_SHIFT);
+
+ return round_up(capacity, blk_size) / blk_size;
+}
+
+static void cbt_map_calculate_block_size(struct cbt_map *cbt_map)
+{
+ unsigned long long count;
+ unsigned long long shift = get_tracking_block_minimum_shift();
+
+ pr_debug("Device capacity %llu sectors\n", cbt_map->device_capacity);
+ /*
+ * The size of the tracking block is calculated based on the size of the disk
+ * so that the CBT table does not exceed a reasonable size.
+ */
+ count = count_by_shift(cbt_map->device_capacity, shift);
+ pr_debug("Blocks count %llu\n", count);
+ while (count > get_tracking_block_maximum_count()) {
+ if (shift >= get_tracking_block_maximum_shift()) {
+ pr_info("The maximum allowable CBT block size has been reached.\n");
+ break;
+ }
+ shift = shift + 1ull;
+ count = count_by_shift(cbt_map->device_capacity, shift);
+ pr_debug("Blocks count %llu\n", count);
+ }
+
+ cbt_map->blk_size_shift = shift;
+ cbt_map->blk_count = count;
+ pr_debug("The optimal CBT block size was calculated as %llu bytes\n",
+ (1ull << cbt_map->blk_size_shift));
+}
+
+static int cbt_map_allocate(struct cbt_map *cbt_map)
+{
+ unsigned char *read_map = NULL;
+ unsigned char *write_map = NULL;
+ size_t size = cbt_map->blk_count;
+
+ pr_debug("Allocate CBT map of %zu blocks\n", size);
+
+ if (cbt_map->read_map || cbt_map->write_map)
+ return -EINVAL;
+
+ read_map = __vmalloc(size, GFP_NOIO | __GFP_ZERO);
+ if (!read_map)
+ return -ENOMEM;
+
+ write_map = __vmalloc(size, GFP_NOIO | __GFP_ZERO);
+ if (!write_map) {
+ vfree(read_map);
+ return -ENOMEM;
+ }
+
+ cbt_map->read_map = read_map;
+ cbt_map->write_map = write_map;
+
+ cbt_map->snap_number_previous = 0;
+ cbt_map->snap_number_active = 1;
+ generate_random_uuid(cbt_map->generation_id.b);
+ cbt_map->is_corrupted = false;
+
+ return 0;
+}
+
+static void cbt_map_deallocate(struct cbt_map *cbt_map)
+{
+ cbt_map->is_corrupted = false;
+
+ if (cbt_map->read_map) {
+ vfree(cbt_map->read_map);
+ cbt_map->read_map = NULL;
+ }
+
+ if (cbt_map->write_map) {
+ vfree(cbt_map->write_map);
+ cbt_map->write_map = NULL;
+ }
+}
+
+int cbt_map_reset(struct cbt_map *cbt_map, sector_t device_capacity)
+{
+ cbt_map_deallocate(cbt_map);
+
+ cbt_map->device_capacity = device_capacity;
+ cbt_map_calculate_block_size(cbt_map);
+
+ return cbt_map_allocate(cbt_map);
+}
+
+void cbt_map_destroy(struct cbt_map *cbt_map)
+{
+ pr_debug("CBT map destroy\n");
+
+ cbt_map_deallocate(cbt_map);
+ kfree(cbt_map);
+}
+
+struct cbt_map *cbt_map_create(struct block_device *bdev)
+{
+ struct cbt_map *cbt_map = NULL;
+ int ret;
+
+ pr_debug("CBT map create\n");
+
+ cbt_map = kzalloc(sizeof(struct cbt_map), GFP_KERNEL);
+ if (cbt_map == NULL)
+ return NULL;
+
+ cbt_map->device_capacity = bdev_nr_sectors(bdev);
+ cbt_map_calculate_block_size(cbt_map);
+
+ ret = cbt_map_allocate(cbt_map);
+ if (ret) {
+		pr_err("Failed to allocate CBT map. errno=%d\n", abs(ret));
+ cbt_map_destroy(cbt_map);
+ return NULL;
+ }
+
+ spin_lock_init(&cbt_map->locker);
+ cbt_map->is_corrupted = false;
+
+ return cbt_map;
+}
+
+void cbt_map_switch(struct cbt_map *cbt_map)
+{
+ pr_debug("CBT map switch\n");
+ spin_lock(&cbt_map->locker);
+
+ cbt_map->snap_number_previous = cbt_map->snap_number_active;
+ ++cbt_map->snap_number_active;
+ if (cbt_map->snap_number_active == 256) {
+ cbt_map->snap_number_active = 1;
+
+ memset(cbt_map->write_map, 0, cbt_map->blk_count);
+
+ generate_random_uuid(cbt_map->generation_id.b);
+
+ pr_debug("CBT reset\n");
+ } else
+ memcpy(cbt_map->read_map, cbt_map->write_map, cbt_map->blk_count);
+ spin_unlock(&cbt_map->locker);
+}
+
+static inline int _cbt_map_set(struct cbt_map *cbt_map, sector_t sector_start,
+ sector_t sector_cnt, u8 snap_number,
+ unsigned char *map)
+{
+ int res = 0;
+ u8 num;
+ size_t inx;
+ size_t cbt_block_first = (size_t)(
+ sector_start >> (cbt_map->blk_size_shift - SECTOR_SHIFT));
+ size_t cbt_block_last = (size_t)(
+ (sector_start + sector_cnt - 1) >>
+ (cbt_map->blk_size_shift - SECTOR_SHIFT));
+
+ for (inx = cbt_block_first; inx <= cbt_block_last; ++inx) {
+ if (unlikely(inx >= cbt_map->blk_count)) {
+ pr_err("Block index is too large\n");
+ pr_err("Block #%zu was demanded, map size %zu blocks\n",
+ inx, cbt_map->blk_count);
+ res = -EINVAL;
+ break;
+ }
+
+ num = map[inx];
+ if (num < snap_number)
+ map[inx] = snap_number;
+ }
+ return res;
+}
+
+int cbt_map_set(struct cbt_map *cbt_map, sector_t sector_start,
+ sector_t sector_cnt)
+{
+ int res;
+
+ spin_lock(&cbt_map->locker);
+ if (unlikely(cbt_map->is_corrupted)) {
+ spin_unlock(&cbt_map->locker);
+ return -EINVAL;
+ }
+ res = _cbt_map_set(cbt_map, sector_start, sector_cnt,
+ (u8)cbt_map->snap_number_active, cbt_map->write_map);
+ if (unlikely(res))
+ cbt_map->is_corrupted = true;
+
+ spin_unlock(&cbt_map->locker);
+
+ return res;
+}
+
+int cbt_map_set_both(struct cbt_map *cbt_map, sector_t sector_start,
+ sector_t sector_cnt)
+{
+ int res;
+
+ spin_lock(&cbt_map->locker);
+ if (unlikely(cbt_map->is_corrupted)) {
+ spin_unlock(&cbt_map->locker);
+ return -EINVAL;
+ }
+ res = _cbt_map_set(cbt_map, sector_start, sector_cnt,
+ (u8)cbt_map->snap_number_active, cbt_map->write_map);
+ if (!res)
+ res = _cbt_map_set(cbt_map, sector_start, sector_cnt,
+ (u8)cbt_map->snap_number_previous,
+ cbt_map->read_map);
+ spin_unlock(&cbt_map->locker);
+
+ return res;
+}
diff --git a/drivers/block/blksnap/cbt_map.h b/drivers/block/blksnap/cbt_map.h
new file mode 100644
index 000000000000..f87bffd5b3a7
--- /dev/null
+++ b/drivers/block/blksnap/cbt_map.h
@@ -0,0 +1,90 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright (C) 2023 Veeam Software Group GmbH */
+#ifndef __BLKSNAP_CBT_MAP_H
+#define __BLKSNAP_CBT_MAP_H
+
+#include <linux/kernel.h>
+#include <linux/kref.h>
+#include <linux/uuid.h>
+#include <linux/spinlock.h>
+#include <linux/blkdev.h>
+
+struct blksnap_sectors;
+
+/**
+ * struct cbt_map - The table of changes for a block device.
+ *
+ * @locker:
+ * Locking for atomic modification of structure members.
+ * @blk_size_shift:
+ * The power of 2 used to specify the change tracking block size.
+ * @blk_count:
+ * The number of change tracking blocks.
+ * @device_capacity:
+ * The actual capacity of the device.
+ * @read_map:
+ * A table of changes available for reading. This is the table that can
+ * be read after taking a snapshot.
+ * @write_map:
+ * The current table for tracking changes.
+ * @snap_number_active:
+ *	The current sequential number of changes. This is the number that
+ *	is written to the current table when the block data changes.
+ * @snap_number_previous:
+ *	The previous sequential number of changes. This number is used to
+ *	identify blocks changed between the penultimate and the last snapshot.
+ * @generation_id:
+ * UUID of the generation of changes.
+ * @is_corrupted:
+ * A flag that the change tracking data is no longer reliable.
+ *
+ * The change block tracking map is a byte table. Each byte stores the
+ * sequential number of changes for one block. To determine which blocks
+ * have changed since the previous snapshot with the change number 4, it
+ * is enough to find all bytes with a number greater than 4.
+ *
+ * Since one byte is allocated to track changes in one block, the change
+ * table is created again at the 255th snapshot. At the same time, a new
+ * unique generation identifier is generated. Tracking changes is
+ * possible only for tables of the same generation.
+ *
+ * There are two tables on the change block tracking map. One is
+ * available for reading, and the other for writing. At the moment of
+ * taking a snapshot, the tables are synchronized. The user's process,
+ * when calling the corresponding ioctl, can read the readable table.
+ * At the same time, the change tracking mechanism continues to work with
+ * the writable table.
+ *
+ * To provide the ability to mount a snapshot image as writeable, it is
+ * possible to make changes to both of these tables simultaneously.
+ *
+ */
+struct cbt_map {
+ spinlock_t locker;
+
+ size_t blk_size_shift;
+ size_t blk_count;
+ sector_t device_capacity;
+
+ unsigned char *read_map;
+ unsigned char *write_map;
+
+ unsigned long snap_number_active;
+ unsigned long snap_number_previous;
+ uuid_t generation_id;
+
+ bool is_corrupted;
+};
+
+struct cbt_map *cbt_map_create(struct block_device *bdev);
+int cbt_map_reset(struct cbt_map *cbt_map, sector_t device_capacity);
+
+void cbt_map_destroy(struct cbt_map *cbt_map);
+
+void cbt_map_switch(struct cbt_map *cbt_map);
+int cbt_map_set(struct cbt_map *cbt_map, sector_t sector_start,
+ sector_t sector_cnt);
+int cbt_map_set_both(struct cbt_map *cbt_map, sector_t sector_start,
+ sector_t sector_cnt);
+
+#endif /* __BLKSNAP_CBT_MAP_H */
diff --git a/drivers/block/blksnap/tracker.c b/drivers/block/blksnap/tracker.c
new file mode 100644
index 000000000000..da6539fb6f54
--- /dev/null
+++ b/drivers/block/blksnap/tracker.c
@@ -0,0 +1,339 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (C) 2023 Veeam Software Group GmbH */
+#define pr_fmt(fmt) KBUILD_MODNAME "-tracker: " fmt
+
+#include <linux/slab.h>
+#include <linux/blk-mq.h>
+#include <linux/sched/mm.h>
+#include <linux/build_bug.h>
+#include <uapi/linux/blksnap.h>
+#include "tracker.h"
+#include "cbt_map.h"
+#include "diff_area.h"
+#include "snapimage.h"
+#include "snapshot.h"
+
+void tracker_free(struct kref *kref)
+{
+ struct tracker *tracker = container_of(kref, struct tracker, kref);
+
+ might_sleep();
+
+ pr_debug("Free tracker for device [%u:%u]\n", MAJOR(tracker->dev_id),
+ MINOR(tracker->dev_id));
+
+ if (tracker->diff_area)
+ diff_area_put(tracker->diff_area);
+ if (tracker->cbt_map)
+ cbt_map_destroy(tracker->cbt_map);
+
+ kfree(tracker);
+}
+
+static bool tracker_submit_bio(struct bio *bio)
+{
+ struct blkfilter *flt = bio->bi_bdev->bd_filter;
+ struct tracker *tracker = container_of(flt, struct tracker, filter);
+ sector_t count = bio_sectors(bio);
+ struct bvec_iter copy_iter;
+
+ if (!op_is_write(bio_op(bio)) || !count)
+ return false;
+
+ copy_iter = bio->bi_iter;
+ if (bio_flagged(bio, BIO_REMAPPED))
+ copy_iter.bi_sector -= bio->bi_bdev->bd_start_sect;
+
+ if (cbt_map_set(tracker->cbt_map, copy_iter.bi_sector, count) ||
+ !atomic_read(&tracker->snapshot_is_taken))
+ return false;
+ /*
+ * The diff_area is not blocked from releasing now, because
+ * changing the value of the snapshot_is_taken is performed when
+ * the block device queue is frozen in tracker_release_snapshot().
+ */
+ if (diff_area_is_corrupted(tracker->diff_area))
+ return false;
+
+	return diff_area_cow(bio, tracker->diff_area, &copy_iter);
+}
+
+static struct blkfilter *tracker_attach(struct block_device *bdev)
+{
+ struct tracker *tracker = NULL;
+ struct cbt_map *cbt_map;
+
+ pr_debug("Creating tracker for device [%u:%u]\n",
+ MAJOR(bdev->bd_dev), MINOR(bdev->bd_dev));
+
+ cbt_map = cbt_map_create(bdev);
+ if (!cbt_map) {
+ pr_err("Failed to create CBT map for device [%u:%u]\n",
+ MAJOR(bdev->bd_dev), MINOR(bdev->bd_dev));
+ return ERR_PTR(-ENOMEM);
+ }
+
+ tracker = kzalloc(sizeof(struct tracker), GFP_KERNEL);
+ if (tracker == NULL) {
+ cbt_map_destroy(cbt_map);
+ return ERR_PTR(-ENOMEM);
+ }
+
+ mutex_init(&tracker->ctl_lock);
+ INIT_LIST_HEAD(&tracker->link);
+ kref_init(&tracker->kref);
+ tracker->dev_id = bdev->bd_dev;
+ atomic_set(&tracker->snapshot_is_taken, false);
+ tracker->cbt_map = cbt_map;
+ tracker->diff_area = NULL;
+
+ pr_debug("New tracker for device [%u:%u] was created\n",
+ MAJOR(tracker->dev_id), MINOR(tracker->dev_id));
+
+ return &tracker->filter;
+}
+
+static void tracker_detach(struct blkfilter *flt)
+{
+ struct tracker *tracker = container_of(flt, struct tracker, filter);
+
+ pr_debug("Detach tracker from device [%u:%u]\n",
+ MAJOR(tracker->dev_id), MINOR(tracker->dev_id));
+
+ tracker_put(tracker);
+}
+
+static int ctl_cbtinfo(struct tracker *tracker, __u8 __user *buf, __u32 *plen)
+{
+ struct cbt_map *cbt_map = tracker->cbt_map;
+ struct blksnap_cbtinfo arg;
+
+ if (!cbt_map)
+ return -ESRCH;
+
+ if (*plen < sizeof(arg))
+ return -EINVAL;
+
+ arg.device_capacity = (__u64)(cbt_map->device_capacity << SECTOR_SHIFT);
+ arg.block_size = (__u32)(1 << cbt_map->blk_size_shift);
+ arg.block_count = (__u32)cbt_map->blk_count;
+ export_uuid(arg.generation_id.b, &cbt_map->generation_id);
+ arg.changes_number = (__u8)cbt_map->snap_number_previous;
+
+ if (copy_to_user(buf, &arg, sizeof(arg)))
+ return -ENODATA;
+
+ *plen = sizeof(arg);
+ return 0;
+}
+
+static int ctl_cbtmap(struct tracker *tracker, __u8 __user *buf, __u32 *plen)
+{
+ struct cbt_map *cbt_map = tracker->cbt_map;
+ struct blksnap_cbtmap arg;
+
+ if (!cbt_map)
+ return -ESRCH;
+
+ if (unlikely(cbt_map->is_corrupted)) {
+ pr_err("CBT table was corrupted\n");
+ return -EFAULT;
+ }
+
+ if (*plen < sizeof(arg))
+ return -EINVAL;
+
+ if (copy_from_user(&arg, buf, sizeof(arg)))
+ return -ENODATA;
+
+ if (arg.length > (cbt_map->blk_count - arg.offset))
+ return -ENODATA;
+
+	if (copy_to_user(u64_to_user_ptr(arg.buffer),
+			 cbt_map->read_map + arg.offset,
+			 arg.length))
+		return -EINVAL;
+
+ *plen = 0;
+ return 0;
+}
+static int ctl_cbtdirty(struct tracker *tracker, __u8 __user *buf, __u32 *plen)
+{
+ struct cbt_map *cbt_map = tracker->cbt_map;
+ struct blksnap_cbtdirty arg;
+ unsigned int inx;
+
+ if (!cbt_map)
+ return -ESRCH;
+
+ if (*plen < sizeof(arg))
+ return -EINVAL;
+
+ if (copy_from_user(&arg, buf, sizeof(arg)))
+ return -ENODATA;
+
+ for (inx = 0; inx < arg.count; inx++) {
+ struct blksnap_sectors range;
+ int ret;
+
+ if (copy_from_user(&range, u64_to_user_ptr(arg.dirty_sectors),
+ sizeof(range)))
+ return -ENODATA;
+
+ ret = cbt_map_set_both(cbt_map, range.offset, range.count);
+ if (ret)
+ return ret;
+ }
+ *plen = 0;
+ return 0;
+}
+static int ctl_snapshotadd(struct tracker *tracker,
+ __u8 __user *buf, __u32 *plen)
+{
+ struct blksnap_snapshotadd arg;
+
+ if (*plen < sizeof(arg))
+ return -EINVAL;
+
+ if (copy_from_user(&arg, buf, sizeof(arg)))
+ return -ENODATA;
+
+ *plen = 0;
+ return snapshot_add_device((uuid_t *)&arg.id, tracker);
+}
+static int ctl_snapshotinfo(struct tracker *tracker,
+ __u8 __user *buf, __u32 *plen)
+{
+ struct blksnap_snapshotinfo arg = {0};
+
+ if (*plen < sizeof(arg))
+ return -EINVAL;
+
+ if (copy_from_user(&arg, buf, sizeof(arg)))
+ return -ENODATA;
+
+
+ if (tracker->diff_area && diff_area_is_corrupted(tracker->diff_area))
+ arg.error_code = tracker->diff_area->error_code;
+ else
+ arg.error_code = 0;
+
+ if (tracker->snap_disk)
+ strncpy(arg.image, tracker->snap_disk->disk_name, IMAGE_DISK_NAME_LEN);
+
+ if (copy_to_user(buf, &arg, sizeof(arg)))
+ return -ENODATA;
+
+ *plen = sizeof(arg);
+ return 0;
+}
+
+static int (*const ctl_table[])(struct tracker *tracker,
+ __u8 __user *buf, __u32 *plen) = {
+ ctl_cbtinfo,
+ ctl_cbtmap,
+ ctl_cbtdirty,
+ ctl_snapshotadd,
+ ctl_snapshotinfo,
+};
+
+static int tracker_ctl(struct blkfilter *flt, const unsigned int cmd,
+ __u8 __user *buf, __u32 *plen)
+{
+ int ret = 0;
+ struct tracker *tracker = container_of(flt, struct tracker, filter);
+
+	if (cmd >= ARRAY_SIZE(ctl_table))
+ return -ENOTTY;
+
+ mutex_lock(&tracker->ctl_lock);
+ ret = ctl_table[cmd](tracker, buf, plen);
+ mutex_unlock(&tracker->ctl_lock);
+
+ return ret;
+}
+
+static struct blkfilter_operations tracker_ops = {
+ .owner = THIS_MODULE,
+ .name = "blksnap",
+ .attach = tracker_attach,
+ .detach = tracker_detach,
+ .ctl = tracker_ctl,
+ .submit_bio = tracker_submit_bio,
+};
+
+int tracker_take_snapshot(struct tracker *tracker)
+{
+ int ret = 0;
+ bool cbt_reset_needed = false;
+ struct block_device *orig_bdev = tracker->diff_area->orig_bdev;
+ sector_t capacity;
+ unsigned int current_flag;
+
+ blk_mq_freeze_queue(orig_bdev->bd_queue);
+ current_flag = memalloc_noio_save();
+
+ if (tracker->cbt_map->is_corrupted) {
+ cbt_reset_needed = true;
+ pr_warn("Corrupted CBT table detected. CBT fault\n");
+ }
+
+ capacity = bdev_nr_sectors(orig_bdev);
+ if (tracker->cbt_map->device_capacity != capacity) {
+ cbt_reset_needed = true;
+ pr_warn("Device resize detected. CBT fault\n");
+ }
+
+	if (cbt_reset_needed) {
+		ret = cbt_map_reset(tracker->cbt_map, capacity);
+		if (ret) {
+			pr_err("Failed to reset the CBT map. errno=%d\n",
+				abs(ret));
+			goto out;
+		}
+	}
+
+	cbt_map_switch(tracker->cbt_map);
+	atomic_set(&tracker->snapshot_is_taken, true);
+out:
+	memalloc_noio_restore(current_flag);
+	blk_mq_unfreeze_queue(orig_bdev->bd_queue);
+
+	return ret;
+}
+
+void tracker_release_snapshot(struct tracker *tracker)
+{
+ struct diff_area *diff_area = tracker->diff_area;
+
+ if (unlikely(!diff_area))
+ return;
+
+ snapimage_free(tracker);
+
+ blk_mq_freeze_queue(diff_area->orig_bdev->bd_queue);
+
+ pr_debug("Tracker for device [%u:%u] release snapshot\n",
+ MAJOR(tracker->dev_id), MINOR(tracker->dev_id));
+
+ atomic_set(&tracker->snapshot_is_taken, false);
+ tracker->diff_area = NULL;
+
+ blk_mq_unfreeze_queue(diff_area->orig_bdev->bd_queue);
+
+ diff_area_put(diff_area);
+}
+
+int __init tracker_init(void)
+{
+ pr_debug("Register filter '%s'", tracker_ops.name);
+
+ return blkfilter_register(&tracker_ops);
+}
+
+void tracker_done(void)
+{
+ pr_debug("Unregister filter '%s'", tracker_ops.name);
+
+ blkfilter_unregister(&tracker_ops);
+}
diff --git a/drivers/block/blksnap/tracker.h b/drivers/block/blksnap/tracker.h
new file mode 100644
index 000000000000..dbf8295f9518
--- /dev/null
+++ b/drivers/block/blksnap/tracker.h
@@ -0,0 +1,75 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright (C) 2023 Veeam Software Group GmbH */
+#ifndef __BLKSNAP_TRACKER_H
+#define __BLKSNAP_TRACKER_H
+
+#include <linux/blk-filter.h>
+#include <linux/kref.h>
+#include <linux/spinlock.h>
+#include <linux/list.h>
+#include <linux/rwsem.h>
+#include <linux/blkdev.h>
+#include <linux/fs.h>
+
+struct cbt_map;
+struct diff_area;
+
+/**
+ * struct tracker - Tracker for a block device.
+ *
+ * @filter:
+ * The block device filter structure.
+ * @ctl_lock:
+ * The mutex prevents simultaneous management of the tracker from
+ * different threads.
+ * @link:
+ * List head. Allows trackers to be combined into a list in a snapshot.
+ * @kref:
+ * The reference counter controls the lifetime of the tracker.
+ * @dev_id:
+ * Original block device ID.
+ * @snapshot_is_taken:
+ * Indicates that a snapshot was taken for the device whose I/O units are
+ * handled by this tracker.
+ * @cbt_map:
+ * Pointer to a change block tracker map.
+ * @diff_area:
+ * Pointer to a difference area.
+ * @snap_disk:
+ * Snapshot image disk.
+ *
+ * The goal of the tracker is to handle I/O units. The tracker detects
+ * the range of sectors that will change and transmits them to the CBT map
+ * and to the difference area.
+ */
+struct tracker {
+ struct blkfilter filter;
+ struct mutex ctl_lock;
+ struct list_head link;
+ struct kref kref;
+ dev_t dev_id;
+
+ atomic_t snapshot_is_taken;
+
+ struct cbt_map *cbt_map;
+ struct diff_area *diff_area;
+ struct gendisk *snap_disk;
+};
+
+int __init tracker_init(void);
+void tracker_done(void);
+
+void tracker_free(struct kref *kref);
+static inline void tracker_put(struct tracker *tracker)
+{
+ if (likely(tracker))
+ kref_put(&tracker->kref, tracker_free);
+}
+static inline void tracker_get(struct tracker *tracker)
+{
+ kref_get(&tracker->kref);
+}
+int tracker_take_snapshot(struct tracker *tracker);
+void tracker_release_snapshot(struct tracker *tracker);
+
+#endif /* __BLKSNAP_TRACKER_H */
--
2.20.1
Contains the callback functions for loading and unloading the module and
the implementation of the module management interface functions. The
module parameters and other mandatory declarations for the kernel module
are also defined.
Co-developed-by: Christoph Hellwig <[email protected]>
Signed-off-by: Christoph Hellwig <[email protected]>
Signed-off-by: Sergei Shtepa <[email protected]>
---
MAINTAINERS | 1 +
drivers/block/blksnap/main.c | 483 +++++++++++++++++++++++++++++++++
drivers/block/blksnap/params.h | 16 ++
3 files changed, 500 insertions(+)
create mode 100644 drivers/block/blksnap/main.c
create mode 100644 drivers/block/blksnap/params.h
diff --git a/MAINTAINERS b/MAINTAINERS
index 76b14ad604dc..a26eee956aec 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -3594,6 +3594,7 @@ M: Sergei Shtepa <[email protected]>
L: [email protected]
S: Supported
F: Documentation/block/blksnap.rst
+F: drivers/block/blksnap/*
F: include/uapi/linux/blksnap.h
BLOCK LAYER
diff --git a/drivers/block/blksnap/main.c b/drivers/block/blksnap/main.c
new file mode 100644
index 000000000000..7bb37c191fda
--- /dev/null
+++ b/drivers/block/blksnap/main.c
@@ -0,0 +1,483 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (C) 2023 Veeam Software Group GmbH */
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+
+#include <linux/module.h>
+#include <linux/miscdevice.h>
+#include <linux/build_bug.h>
+#include <uapi/linux/blksnap.h>
+#include "snapimage.h"
+#include "snapshot.h"
+#include "tracker.h"
+#include "chunk.h"
+#include "params.h"
+
+/*
+ * The power of 2 for minimum tracking block size.
+ * If we make the tracking block size small, we will get detailed information
+ * about the changes, but the size of the change tracker table will be too
+ * large, which will lead to inefficient memory usage.
+ */
+static unsigned int tracking_block_minimum_shift = 16;
+
+/*
+ * The maximum number of tracking blocks.
+ * A table is created to store information about the status of all tracking
+ * blocks in RAM. So, if the size of the tracking block is small, then the size
+ * of the table turns out to be large and memory is consumed inefficiently.
+ * As the size of the block device grows, the size of the tracking block
+ * size should also grow. For this purpose, the limit of the maximum
+ * number of block size is set.
+ */
+static unsigned int tracking_block_maximum_count = 2097152;
+
+/*
+ * The power of 2 for maximum tracking block size.
+ * On very large capacity disks, the block size may be too large. To prevent
+ * this, the maximum block size is limited.
+ * If the limit on the maximum block size has been reached, then the number of
+ * blocks may exceed the tracking_block_maximum_count.
+ */
+static unsigned int tracking_block_maximum_shift = 26;
+
+/*
+ * The power of 2 for minimum chunk size.
+ * The size of the chunk depends on how much data will be copied to the
+ * difference storage when at least one sector of the block device is changed.
+ * If the size is small, then small I/O units will be generated, which will
+ * reduce performance. Too large a chunk size will lead to inefficient use of
+ * the difference storage.
+ */
+static unsigned int chunk_minimum_shift = 18;
+
+/*
+ * The power of 2 for maximum number of chunks.
+ * To store information about the state of the chunks, a table is created
+ * in RAM. So, if the size of the chunk is small, then the size of the table
+ * turns out to be large and memory is consumed inefficiently.
+ * As the size of the block device grows, the size of the chunk should also
+ * grow. For this purpose, the maximum number of chunks is set.
+ * The table expands dynamically when new chunks are allocated. Therefore,
+ * memory consumption also depends on the intensity of writing to the block
+ * device under the snapshot.
+ */
+static unsigned int chunk_maximum_count_shift = 40;
+
+/*
+ * The power of 2 for maximum chunk size.
+ * On very large capacity disks, the block size may be too large. To prevent
+ * this, the maximum block size is limited.
+ * If the limit on the maximum block size has been reached, then the number of
+ * blocks may exceed the chunk_maximum_count.
+ */
+static unsigned int chunk_maximum_shift = 26;
+
+/*
+ * The maximum number of chunks in queue.
+ * A chunk is not immediately stored to the difference storage. Chunks are
+ * put in a store queue. The store queue makes it possible to postpone the
+ * operation of storing a chunk's data to the difference storage and perform
+ * it later in the worker thread.
+ */
+static unsigned int chunk_maximum_in_queue = 16;
+
+/*
+ * The size of the pool of preallocated difference buffers.
+ * A buffer can be allocated for each chunk. After use, this buffer is not
+ * released immediately, but is sent to the pool of free buffers.
+ * However, if there are too many free buffers in the pool, then these free
+ * buffers will be released immediately.
+ */
+static unsigned int free_diff_buffer_pool_size = 128;
+
+/*
+ * The minimum allowable size of the difference storage in sectors.
+ * The difference storage is a part of the disk space allocated for storing
+ * snapshot data. If there is less free space in the storage than the minimum,
+ * an event is generated about the lack of free space.
+ */
+static unsigned int diff_storage_minimum = 2097152;
+
+#define VERSION_STR "2.0.0.0"
+static const struct blksnap_version version = {
+ .major = 2,
+ .minor = 0,
+ .revision = 0,
+ .build = 0,
+};
+
+unsigned int get_tracking_block_minimum_shift(void)
+{
+ return tracking_block_minimum_shift;
+}
+
+unsigned int get_tracking_block_maximum_shift(void)
+{
+ return tracking_block_maximum_shift;
+}
+
+unsigned int get_tracking_block_maximum_count(void)
+{
+ return tracking_block_maximum_count;
+}
+
+unsigned int get_chunk_minimum_shift(void)
+{
+ return chunk_minimum_shift;
+}
+
+unsigned int get_chunk_maximum_shift(void)
+{
+ return chunk_maximum_shift;
+}
+
+unsigned long get_chunk_maximum_count(void)
+{
+ return (1ul << chunk_maximum_count_shift);
+}
+
+unsigned int get_chunk_maximum_in_queue(void)
+{
+ return chunk_maximum_in_queue;
+}
+
+unsigned int get_free_diff_buffer_pool_size(void)
+{
+ return free_diff_buffer_pool_size;
+}
+
+unsigned int get_diff_storage_minimum(void)
+{
+ return diff_storage_minimum;
+}
+
+static int ioctl_version(unsigned long arg)
+{
+ struct blksnap_version __user *user_version =
+ (struct blksnap_version __user *)arg;
+
+ if (copy_to_user(user_version, &version, sizeof(version))) {
+ pr_err("Unable to get version: invalid user buffer\n");
+ return -EFAULT;
+ }
+
+ return 0;
+}
+
+static_assert(sizeof(uuid_t) == sizeof(struct blksnap_uuid),
+ "Invalid size of struct blksnap_uuid.");
+
+static int ioctl_snapshot_create(unsigned long arg)
+{
+ struct blksnap_uuid __user *user_id = (struct blksnap_uuid __user *)arg;
+ uuid_t kernel_id;
+ int ret;
+
+ ret = snapshot_create(&kernel_id);
+ if (ret)
+ return ret;
+
+ if (copy_to_user(user_id->b, kernel_id.b, sizeof(uuid_t))) {
+ pr_err("Unable to create snapshot: invalid user buffer\n");
+ return -EFAULT;
+ }
+
+ return 0;
+}
+
+static int ioctl_snapshot_destroy(unsigned long arg)
+{
+ struct blksnap_uuid __user *user_id = (struct blksnap_uuid __user *)arg;
+ uuid_t kernel_id;
+
+ if (copy_from_user(kernel_id.b, user_id->b, sizeof(uuid_t))) {
+ pr_err("Unable to destroy snapshot: invalid user buffer\n");
+ return -EFAULT;
+ }
+
+ return snapshot_destroy(&kernel_id);
+}
+
+static int ioctl_snapshot_append_storage(unsigned long arg)
+{
+ int ret;
+ struct blksnap_snapshot_append_storage __user *uarg =
+ (struct blksnap_snapshot_append_storage __user *)arg;
+ struct blksnap_snapshot_append_storage karg;
+ char *bdev_path = NULL;
+
+ pr_debug("Append difference storage\n");
+
+ if (copy_from_user(&karg, uarg, sizeof(karg))) {
+ pr_err("Unable to append difference storage: invalid user buffer\n");
+ return -EFAULT;
+ }
+
+ bdev_path = strndup_user(u64_to_user_ptr(karg.bdev_path),
+ karg.bdev_path_size);
+ if (IS_ERR(bdev_path)) {
+ pr_err("Unable to append difference storage: invalid block device name buffer\n");
+ return PTR_ERR(bdev_path);
+ }
+
+ ret = snapshot_append_storage((uuid_t *)karg.id.b, bdev_path,
+ u64_to_user_ptr(karg.ranges), karg.count);
+ kfree(bdev_path);
+ return ret;
+}
+
+static int ioctl_snapshot_take(unsigned long arg)
+{
+ struct blksnap_uuid __user *user_id = (struct blksnap_uuid __user *)arg;
+ uuid_t kernel_id;
+
+ if (copy_from_user(kernel_id.b, user_id->b, sizeof(uuid_t))) {
+ pr_err("Unable to take snapshot: invalid user buffer\n");
+ return -EFAULT;
+ }
+
+ return snapshot_take(&kernel_id);
+}
+
+static int ioctl_snapshot_collect(unsigned long arg)
+{
+ int ret;
+ struct blksnap_snapshot_collect karg;
+
+ if (copy_from_user(&karg, (const void __user *)arg, sizeof(karg))) {
+ pr_err("Unable to collect available snapshots: invalid user buffer\n");
+ return -EFAULT;
+ }
+
+ ret = snapshot_collect(&karg.count, u64_to_user_ptr(karg.ids));
+
+ if (copy_to_user((void __user *)arg, &karg, sizeof(karg))) {
+ pr_err("Unable to collect available snapshots: invalid user buffer\n");
+ return -EFAULT;
+ }
+
+ return ret;
+}
+
+static_assert(sizeof(struct blksnap_snapshot_event) == 4096,
+ "The size of struct blksnap_snapshot_event should be equal to the size of the page.");
+
+static int ioctl_snapshot_wait_event(unsigned long arg)
+{
+ int ret = 0;
+ struct blksnap_snapshot_event __user *uarg =
+ (struct blksnap_snapshot_event __user *)arg;
+ struct blksnap_snapshot_event *karg;
+ struct event *ev;
+
+ karg = kzalloc(sizeof(struct blksnap_snapshot_event), GFP_KERNEL);
+ if (!karg)
+ return -ENOMEM;
+
+ /* Copy only snapshot ID and timeout */
+ if (copy_from_user(karg, uarg, sizeof(uuid_t) + sizeof(__u32))) {
+ pr_err("Unable to get snapshot event. Invalid user buffer\n");
+ ret = -EFAULT;
+ goto out;
+ }
+
+ ev = snapshot_wait_event((uuid_t *)karg->id.b, karg->timeout_ms);
+ if (IS_ERR(ev)) {
+ ret = PTR_ERR(ev);
+ goto out;
+ }
+
+ pr_debug("Received event=%lld code=%d data_size=%d\n", ev->time,
+ ev->code, ev->data_size);
+ karg->code = ev->code;
+ karg->time_label = ev->time;
+
+ if (ev->data_size > sizeof(karg->data)) {
+ pr_err("Event size %d is too big\n", ev->data_size);
+ ret = -ENOSPC;
+ /* If we can't copy all the data, we copy only part of it. */
+ memcpy(karg->data, ev->data, sizeof(karg->data));
+ } else {
+ memcpy(karg->data, ev->data, ev->data_size);
+ }
+ event_free(ev);
+
+ if (copy_to_user(uarg, karg, sizeof(struct blksnap_snapshot_event))) {
+ pr_err("Unable to get snapshot event. Invalid user buffer\n");
+ ret = -EFAULT;
+ }
+out:
+ kfree(karg);
+
+ return ret;
+}
+
+static int (*const blksnap_ioctl_table[])(unsigned long arg) = {
+ ioctl_version,
+ ioctl_snapshot_create,
+ ioctl_snapshot_destroy,
+ ioctl_snapshot_append_storage,
+ ioctl_snapshot_take,
+ ioctl_snapshot_collect,
+ ioctl_snapshot_wait_event,
+};
+
+static_assert(
+ sizeof(blksnap_ioctl_table) ==
+ ((blksnap_ioctl_snapshot_wait_event + 1) * sizeof(void *)),
+ "The size of table blksnap_ioctl_table does not match the enum blksnap_ioctl.");
+
+static long ctrl_unlocked_ioctl(struct file *filp, unsigned int cmd,
+ unsigned long arg)
+{
+ int nr = _IOC_NR(cmd);
+
+ if (nr >= ARRAY_SIZE(blksnap_ioctl_table))
+ return -ENOTTY;
+
+ if (!blksnap_ioctl_table[nr])
+ return -ENOTTY;
+
+ return blksnap_ioctl_table[nr](arg);
+}
+
+static const struct file_operations blksnap_ctrl_fops = {
+ .owner = THIS_MODULE,
+ .unlocked_ioctl = ctrl_unlocked_ioctl,
+};
+
+static struct miscdevice blksnap_ctrl_misc = {
+ .minor = MISC_DYNAMIC_MINOR,
+ .name = BLKSNAP_CTL,
+ .fops = &blksnap_ctrl_fops,
+};
+
+static int __init parameters_init(void)
+{
+ pr_debug("tracking_block_minimum_shift: %d\n",
+ tracking_block_minimum_shift);
+ pr_debug("tracking_block_maximum_shift: %d\n",
+ tracking_block_maximum_shift);
+ pr_debug("tracking_block_maximum_count: %d\n",
+ tracking_block_maximum_count);
+
+ pr_debug("chunk_minimum_shift: %d\n", chunk_minimum_shift);
+ pr_debug("chunk_maximum_shift: %d\n", chunk_maximum_shift);
+ pr_debug("chunk_maximum_count_shift: %u\n", chunk_maximum_count_shift);
+
+ pr_debug("chunk_maximum_in_queue: %d\n", chunk_maximum_in_queue);
+ pr_debug("free_diff_buffer_pool_size: %d\n",
+ free_diff_buffer_pool_size);
+ pr_debug("diff_storage_minimum: %d\n", diff_storage_minimum);
+
+ if (tracking_block_maximum_shift < tracking_block_minimum_shift) {
+ tracking_block_maximum_shift = tracking_block_minimum_shift;
+ pr_warn("fixed tracking_block_maximum_shift: %d\n",
+ tracking_block_maximum_shift);
+ }
+
+ if (chunk_maximum_shift < chunk_minimum_shift) {
+ chunk_maximum_shift = chunk_minimum_shift;
+ pr_warn("fixed chunk_maximum_shift: %d\n",
+ chunk_maximum_shift);
+ }
+
+ /*
+ * The XArray is used to store chunks, and 'unsigned long' is used as the
+ * chunk number parameter. So, the number of chunks cannot exceed
+ * ULONG_MAX.
+ */
+ if (sizeof(unsigned long) < 4u)
+ chunk_maximum_count_shift = min(16u, chunk_maximum_count_shift);
+ else if (sizeof(unsigned long) == 4)
+ chunk_maximum_count_shift = min(32u, chunk_maximum_count_shift);
+ else if (sizeof(unsigned long) >= 8)
+ chunk_maximum_count_shift = min(64u, chunk_maximum_count_shift);
+
+ return 0;
+}
+
+static int __init blksnap_init(void)
+{
+ int ret;
+
+ pr_debug("Loading\n");
+ pr_debug("Version: %s\n", VERSION_STR);
+
+ ret = parameters_init();
+ if (ret)
+ return ret;
+
+ ret = chunk_init();
+ if (ret)
+ goto fail_chunk_init;
+
+ ret = tracker_init();
+ if (ret)
+ goto fail_tracker_init;
+
+ ret = misc_register(&blksnap_ctrl_misc);
+ if (ret)
+ goto fail_misc_register;
+
+ return 0;
+
+fail_misc_register:
+ tracker_done();
+fail_tracker_init:
+ chunk_done();
+fail_chunk_init:
+
+ return ret;
+}
+
+static void __exit blksnap_exit(void)
+{
+ pr_debug("Unloading module\n");
+
+ misc_deregister(&blksnap_ctrl_misc);
+
+ chunk_done();
+ snapshot_done();
+ tracker_done();
+
+ pr_debug("Module was unloaded\n");
+}
+
+module_init(blksnap_init);
+module_exit(blksnap_exit);
+
+module_param_named(tracking_block_minimum_shift, tracking_block_minimum_shift,
+ uint, 0644);
+MODULE_PARM_DESC(tracking_block_minimum_shift,
+ "The power of 2 for minimum tracking block size");
+module_param_named(tracking_block_maximum_count, tracking_block_maximum_count,
+ uint, 0644);
+MODULE_PARM_DESC(tracking_block_maximum_count,
+ "The maximum number of tracking blocks");
+module_param_named(tracking_block_maximum_shift, tracking_block_maximum_shift,
+ uint, 0644);
+MODULE_PARM_DESC(tracking_block_maximum_shift,
+ "The power of 2 for maximum tracking block size");
+module_param_named(chunk_minimum_shift, chunk_minimum_shift, uint, 0644);
+MODULE_PARM_DESC(chunk_minimum_shift,
+ "The power of 2 for minimum chunk size");
+module_param_named(chunk_maximum_count_shift, chunk_maximum_count_shift,
+ uint, 0644);
+MODULE_PARM_DESC(chunk_maximum_count_shift,
+ "The power of 2 for maximum number of chunks");
+module_param_named(chunk_maximum_shift, chunk_maximum_shift, uint, 0644);
+MODULE_PARM_DESC(chunk_maximum_shift,
+ "The power of 2 for maximum snapshot chunk size");
+module_param_named(chunk_maximum_in_queue, chunk_maximum_in_queue, uint, 0644);
+MODULE_PARM_DESC(chunk_maximum_in_queue,
+ "The maximum number of chunks in store queue");
+module_param_named(free_diff_buffer_pool_size, free_diff_buffer_pool_size,
+ uint, 0644);
+MODULE_PARM_DESC(free_diff_buffer_pool_size,
+ "The size of the pool of preallocated difference buffers");
+module_param_named(diff_storage_minimum, diff_storage_minimum, uint, 0644);
+MODULE_PARM_DESC(diff_storage_minimum,
+ "The minimum allowable size of the difference storage in sectors");
+
+MODULE_DESCRIPTION("Block Device Snapshots Module");
+MODULE_VERSION(VERSION_STR);
+MODULE_AUTHOR("Veeam Software Group GmbH");
+MODULE_LICENSE("GPL");
diff --git a/drivers/block/blksnap/params.h b/drivers/block/blksnap/params.h
new file mode 100644
index 000000000000..85606e1a8746
--- /dev/null
+++ b/drivers/block/blksnap/params.h
@@ -0,0 +1,16 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright (C) 2023 Veeam Software Group GmbH */
+#ifndef __BLKSNAP_PARAMS_H
+#define __BLKSNAP_PARAMS_H
+
+unsigned int get_tracking_block_minimum_shift(void);
+unsigned int get_tracking_block_maximum_shift(void);
+unsigned int get_tracking_block_maximum_count(void);
+unsigned int get_chunk_minimum_shift(void);
+unsigned int get_chunk_maximum_shift(void);
+unsigned long get_chunk_maximum_count(void);
+unsigned int get_chunk_maximum_in_queue(void);
+unsigned int get_free_diff_buffer_pool_size(void);
+unsigned int get_diff_storage_minimum(void);
+
+#endif /* __BLKSNAP_PARAMS_H */
--
2.20.1
I'm of course a little biased by having spent a lot of my own time
on this, but this version now looks ready to merge to me:
Acked-by: Christoph Hellwig <[email protected]>
But as Jens just merged my series to rework the block open flags, we'll also
need to fold this in:
diff --git a/drivers/block/blksnap/diff_area.c b/drivers/block/blksnap/diff_area.c
index 169fa003b6d66d..0848c947591508 100644
--- a/drivers/block/blksnap/diff_area.c
+++ b/drivers/block/blksnap/diff_area.c
@@ -128,7 +128,7 @@ void diff_area_free(struct kref *kref)
xa_destroy(&diff_area->chunk_map);
if (diff_area->orig_bdev) {
- blkdev_put(diff_area->orig_bdev, FMODE_READ | FMODE_WRITE);
+ blkdev_put(diff_area->orig_bdev, NULL);
diff_area->orig_bdev = NULL;
}
@@ -214,7 +214,8 @@ struct diff_area *diff_area_new(dev_t dev_id, struct diff_storage *diff_storage)
pr_debug("Open device [%u:%u]\n", MAJOR(dev_id), MINOR(dev_id));
- bdev = blkdev_get_by_dev(dev_id, FMODE_READ | FMODE_WRITE, NULL, NULL);
+ bdev = blkdev_get_by_dev(dev_id, BLK_OPEN_READ | BLK_OPEN_WRITE, NULL,
+ NULL);
if (IS_ERR(bdev)) {
int err = PTR_ERR(bdev);
@@ -224,7 +225,7 @@ struct diff_area *diff_area_new(dev_t dev_id, struct diff_storage *diff_storage)
diff_area = kzalloc(sizeof(struct diff_area), GFP_KERNEL);
if (!diff_area) {
- blkdev_put(bdev, FMODE_READ | FMODE_WRITE);
+ blkdev_put(bdev, NULL);
return ERR_PTR(-ENOMEM);
}
diff --git a/drivers/block/blksnap/diff_storage.c b/drivers/block/blksnap/diff_storage.c
index 1787fa6931a816..f3814474b9804a 100644
--- a/drivers/block/blksnap/diff_storage.c
+++ b/drivers/block/blksnap/diff_storage.c
@@ -123,7 +123,7 @@ void diff_storage_free(struct kref *kref)
}
while ((storage_bdev = first_storage_bdev(diff_storage))) {
- blkdev_put(storage_bdev->bdev, FMODE_READ | FMODE_WRITE);
+ blkdev_put(storage_bdev->bdev, NULL);
list_del(&storage_bdev->link);
kfree(storage_bdev);
}
@@ -138,7 +138,7 @@ static struct block_device *diff_storage_add_storage_bdev(
struct storage_bdev *storage_bdev, *existing_bdev = NULL;
struct block_device *bdev;
- bdev = blkdev_get_by_path(bdev_path, FMODE_READ | FMODE_WRITE,
+ bdev = blkdev_get_by_path(bdev_path, BLK_OPEN_READ | BLK_OPEN_WRITE,
NULL, NULL);
if (IS_ERR(bdev)) {
pr_err("Failed to open device. errno=%ld\n", PTR_ERR(bdev));
@@ -153,14 +153,14 @@ static struct block_device *diff_storage_add_storage_bdev(
spin_unlock(&diff_storage->lock);
if (existing_bdev->bdev == bdev) {
- blkdev_put(bdev, FMODE_READ | FMODE_WRITE);
+ blkdev_put(bdev, NULL);
return existing_bdev->bdev;
}
storage_bdev = kzalloc(sizeof(struct storage_bdev) +
strlen(bdev_path) + 1, GFP_KERNEL);
if (!storage_bdev) {
- blkdev_put(bdev, FMODE_READ | FMODE_WRITE);
+ blkdev_put(bdev, NULL);
return ERR_PTR(-ENOMEM);
}
On Mon, Jun 12, 2023 at 03:52:17PM +0200, Sergei Shtepa wrote:
> Hi all.
>
> I am happy to offer a improved version of the Block Devices Snapshots
> Module. It allows to create non-persistent snapshots of any block devices.
> The main purpose of such snapshots is to provide backups of block devices.
> See more in Documentation/block/blksnap.rst.
How does blksnap interact with blk-crypto?
I.e., what happens if a bio with a ->bi_crypt_context set is submitted to a
block device that has blksnap active?
If you are unfamiliar with blk-crypto, please read
Documentation/block/inline-encryption.rst
It looks like blksnap hooks into the block layer directly, via the new
"blkfilter" mechanism. I'm concerned that it might ignore ->bi_crypt_context
and write data to the disk in plaintext, when it is supposed to be encrypted.
- Eric
On Mon, Jun 12, 2023 at 09:19:11AM -0700, Eric Biggers wrote:
> On Mon, Jun 12, 2023 at 03:52:17PM +0200, Sergei Shtepa wrote:
> > Hi all.
> >
> > I am happy to offer a improved version of the Block Devices Snapshots
> > Module. It allows to create non-persistent snapshots of any block devices.
> > The main purpose of such snapshots is to provide backups of block devices.
> > See more in Documentation/block/blksnap.rst.
>
> How does blksnap interact with blk-crypto?
>
> I.e., what happens if a bio with a ->bi_crypt_context set is submitted to a
> block device that has blksnap active?
>
> If you are unfamiliar with blk-crypto, please read
> Documentation/block/inline-encryption.rst
>
> It looks like blksnap hooks into the block layer directly, via the new
> "blkfilter" mechanism. I'm concerned that it might ignore ->bi_crypt_context
> and write data to the disk in plaintext, when it is supposed to be encrypted.
Yeah. Same for integrity. I guess for now the best would be to
not allow attaching a filter to block devices that have encryption or
integrity enabled and then look into that as a separate project fully
reviewed by the respective experts.
On 6/12/23 18:19, Eric Biggers wrote:
> On Mon, Jun 12, 2023 at 03:52:17PM +0200, Sergei Shtepa wrote:
> > Hi all.
> >
> > I am happy to offer a improved version of the Block Devices Snapshots
> > Module. It allows to create non-persistent snapshots of any block devices.
> > The main purpose of such snapshots is to provide backups of block devices.
> > See more in Documentation/block/blksnap.rst.
>
> How does blksnap interact with blk-crypto?
>
> I.e., what happens if a bio with a ->bi_crypt_context set is submitted to a
> block device that has blksnap active?
>
> If you are unfamiliar with blk-crypto, please read
> Documentation/block/inline-encryption.rst
Thank you, this is an important point. Yes, that's right.
The current version of blksnap can cause blk-crypto to malfunction while
holding a snapshot. When handling bios from the file system, the
->bi_crypt_context is preserved. But the bio requests serving the snapshot
are executed without context. I think that the snapshot will be unreadable.
But I don't see any obstacles in the way of blksnap and blk-crypto
compatibility. If DM implements support for blk-crypto, then the same
principle can be applied for blksnap. I think that the integration of
blksnap with blk-crypto may be one of the stages of further development.
The dm-crypt target should work properly.
It is noteworthy that in 7 years of using the out-of-tree module to take
a snapshot, I have not encountered cases of such problems.
But incompatibility with blk-crypto is possible; it may already be a pain
for some users. I will request this information from our support team.
>
> It looks like blksnap hooks into the block layer directly, via the new
> "blkfilter" mechanism. I'm concerned that it might ignore ->bi_crypt_context
> and write data to the disk in plaintext, when it is supposed to be encrypted.
No. The "blkfilter" mechanism should not affect the operation of blk-crypto.
It does not change the bio.
Only a module that has been attached and provides its own filtering algorithm,
such as blksnap, can violate the logic of blk-crypto.
Therefore, until the blksnap module is loaded, blk-crypto should work as before.
On Mon, Jun 12, 2023 at 03:52:21PM +0200, Sergei Shtepa wrote:
> The header file contains a set of declarations, structures and control
> requests (ioctl) that allows to manage the module from the user space.
>
> Co-developed-by: Christoph Hellwig <[email protected]>
> Signed-off-by: Christoph Hellwig <[email protected]>
> Tested-by: Donald Buczek <[email protected]>
> Signed-off-by: Sergei Shtepa <[email protected]>
> ---
> MAINTAINERS | 1 +
> include/uapi/linux/blksnap.h | 421 +++++++++++++++++++++++++++++++++++
> 2 files changed, 422 insertions(+)
> create mode 100644 include/uapi/linux/blksnap.h
.....
> +/**
> + * struct blksnap_snapshot_append_storage - Argument for the
> + * &IOCTL_BLKSNAP_SNAPSHOT_APPEND_STORAGE control.
> + *
> + * @id:
> + * Snapshot ID.
> + * @bdev_path:
> + * Device path string buffer.
> + * @bdev_path_size:
> + * Device path string buffer size.
> + * @count:
> + * Size of @ranges in the number of &struct blksnap_sectors.
> + * @ranges:
> + * Pointer to the array of &struct blksnap_sectors.
> + */
> +struct blksnap_snapshot_append_storage {
> + struct blksnap_uuid id;
> + __u64 bdev_path;
> + __u32 bdev_path_size;
> + __u32 count;
> + __u64 ranges;
> +};
> +
> +/**
> + * define IOCTL_BLKSNAP_SNAPSHOT_APPEND_STORAGE - Append storage to the
> + * difference storage of the snapshot.
> + *
> + * The snapshot difference storage can be set either before or after creating
> + * the snapshot images. This allows to dynamically expand the difference
> + * storage while holding the snapshot.
> + *
> + * Return: 0 if succeeded, negative errno otherwise.
> + */
> +#define IOCTL_BLKSNAP_SNAPSHOT_APPEND_STORAGE \
> + _IOW(BLKSNAP, blksnap_ioctl_snapshot_append_storage, \
> + struct blksnap_snapshot_append_storage)
That's an API I'm extremely uncomfortable with. We've learnt the
lesson *many times* that userspace physical mappings of underlying
file storage are unreliable.
i.e. This is reliant on userspace telling the kernel the physical
mapping of the filesystem file to block device LBA space and then
providing a guarantee (somehow) that the mapping will always remain
unchanged. i.e. It's reliant on passing FIEMAP data from the
filesystem to userspace and then back into the kernel without it
becoming stale and somehow providing a guarantee that nothing (not
even the filesystem doing internal garbage collection) will change
it.
It is reliant on userspace detecting shared blocks in files and
avoiding them; it's reliant on userspace never being able to read,
write or modify that file; it's reliant on the -filesystem- never
modifying the layout of that file; it's even reliant on a internal
filesystem state that has to be locked down before the block mapping
can be delegated to a third party for IO control.
Further, we can't allow userspace to have any read access to the
snapshot file even after it is no longer in use by the blksnap
driver. The contents of the file will span multiple security
contexts, contain sensitive data, etc., and so its contents must
never be exposed to userspace. We cannot rely on userspace to delete
it safely after use and hence we have to protect its contents
from exposure to userspace, too.
We already have a mechanism that provides all these guarantees to a
third party kernel subsystem: swap files.
We already have a trusted path in the kernel to allow internal block
mapping of a swap file to be retrieved by the mm subsystem. We also
have an inode flag that protects such files against access and
modification from anything other than internal kernel IO paths. We
also allow them to be allocated as unwritten extents using
fallocate() and they are never converted to written while in use as
a swapfile. Hence the contents of them cannot be exposed to
userspace even if the swapfile flag is removed and owner/permission
changes are made to the file after it is released by the kernel.
Swap files are an intrinsically safe mechanism for delegating fixed
file mappings to kernel subsystems that have requirements for
secure, trusted storage that userspace cannot tamper with.
I note that the code behind the
IOCTL_BLKSNAP_SNAPSHOT_APPEND_STORAGE ends up in
diff_storage_add_range(), which allocates an extent structure for
each range and links it into a linked list for later use.
This is effectively the same structure that the mm swapfile code
uses. It provides a swap_info_struct and a struct file to the
filesystem via aops->swap_activate. The filesystem then iterates the
extent list for the file and calls add_swap_extent() for each
physical range in the file. The mm code then allocates a new extent
structure for the range and links it into the extent rbtree in the
swap_info_struct. This is the mapping it uses later on in the IO
path.
Adding a similar, more generic mapping operation that allows a
private structure and a callback to be provided would allow the
filesystem to provide this callback directly to subsystems like
blksnap. Essentially diff_storage_add_range() becomes the iterator
callback for blksnap. This makes the whole "userspace provides the
mapping" problem goes away and we can use the swapfile mechanisms to
provide all the other guarantees the kernel needs to ensure it can
trust the contents and mappings of the blksnap snapshot files....
Thoughts?
-Dave.
--
Dave Chinner
[email protected]
On Wed, Jun 14, 2023 at 08:25:15AM +1000, Dave Chinner wrote:
> > + * Return: 0 if succeeded, negative errno otherwise.
> > + */
> > +#define IOCTL_BLKSNAP_SNAPSHOT_APPEND_STORAGE \
> > + _IOW(BLKSNAP, blksnap_ioctl_snapshot_append_storage, \
> > + struct blksnap_snapshot_append_storage)
>
> That's an API I'm extremely uncomfortable with. We've learnt the
> lesson *many times* that userspace physical mappings of underlying
> file storage are unreliable.
>
> i.e. This is reliant on userspace telling the kernel the physical
> mapping of the filesystem file to block device LBA space and then
> providing a guarantee (somehow) that the mapping will always remain
> unchanged. i.e. It's reliant on passing FIEMAP data from the
> filesystem to userspace and then back into the kernel without it
> becoming stale and somehow providing a guarantee that nothing (not
> even the filesystem doing internal garbage collection) will change
> it.
Hmm, I never thought of this API as used on files that somewhere
had a logical to physical mapping applied to them.
Sergey, is that the intended use case? If so we really should
be going through the file system using direct I/O.
On 6/14/23 08:26, Christoph Hellwig wrote:
> Subject: Re: [PATCH v5 04/11] blksnap: header file of the module interface
>
> On Wed, Jun 14, 2023 at 08:25:15AM +1000, Dave Chinner wrote:
>>> + * Return: 0 if succeeded, negative errno otherwise.
>>> + */
>>> +#define IOCTL_BLKSNAP_SNAPSHOT_APPEND_STORAGE \
>>> + _IOW(BLKSNAP, blksnap_ioctl_snapshot_append_storage, \
>>> + struct blksnap_snapshot_append_storage)
>> That's an API I'm extremely uncomfortable with. We've learnt the
>> lesson *many times* that userspace physical mappings of underlying
>> file storage are unreliable.
>>
>> i.e. This is reliant on userspace telling the kernel the physical
>> mapping of the filesystem file to block device LBA space and then
>> providing a guarantee (somehow) that the mapping will always remain
>> unchanged. i.e. It's reliant on passing FIEMAP data from the
>> filesystem to userspace and then back into the kernel without it
>> becoming stale and somehow providing a guarantee that nothing (not
>> even the filesystem doing internal garbage collection) will change
>> it.
> Hmm, I never thought of this API as used on files that somewhere
> had a logical to physical mapping applied to them.
>
> Sergey, is that the intended use case? If so we really should
> be going through the file system using direct I/O.
>
Hi!
Thank you, Dave, for such a detailed comment.
Yes, everything is really as you described.
This code worked quite successfully for the veeamsnap module, on the
basis of which blksnap was created. Indeed, such an allocation of an
area on a block device using a file does not look safe.
We've already discussed this with Donald Buczek <[email protected]>.
Link: https://github.com/veeam/blksnap/issues/57#issuecomment-1576569075
And I have planned work on moving to a more secure ioctl in the future.
Link: https://github.com/veeam/blksnap/issues/61
Now, thanks to Dave, it becomes clear to me how to solve this problem best.
swapfile is a good example of how to do it right.
Fixing this vulnerability will entail moving the algorithm for
allocating the difference storage from userspace into the blksnap code.
The changes are quite significant. The UAPI will be changed.
So I agree that the blksnap module is not good enough for upstream yet.
On Wed, Jun 14, 2023 at 11:26:20AM +0200, Sergei Shtepa wrote:
> This code worked quite successfully for the veeamsnap module, on the
> basis of which blksnap was created. Indeed, such an allocation of an
> area on a block device using a file does not look safe.
>
> We've already discussed this with Donald Buczek <[email protected]>.
> Link: https://github.com/veeam/blksnap/issues/57#issuecomment-1576569075
> And I have planned work on moving to a more secure ioctl in the future.
> Link: https://github.com/veeam/blksnap/issues/61
>
> Now, thanks to Dave, it becomes clear to me how to solve this problem best.
> swapfile is a good example of how to do it right.
I don't actually think swapfile is a very good idea, in fact the Linux
swap code in general is not a very good place to look for inspirations
:)
IFF the usage is always to have a whole file for the diff storage, the
overall API is very simple - just pass a fd to the kernel for the area,
and then use in-kernel direct I/O on it. Now if that file should also
be able to reside on the same file system that the snapshot is taken
of things get a little more complicated, because writes to it also need
to automatically set the BIO_REFFED flag. I have some ideas for that
and will share some draft code with you.
On 6/14/23 16:07, Christoph Hellwig wrote:
> I don't actually think swapfile is a very good idea, in fact the Linux
> swap code in general is not a very good place to look for inspirations
> :)
Perhaps. I haven't looked at the code yet. But I like the idea of
protecting the file from any access from the user-space, as it is
implemented for swapfile.
>
> IFF the usage is always to have a whole file for the diff storage, the
> overall API is very simple - just pass a fd to the kernel for the area,
> and then use in-kernel direct I/O on it. Now if that file should also
> be able to reside on the same file system that the snapshot is taken
> of things get a little more complicated, because writes to it also need
> to automatically set the BIO_REFFED flag.
There is definitely no requirement to create the difference storage file
on the same block device for which the snapshot is being created. The
file can be created on any block device.
Still, the variant in which a whole partition is allocated for the
difference storage can also be useful.
> I have some ideas for that and will share some draft code with you.
I'll be glad.
On Tue, Jun 13, 2023 at 12:12:19PM +0200, Sergei Shtepa wrote:
> On 6/12/23 18:19, Eric Biggers wrote:
> > On Mon, Jun 12, 2023 at 03:52:17PM +0200, Sergei Shtepa wrote:
> > > Hi all.
> > >
> > > I am happy to offer an improved version of the Block Devices Snapshots
> > > Module. It allows creating non-persistent snapshots of any block devices.
> > > The main purpose of such snapshots is to provide backups of block devices.
> > > See more in Documentation/block/blksnap.rst.
> >
> > How does blksnap interact with blk-crypto?
> >
> > I.e., what happens if a bio with a ->bi_crypt_context set is submitted to a
> > block device that has blksnap active?
> >
> > If you are unfamiliar with blk-crypto, please read
> > Documentation/block/inline-encryption.rst
>
> Thank you, this is an important point. Yes, that's right.
> The current version of blksnap can cause blk-crypto to malfunction while
> holding a snapshot. When handling bios from the file system, the
> ->bi_crypt_context is preserved. But the bio requests serving the snapshot
> are executed without context. I think that the snapshot will be unreadable.
Well, not only would the resulting snapshot be unreadable, but plaintext data
would be written to disk, contrary to the intent of the submitter of the bios.
That would be a security vulnerability.
If the initial version of blksnap isn't going to be compatible with blk-crypto,
that is tolerable for now, but there needs to be an explicit check to cause an
error to be returned if the two features are combined, before anything is
written to disk.
- Eric
On Wed, Jun 14, 2023 at 07:07:16AM -0700, Christoph Hellwig wrote:
> On Wed, Jun 14, 2023 at 11:26:20AM +0200, Sergei Shtepa wrote:
> > This code worked quite successfully for the veeamsnap module, on the
> > basis of which blksnap was created. Indeed, such an allocation of an
> > area on a block device using a file does not look safe.
> >
> > We've already discussed this with Donald Buczek <[email protected]>.
> > Link: https://github.com/veeam/blksnap/issues/57#issuecomment-1576569075
> > And I have planned work on moving to a more secure ioctl in the future.
> > Link: https://github.com/veeam/blksnap/issues/61
> >
> > Now, thanks to Dave, it becomes clear to me how to solve this problem best.
> > swapfile is a good example of how to do it right.
>
> I don't actually think swapfile is a very good idea, in fact the Linux
> swap code in general is not a very good place to look for inspirations
> :)
Yeah, the swapfile implementation isn't very nice; I was really just
using it as an example of how we can implement the requirements of
block mapping delegation in a safe manner to a kernel subsystem.
I think the important part is the swapfile inode flag, because that
is what keeps userspace from being able to screw with the file while
the kernel is using it and allows us to do read/write IO to
unwritten extents without converting them to written...
> IFF the usage is always to have a whole file for the diff storage, the
> overall API is very simple - just pass a fd to the kernel for the area,
> and then use in-kernel direct I/O on it.
Yeah, I was thinking a fd is a better choice for the UAPI as it
frees up the kernel implementation, and it doesn't need us to pass a
separate bdev identifier in the ioctl. It also means we can pass a
regular file or a block device and the kernel code doesn't need to
care that they are different.
If you think direct IO is a better idea, then I have no objection to
that - I haven't looked into the implementation that deeply at this
point. I wanted to get an understanding of how all the pieces went
together first, so all I've read is the documentation and looked at
the UAPI.
I made a leap from that: the documentation keeps talking about using
files on the filesystem for the difference storage, but the only UAPI
for telling the kernel about storage regions it can use is this
physical bdev LBA mapping ioctl. Hence if file storage is being
used....
> Now if that file should also
> be able to reside on the same file system that the snapshot is taken
> of things get a little more complicated, because writes to it also need
> to automatically set the BIO_REFFED flag. I have some ideas for that
> and will share some draft code with you.
Cool, I look forward to the updates; I know of a couple of
applications that could make use of this functionality right
away....
Cheers,
Dave.
--
Dave Chinner
[email protected]
Hi,
On 2023/06/12 21:52, Sergei Shtepa wrote:
> The block device filtering mechanism is an API that allows attaching
> block device filters. Block device filters allow performing additional
> processing of I/O units.
>
> The idea of handling I/O units on block devices is not new. Back in the
> 2.6 kernel, there was an undocumented possibility of handling I/O units
> by substituting the make_request_fn() function, which belonged to the
> request_queue structure. But none of the in-tree kernel modules used
> this feature, and it was eliminated in the 5.10 kernel.
>
> The block device filtering mechanism brings back the ability to handle
> I/O units. It is possible to safely attach a filter to a block device
> "on the fly" without changing the structure of the block device stack.
>
> Co-developed-by: Christoph Hellwig <[email protected]>
> Signed-off-by: Christoph Hellwig <[email protected]>
> Tested-by: Donald Buczek <[email protected]>
> Signed-off-by: Sergei Shtepa <[email protected]>
> ---
> MAINTAINERS | 3 +
> block/Makefile | 3 +-
> block/bdev.c | 1 +
> block/blk-core.c | 27 ++++
> block/blk-filter.c | 213 ++++++++++++++++++++++++++++++++
> block/blk.h | 11 ++
> block/genhd.c | 10 ++
> block/ioctl.c | 7 ++
> block/partitions/core.c | 10 ++
> include/linux/blk-filter.h | 51 ++++++++
> include/linux/blk_types.h | 2 +
> include/linux/blkdev.h | 1 +
> include/uapi/linux/blk-filter.h | 35 ++++++
> include/uapi/linux/fs.h | 3 +
> 14 files changed, 376 insertions(+), 1 deletion(-)
> create mode 100644 block/blk-filter.c
> create mode 100644 include/linux/blk-filter.h
> create mode 100644 include/uapi/linux/blk-filter.h
>
> diff --git a/MAINTAINERS b/MAINTAINERS
> index d801b8985b43..8336b6143a71 100644
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -3585,6 +3585,9 @@ M: Sergei Shtepa <[email protected]>
> L: [email protected]
> S: Supported
> F: Documentation/block/blkfilter.rst
> +F: block/blk-filter.c
> +F: include/linux/blk-filter.h
> +F: include/uapi/linux/blk-filter.h
>
> BLOCK LAYER
> M: Jens Axboe <[email protected]>
> diff --git a/block/Makefile b/block/Makefile
> index 46ada9dc8bbf..041c54eb0240 100644
> --- a/block/Makefile
> +++ b/block/Makefile
> @@ -9,7 +9,8 @@ obj-y := bdev.o fops.o bio.o elevator.o blk-core.o blk-sysfs.o \
> blk-lib.o blk-mq.o blk-mq-tag.o blk-stat.o \
> blk-mq-sysfs.o blk-mq-cpumap.o blk-mq-sched.o ioctl.o \
> genhd.o ioprio.o badblocks.o partitions/ blk-rq-qos.o \
> - disk-events.o blk-ia-ranges.o early-lookup.o
> + disk-events.o blk-ia-ranges.o early-lookup.o \
> + blk-filter.o
>
> obj-$(CONFIG_BOUNCE) += bounce.o
> obj-$(CONFIG_BLK_DEV_BSG_COMMON) += bsg.o
> diff --git a/block/bdev.c b/block/bdev.c
> index 5c46ff107706..369f73b6097a 100644
> --- a/block/bdev.c
> +++ b/block/bdev.c
> @@ -429,6 +429,7 @@ struct block_device *bdev_alloc(struct gendisk *disk, u8 partno)
> return NULL;
> }
> bdev->bd_disk = disk;
> + bdev->bd_filter = NULL;
> return bdev;
> }
>
> diff --git a/block/blk-core.c b/block/blk-core.c
> index 2ae22bebeb3e..ede04c6ad021 100644
> --- a/block/blk-core.c
> +++ b/block/blk-core.c
> @@ -18,6 +18,7 @@
> #include <linux/blkdev.h>
> #include <linux/blk-pm.h>
> #include <linux/blk-integrity.h>
> +#include <linux/blk-filter.h>
> #include <linux/highmem.h>
> #include <linux/mm.h>
> #include <linux/pagemap.h>
> @@ -586,8 +587,24 @@ static inline blk_status_t blk_check_zone_append(struct request_queue *q,
> return BLK_STS_OK;
> }
>
> +static bool submit_bio_filter(struct bio *bio)
> +{
> + if (bio_flagged(bio, BIO_FILTERED))
> + return false;
> +
> + bio_set_flag(bio, BIO_FILTERED);
> + return bio->bi_bdev->bd_filter->ops->submit_bio(bio);
> +}
> +
> static void __submit_bio(struct bio *bio)
> {
> + /*
> + * If there is a filter driver attached, check if the BIO needs to go to
> + * the filter driver first, which can then pass on the bio or consume it.
> + */
> + if (bio->bi_bdev->bd_filter && submit_bio_filter(bio))
> + return;
> +
> if (unlikely(!blk_crypto_bio_prep(&bio)))
> return;
>
> @@ -677,6 +694,15 @@ static void __submit_bio_noacct_mq(struct bio *bio)
> current->bio_list = NULL;
> }
>
> +/**
> + * submit_bio_noacct_nocheck - re-submit a bio to the block device layer for I/O
> + * from block device filter.
> + * @bio: The bio describing the location in memory and on the device.
> + *
> + * This is a version of submit_bio() that shall only be used for I/O that is
> + * resubmitted to lower level by block device filters. All file systems and
> + * other upper level users of the block layer should use submit_bio() instead.
> + */
> void submit_bio_noacct_nocheck(struct bio *bio)
> {
> blk_cgroup_bio_start(bio);
> @@ -704,6 +730,7 @@ void submit_bio_noacct_nocheck(struct bio *bio)
> else
> __submit_bio_noacct(bio);
> }
> +EXPORT_SYMBOL_GPL(submit_bio_noacct_nocheck);
>
> /**
> * submit_bio_noacct - re-submit a bio to the block device layer for I/O
> diff --git a/block/blk-filter.c b/block/blk-filter.c
> new file mode 100644
> index 000000000000..bf31da6acf67
> --- /dev/null
> +++ b/block/blk-filter.c
> @@ -0,0 +1,213 @@
> +// SPDX-License-Identifier: GPL-2.0-only
> +/* Copyright (C) 2023 Veeam Software Group GmbH */
> +#include <linux/blk-filter.h>
> +#include <linux/blk-mq.h>
> +#include <linux/module.h>
> +
> +#include "blk.h"
> +
> +static LIST_HEAD(blkfilters);
> +static DEFINE_SPINLOCK(blkfilters_lock);
> +
> +static inline struct blkfilter_operations *__blkfilter_find(const char *name)
> +{
> + struct blkfilter_operations *ops;
> +
> + list_for_each_entry(ops, &blkfilters, link)
> + if (strncmp(ops->name, name, BLKFILTER_NAME_LENGTH) == 0)
> + return ops;
> +
> + return NULL;
> +}
> +
> +static inline struct blkfilter_operations *blkfilter_find_get(const char *name)
> +{
> + struct blkfilter_operations *ops;
> +
> + spin_lock(&blkfilters_lock);
> + ops = __blkfilter_find(name);
> + if (ops && !try_module_get(ops->owner))
> + ops = NULL;
> + spin_unlock(&blkfilters_lock);
> +
> + return ops;
> +}
> +
> +int blkfilter_ioctl_attach(struct block_device *bdev,
> + struct blkfilter_name __user *argp)
> +{
> + struct blkfilter_name name;
> + struct blkfilter_operations *ops;
> + struct blkfilter *flt;
> + int ret;
> +
> + if (copy_from_user(&name, argp, sizeof(name)))
> + return -EFAULT;
> +
> + ops = blkfilter_find_get(name.name);
> + if (!ops)
> + return -ENOENT;
> +
> + ret = freeze_bdev(bdev);
> + if (ret)
> + goto out_put_module;
> + blk_mq_freeze_queue(bdev->bd_queue);
> +
> + if (bdev->bd_filter) {
> + if (bdev->bd_filter->ops == ops)
> + ret = -EALREADY;
> + else
> + ret = -EBUSY;
> + goto out_unfreeze;
> + }
> +
> + flt = ops->attach(bdev);
> + if (IS_ERR(flt)) {
> + ret = PTR_ERR(flt);
> + goto out_unfreeze;
> + }
> +
> + flt->ops = ops;
> + bdev->bd_filter = flt;
> +
> +out_unfreeze:
> + blk_mq_unfreeze_queue(bdev->bd_queue);
> + thaw_bdev(bdev);
> +out_put_module:
> + if (ret)
> + module_put(ops->owner);
> + return ret;
> +}
> +
> +static void __blkfilter_detach(struct block_device *bdev)
> +{
> + struct blkfilter *flt = bdev->bd_filter;
> + const struct blkfilter_operations *ops = flt->ops;
> +
> + bdev->bd_filter = NULL;
> + ops->detach(flt);
> + module_put(ops->owner);
> +}
> +
> +void blkfilter_detach(struct block_device *bdev)
> +{
> + if (bdev->bd_filter) {
> + blk_mq_freeze_queue(bdev->bd_queue);
> + __blkfilter_detach(bdev);
> + blk_mq_unfreeze_queue(bdev->bd_queue);
> + }
> +}
> +
> +int blkfilter_ioctl_detach(struct block_device *bdev,
> + struct blkfilter_name __user *argp)
> +{
> + struct blkfilter_name name;
> + int error = 0;
> +
> + if (copy_from_user(&name, argp, sizeof(name)))
> + return -EFAULT;
> +
> + blk_mq_freeze_queue(bdev->bd_queue);
> + if (!bdev->bd_filter) {
> + error = -ENOENT;
> + goto out_unfreeze;
> + }
> + if (strncmp(bdev->bd_filter->ops->name, name.name,
> + BLKFILTER_NAME_LENGTH)) {
> + error = -EINVAL;
> + goto out_unfreeze;
> + }
> +
> + __blkfilter_detach(bdev);
> +out_unfreeze:
> + blk_mq_unfreeze_queue(bdev->bd_queue);
> + return error;
> +}
> +
> +int blkfilter_ioctl_ctl(struct block_device *bdev,
> + struct blkfilter_ctl __user *argp)
> +{
> + struct blkfilter_ctl ctl;
> + struct blkfilter *flt;
> + int ret;
> +
> + if (copy_from_user(&ctl, argp, sizeof(ctl)))
> + return -EFAULT;
> +
> + ret = blk_queue_enter(bdev_get_queue(bdev), 0);
> + if (ret)
> + return ret;
> +
> + flt = bdev->bd_filter;
> + if (!flt || strncmp(flt->ops->name, ctl.name, BLKFILTER_NAME_LENGTH)) {
> + ret = -ENOENT;
> + goto out_queue_exit;
> + }
> +
> + if (!flt->ops->ctl) {
> + ret = -ENOTTY;
> + goto out_queue_exit;
> + }
> +
> + ret = flt->ops->ctl(flt, ctl.cmd, u64_to_user_ptr(ctl.opt),
> + &ctl.optlen);
> +out_queue_exit:
> + blk_queue_exit(bdev_get_queue(bdev));
> + return ret;
> +}
> +
> +ssize_t blkfilter_show(struct block_device *bdev, char *buf)
> +{
> + ssize_t ret = 0;
> +
> + blk_mq_freeze_queue(bdev->bd_queue);
> + if (bdev->bd_filter)
> + ret = sprintf(buf, "%s\n", bdev->bd_filter->ops->name);
> + else
> + ret = sprintf(buf, "\n");
> + blk_mq_unfreeze_queue(bdev->bd_queue);
> +
> + return ret;
> +}
> +
> +/**
> + * blkfilter_register() - Register block device filter operations
> + * @ops: The operations to register.
> + *
> + * Return:
> + * 0 if succeeded,
> + * -EBUSY if a block device filter with the same name is already
> + * registered.
> + */
> +int blkfilter_register(struct blkfilter_operations *ops)
> +{
> + struct blkfilter_operations *found;
> + int ret = 0;
> +
> + spin_lock(&blkfilters_lock);
> + found = __blkfilter_find(ops->name);
> + if (found)
> + ret = -EBUSY;
> + else
> + list_add_tail(&ops->link, &blkfilters);
> + spin_unlock(&blkfilters_lock);
> +
> + return ret;
> +}
> +EXPORT_SYMBOL_GPL(blkfilter_register);
> +
> +/**
> + * blkfilter_unregister() - Unregister block device filter operations
> + * @ops: The operations to unregister.
> + *
> + * Important: before unloading, it is necessary to detach the filter from all
> + * block devices.
> + *
> + */
> +void blkfilter_unregister(struct blkfilter_operations *ops)
> +{
> + spin_lock(&blkfilters_lock);
> + list_del(&ops->link);
> + spin_unlock(&blkfilters_lock);
> +}
> +EXPORT_SYMBOL_GPL(blkfilter_unregister);
> diff --git a/block/blk.h b/block/blk.h
> index 9582fcd0df41..2c14fa938d8c 100644
> --- a/block/blk.h
> +++ b/block/blk.h
> @@ -7,6 +7,8 @@
> #include <xen/xen.h>
> #include "blk-crypto-internal.h"
>
> +struct blkfilter_ctl;
> +struct blkfilter_name;
> struct elevator_type;
>
> /* Max future timer expiry for timeouts */
> @@ -454,6 +456,15 @@ long compat_blkdev_ioctl(struct file *file, unsigned cmd, unsigned long arg);
>
> extern const struct address_space_operations def_blk_aops;
>
> +int blkfilter_ioctl_attach(struct block_device *bdev,
> + struct blkfilter_name __user *argp);
> +int blkfilter_ioctl_detach(struct block_device *bdev,
> + struct blkfilter_name __user *argp);
> +int blkfilter_ioctl_ctl(struct block_device *bdev,
> + struct blkfilter_ctl __user *argp);
> +void blkfilter_detach(struct block_device *bdev);
> +ssize_t blkfilter_show(struct block_device *bdev, char *buf);
> +
> int disk_register_independent_access_ranges(struct gendisk *disk);
> void disk_unregister_independent_access_ranges(struct gendisk *disk);
>
> diff --git a/block/genhd.c b/block/genhd.c
> index 4e5fd6aaa883..d9aca0797886 100644
> --- a/block/genhd.c
> +++ b/block/genhd.c
> @@ -25,6 +25,7 @@
> #include <linux/pm_runtime.h>
> #include <linux/badblocks.h>
> #include <linux/part_stat.h>
> +#include <linux/blk-filter.h>
> #include "blk-throttle.h"
>
> #include "blk.h"
> @@ -648,6 +649,7 @@ void del_gendisk(struct gendisk *disk)
> remove_inode_hash(part->bd_inode);
> fsync_bdev(part);
> __invalidate_device(part, true);
> + blkfilter_detach(part);
> }
> mutex_unlock(&disk->open_mutex);
>
> @@ -1033,6 +1035,12 @@ static ssize_t diskseq_show(struct device *dev,
> return sprintf(buf, "%llu\n", disk->diskseq);
> }
>
> +static ssize_t disk_filter_show(struct device *dev,
> + struct device_attribute *attr, char *buf)
> +{
> + return blkfilter_show(dev_to_bdev(dev), buf);
> +}
> +
> static DEVICE_ATTR(range, 0444, disk_range_show, NULL);
> static DEVICE_ATTR(ext_range, 0444, disk_ext_range_show, NULL);
> static DEVICE_ATTR(removable, 0444, disk_removable_show, NULL);
> @@ -1046,6 +1054,7 @@ static DEVICE_ATTR(stat, 0444, part_stat_show, NULL);
> static DEVICE_ATTR(inflight, 0444, part_inflight_show, NULL);
> static DEVICE_ATTR(badblocks, 0644, disk_badblocks_show, disk_badblocks_store);
> static DEVICE_ATTR(diskseq, 0444, diskseq_show, NULL);
> +static DEVICE_ATTR(filter, 0444, disk_filter_show, NULL);
>
> #ifdef CONFIG_FAIL_MAKE_REQUEST
> ssize_t part_fail_show(struct device *dev,
> @@ -1092,6 +1101,7 @@ static struct attribute *disk_attrs[] = {
> &dev_attr_events_async.attr,
> &dev_attr_events_poll_msecs.attr,
> &dev_attr_diskseq.attr,
> + &dev_attr_filter.attr,
> #ifdef CONFIG_FAIL_MAKE_REQUEST
> &dev_attr_fail.attr,
> #endif
> diff --git a/block/ioctl.c b/block/ioctl.c
> index c7d7d4345edb..170020d1ce0e 100644
> --- a/block/ioctl.c
> +++ b/block/ioctl.c
> @@ -2,6 +2,7 @@
> #include <linux/capability.h>
> #include <linux/compat.h>
> #include <linux/blkdev.h>
> +#include <linux/blk-filter.h>
> #include <linux/export.h>
> #include <linux/gfp.h>
> #include <linux/blkpg.h>
> @@ -546,6 +547,12 @@ static int blkdev_common_ioctl(struct block_device *bdev, fmode_t mode,
> return blkdev_pr_preempt(bdev, argp, true);
> case IOC_PR_CLEAR:
> return blkdev_pr_clear(bdev, argp);
> + case BLKFILTER_ATTACH:
> + return blkfilter_ioctl_attach(bdev, argp);
> + case BLKFILTER_DETACH:
> + return blkfilter_ioctl_detach(bdev, argp);
> + case BLKFILTER_CTL:
> + return blkfilter_ioctl_ctl(bdev, argp);
> default:
> return -ENOIOCTLCMD;
> }
> diff --git a/block/partitions/core.c b/block/partitions/core.c
> index 87a21942d606..8e2834566c38 100644
> --- a/block/partitions/core.c
> +++ b/block/partitions/core.c
> @@ -10,6 +10,7 @@
> #include <linux/ctype.h>
> #include <linux/vmalloc.h>
> #include <linux/raid/detect.h>
> +#include <linux/blk-filter.h>
> #include "check.h"
>
> static int (*const check_part[])(struct parsed_partitions *) = {
> @@ -200,6 +201,12 @@ static ssize_t part_discard_alignment_show(struct device *dev,
> return sprintf(buf, "%u\n", bdev_discard_alignment(dev_to_bdev(dev)));
> }
>
> +static ssize_t part_filter_show(struct device *dev,
> + struct device_attribute *attr, char *buf)
> +{
> + return blkfilter_show(dev_to_bdev(dev), buf);
> +}
> +
> static DEVICE_ATTR(partition, 0444, part_partition_show, NULL);
> static DEVICE_ATTR(start, 0444, part_start_show, NULL);
> static DEVICE_ATTR(size, 0444, part_size_show, NULL);
> @@ -208,6 +215,7 @@ static DEVICE_ATTR(alignment_offset, 0444, part_alignment_offset_show, NULL);
> static DEVICE_ATTR(discard_alignment, 0444, part_discard_alignment_show, NULL);
> static DEVICE_ATTR(stat, 0444, part_stat_show, NULL);
> static DEVICE_ATTR(inflight, 0444, part_inflight_show, NULL);
> +static DEVICE_ATTR(filter, 0444, part_filter_show, NULL);
> #ifdef CONFIG_FAIL_MAKE_REQUEST
> static struct device_attribute dev_attr_fail =
> __ATTR(make-it-fail, 0644, part_fail_show, part_fail_store);
> @@ -222,6 +230,7 @@ static struct attribute *part_attrs[] = {
> &dev_attr_discard_alignment.attr,
> &dev_attr_stat.attr,
> &dev_attr_inflight.attr,
> + &dev_attr_filter.attr,
> #ifdef CONFIG_FAIL_MAKE_REQUEST
> &dev_attr_fail.attr,
> #endif
> @@ -284,6 +293,7 @@ static void delete_partition(struct block_device *part)
>
> fsync_bdev(part);
> __invalidate_device(part, true);
> + blkfilter_detach(part);
bdev_disk_changed() is not handled, where delete_partition() and
add_partition() will be called; this means the blkfilter for a partition
will be removed after a partition rescan. Am I missing something?
Thanks,
Kuai
>
> drop_partition(part);
> }
> diff --git a/include/linux/blk-filter.h b/include/linux/blk-filter.h
> new file mode 100644
> index 000000000000..0afdb40f3bab
> --- /dev/null
> +++ b/include/linux/blk-filter.h
> @@ -0,0 +1,51 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/* Copyright (C) 2023 Veeam Software Group GmbH */
> +#ifndef _LINUX_BLK_FILTER_H
> +#define _LINUX_BLK_FILTER_H
> +
> +#include <uapi/linux/blk-filter.h>
> +
> +struct bio;
> +struct block_device;
> +struct blkfilter_operations;
> +
> +/**
> + * struct blkfilter - Block device filter.
> + *
> + * @ops: Block device filter operations.
> + *
> + * For each filtered block device, the filter creates a data structure
> + * associated with this device. The data in this structure is specific to the
> + * filter, but it must contain a pointer to the block device filter operations.
> + */
> +struct blkfilter {
> + const struct blkfilter_operations *ops;
> +};
> +
> +/**
> + * struct blkfilter_operations - Block device filter operations.
> + *
> + * @link: Entry in the global list of filter drivers
> + * (must not be accessed by the driver).
> + * @owner: Module implementing the filter driver.
> + * @name: Name of the filter driver.
> + * @attach: Attach the filter driver to the block device.
> + * @detach: Detach the filter driver from the block device.
> + * @ctl: Send a control command to the filter driver.
> + * @submit_bio: Handle bio submissions to the filter driver.
> + */
> +struct blkfilter_operations {
> + struct list_head link;
> + struct module *owner;
> + const char *name;
> + struct blkfilter *(*attach)(struct block_device *bdev);
> + void (*detach)(struct blkfilter *flt);
> + int (*ctl)(struct blkfilter *flt, const unsigned int cmd,
> + __u8 __user *buf, __u32 *plen);
> + bool (*submit_bio)(struct bio *bio);
> +};
> +
> +int blkfilter_register(struct blkfilter_operations *ops);
> +void blkfilter_unregister(struct blkfilter_operations *ops);
> +
> +#endif /* _LINUX_BLK_FILTER_H */
> diff --git a/include/linux/blk_types.h b/include/linux/blk_types.h
> index deb69eeab6bd..5ba313b3b11d 100644
> --- a/include/linux/blk_types.h
> +++ b/include/linux/blk_types.h
> @@ -75,6 +75,7 @@ struct block_device {
> * path
> */
> struct device bd_device;
> + struct blkfilter *bd_filter;
> } __randomize_layout;
>
> #define bdev_whole(_bdev) \
> @@ -341,6 +342,7 @@ enum {
> BIO_QOS_MERGED, /* but went through rq_qos merge path */
> BIO_REMAPPED,
> BIO_ZONE_WRITE_LOCKED, /* Owns a zoned device zone write lock */
> + BIO_FILTERED, /* bio has already been filtered */
> BIO_FLAG_LAST
> };
>
> diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
> index f4c339d9dd03..e7a4a4866792 100644
> --- a/include/linux/blkdev.h
> +++ b/include/linux/blkdev.h
> @@ -842,6 +842,7 @@ void blk_request_module(dev_t devt);
>
> extern int blk_register_queue(struct gendisk *disk);
> extern void blk_unregister_queue(struct gendisk *disk);
> +void submit_bio_noacct_nocheck(struct bio *bio);
> void submit_bio_noacct(struct bio *bio);
> struct bio *bio_split_to_limits(struct bio *bio);
>
> diff --git a/include/uapi/linux/blk-filter.h b/include/uapi/linux/blk-filter.h
> new file mode 100644
> index 000000000000..18885dc1b717
> --- /dev/null
> +++ b/include/uapi/linux/blk-filter.h
> @@ -0,0 +1,35 @@
> +/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
> +/* Copyright (C) 2023 Veeam Software Group GmbH */
> +#ifndef _UAPI_LINUX_BLK_FILTER_H
> +#define _UAPI_LINUX_BLK_FILTER_H
> +
> +#include <linux/types.h>
> +
> +#define BLKFILTER_NAME_LENGTH 32
> +
> +/**
> + * struct blkfilter_name - parameter for BLKFILTER_ATTACH and BLKFILTER_DETACH
> + * ioctl.
> + *
> + * @name: Name of block device filter.
> + */
> +struct blkfilter_name {
> + __u8 name[BLKFILTER_NAME_LENGTH];
> +};
> +
> +/**
> + * struct blkfilter_ctl - parameter for BLKFILTER_CTL ioctl
> + *
> + * @name: Name of block device filter.
> + * @cmd: The filter-specific operation code of the command.
> + * @optlen: Size of data at @opt.
> + * @opt: Userspace buffer with options.
> + */
> +struct blkfilter_ctl {
> + __u8 name[BLKFILTER_NAME_LENGTH];
> + __u32 cmd;
> + __u32 optlen;
> + __u64 opt;
> +};
> +
> +#endif /* _UAPI_LINUX_BLK_FILTER_H */
> diff --git a/include/uapi/linux/fs.h b/include/uapi/linux/fs.h
> index b7b56871029c..7904f157b245 100644
> --- a/include/uapi/linux/fs.h
> +++ b/include/uapi/linux/fs.h
> @@ -189,6 +189,9 @@ struct fsxattr {
> * A jump here: 130-136 are reserved for zoned block devices
> * (see uapi/linux/blkzoned.h)
> */
> +#define BLKFILTER_ATTACH _IOWR(0x12, 140, struct blkfilter_name)
> +#define BLKFILTER_DETACH _IOWR(0x12, 141, struct blkfilter_name)
> +#define BLKFILTER_CTL _IOWR(0x12, 142, struct blkfilter_ctl)
>
> #define BMAP_IOCTL 1 /* obsolete - kept for compatibility */
> #define FIBMAP _IO(0x00,1) /* bmap access */
>
Hi,
On 2023/07/11 10:02, Yu Kuai wrote:
>> +static bool submit_bio_filter(struct bio *bio)
>> +{
>> + if (bio_flagged(bio, BIO_FILTERED))
>> + return false;
>> +
>> + bio_set_flag(bio, BIO_FILTERED);
>> + return bio->bi_bdev->bd_filter->ops->submit_bio(bio);
>> +}
>> +
>> static void __submit_bio(struct bio *bio)
>> {
>> + /*
>> + * If there is a filter driver attached, check if the BIO needs
>> to go to
>> + * the filter driver first, which can then pass on the bio or
>> consume it.
>> + */
>> + if (bio->bi_bdev->bd_filter && submit_bio_filter(bio))
>> + return;
>> +
>> if (unlikely(!blk_crypto_bio_prep(&bio)))
>> return;
...
>> +static void __blkfilter_detach(struct block_device *bdev)
>> +{
>> + struct blkfilter *flt = bdev->bd_filter;
>> + const struct blkfilter_operations *ops = flt->ops;
>> +
>> + bdev->bd_filter = NULL;
>> + ops->detach(flt);
>> + module_put(ops->owner);
>> +}
>> +
>> +void blkfilter_detach(struct block_device *bdev)
>> +{
>> + if (bdev->bd_filter) {
>> + blk_mq_freeze_queue(bdev->bd_queue);
And this is not safe either: for a bio-based device, q_usage_counter is
not grabbed while submit_bio_filter() is called, hence there is a risk
of use-after-free from submit_bio_filter().
Thanks,
Kuai
Hi,
On 2023/07/12 18:04, Yu Kuai wrote:
> Hi,
>
> On 2023/07/11 10:02, Yu Kuai wrote:
>
>>> +static bool submit_bio_filter(struct bio *bio)
>>> +{
>>> + if (bio_flagged(bio, BIO_FILTERED))
>>> + return false;
>>> +
>>> + bio_set_flag(bio, BIO_FILTERED);
>>> + return bio->bi_bdev->bd_filter->ops->submit_bio(bio);
>>> +}
>>> +
>>> static void __submit_bio(struct bio *bio)
>>> {
>>> + /*
>>> + * If there is a filter driver attached, check if the BIO needs
>>> to go to
>>> + * the filter driver first, which can then pass on the bio or
>>> consume it.
>>> + */
>>> + if (bio->bi_bdev->bd_filter && submit_bio_filter(bio))
>>> + return;
>>> +
>>> if (unlikely(!blk_crypto_bio_prep(&bio)))
>>> return;
>
> ...
>
>>> +static void __blkfilter_detach(struct block_device *bdev)
>>> +{
>>> + struct blkfilter *flt = bdev->bd_filter;
>>> + const struct blkfilter_operations *ops = flt->ops;
>>> +
>>> + bdev->bd_filter = NULL;
>>> + ops->detach(flt);
>>> + module_put(ops->owner);
>>> +}
>>> +
>>> +void blkfilter_detach(struct block_device *bdev)
>>> +{
>>> + if (bdev->bd_filter) {
>>> + blk_mq_freeze_queue(bdev->bd_queue);
>
> And this is not safe as well, for bio-based device, q_usage_counter is
> not grabbed while submit_bio_filter() is called, hence there is a risk
> of uaf from submit_bio_filter().
And there is another question: can blkfilter_detach() be called
concurrently from del_gendisk()/delete_partition() and from the ioctl
path? I think it's a problem.
Thanks,
Kuai
>
> Thanks,
> Kuai
>
> .
>
On 7/11/23 04:02, Yu Kuai wrote:
> bdev_disk_changed() is not handled, where delete_partition() and
> add_partition() will be called, this means the blkfilter for a partition will
> be removed after a partition rescan. Am I missing something?
Yes, when bdev_disk_changed() is called, all of the disk's block devices
are deleted and new ones are re-created. Therefore, the information
about the attached filters is lost. This is equivalent to
removing the disk and adding it back.
For the blksnap module, a partition rescan means the loss of the
change tracker's data. If a snapshot was created, such
a partition rescan will cause the snapshot to be corrupted.
There was an idea to do filtering at the disk level,
but I abandoned it.
On 7/12/23 12:04, Yu Kuai wrote:
> Hi,
>
> On 2023/07/11 10:02, Yu Kuai wrote:
>
>>> +static bool submit_bio_filter(struct bio *bio)
>>> +{
>>> + if (bio_flagged(bio, BIO_FILTERED))
>>> + return false;
>>> +
>>> + bio_set_flag(bio, BIO_FILTERED);
>>> + return bio->bi_bdev->bd_filter->ops->submit_bio(bio);
>>> +}
>>> +
>>> static void __submit_bio(struct bio *bio)
>>> {
>>> + /*
>>> + * If there is a filter driver attached, check if the BIO needs to go to
>>> + * the filter driver first, which can then pass on the bio or consume it.
>>> + */
>>> + if (bio->bi_bdev->bd_filter && submit_bio_filter(bio))
>>> + return;
>>> +
>>> if (unlikely(!blk_crypto_bio_prep(&bio)))
>>> return;
>
> ...
>
>>> +static void __blkfilter_detach(struct block_device *bdev)
>>> +{
>>> + struct blkfilter *flt = bdev->bd_filter;
>>> + const struct blkfilter_operations *ops = flt->ops;
>>> +
>>> + bdev->bd_filter = NULL;
>>> + ops->detach(flt);
>>> + module_put(ops->owner);
>>> +}
>>> +
>>> +void blkfilter_detach(struct block_device *bdev)
>>> +{
>>> + if (bdev->bd_filter) {
>>> + blk_mq_freeze_queue(bdev->bd_queue);
>
> And this is not safe as well, for bio-based device, q_usage_counter is
> not grabbed while submit_bio_filter() is called, hence there is a risk
> of uaf from submit_bio_filter().
>
> Thanks,
> Kuai
>
Hi Kuai.
Indeed, the filter call is performed before bio_queue_enter() is called.
I must admit, you are very attentive.
I did not keep track of bio_queue_enter() moving deeper down the
call stack.
I think I need to add a debug-build check for q_usage_counter,
so that I don't miss this in the future.
Hi.
On 7/12/23 14:34, Yu Kuai wrote:
> Hi,
>
> On 2023/07/12 18:04, Yu Kuai wrote:
>> Hi,
>>
>> On 2023/07/11 10:02, Yu Kuai wrote:
>>
>>>> +static bool submit_bio_filter(struct bio *bio)
>>>> +{
>>>> + if (bio_flagged(bio, BIO_FILTERED))
>>>> + return false;
>>>> +
>>>> + bio_set_flag(bio, BIO_FILTERED);
>>>> + return bio->bi_bdev->bd_filter->ops->submit_bio(bio);
>>>> +}
>>>> +
>>>> static void __submit_bio(struct bio *bio)
>>>> {
>>>> + /*
>>>> + * If there is a filter driver attached, check if the BIO needs to go to
>>>> + * the filter driver first, which can then pass on the bio or consume it.
>>>> + */
>>>> + if (bio->bi_bdev->bd_filter && submit_bio_filter(bio))
>>>> + return;
>>>> +
>>>> if (unlikely(!blk_crypto_bio_prep(&bio)))
>>>> return;
>>
>> ...
>>
>>>> +static void __blkfilter_detach(struct block_device *bdev)
>>>> +{
>>>> + struct blkfilter *flt = bdev->bd_filter;
>>>> + const struct blkfilter_operations *ops = flt->ops;
>>>> +
>>>> + bdev->bd_filter = NULL;
>>>> + ops->detach(flt);
>>>> + module_put(ops->owner);
>>>> +}
>>>> +
>>>> +void blkfilter_detach(struct block_device *bdev)
>>>> +{
>>>> + if (bdev->bd_filter) {
>>>> + blk_mq_freeze_queue(bdev->bd_queue);
>>
>> And this is not safe as well, for bio-based device, q_usage_counter is
>> not grabbed while submit_bio_filter() is called, hence there is a risk
>> of uaf from submit_bio_filter().
>
> And there is another question, can blkfilter_detach() from
> del_gendisk/delete_partition and ioctl concurrent? I think it's a
> problem.
>
Yes, it looks like a problem is possible if two threads execute the
blkfilter_detach() function concurrently; blk_mq_freeze_queue() does
not serialize them.
But for this to happen, the ioctl for the block device and the
device's removal would have to run simultaneously. Is that possible?
I suppose that taking the bdev->bd_disk->open_mutex mutex in
blkfilter_ioctl_attach(), blkfilter_ioctl_detach() and
blkfilter_ioctl_ctl() can fix the problem. What do you think?
> Thanks,
> Kuai
>>
>> Thanks,
>> Kuai
>>
>> .
>>
>
On 2023-06-12 15:52:21+0200, Sergei Shtepa wrote:
> [..]
> diff --git a/include/uapi/linux/blksnap.h b/include/uapi/linux/blksnap.h
> new file mode 100644
> index 000000000000..2fe3f2a43bc5
> --- /dev/null
> +++ b/include/uapi/linux/blksnap.h
> @@ -0,0 +1,421 @@
> [..]
> +/**
> + * struct blksnap_snapshotinfo - Result for the command
> + * &blkfilter_ctl_blksnap.blkfilter_ctl_blksnap_snapshotinfo.
> + *
> + * @error_code:
> + * Zero if there were no errors while holding the snapshot.
> + * The error code -ENOSPC means that while holding the snapshot, a snapshot
> + * overflow situation has occurred. Other error codes mean other reasons
> + * for failure.
> + * The error code is reset when the device is added to a new snapshot.
> + * @image:
> + * If the snapshot was taken, it stores the block device name of the
> + * image, or empty string otherwise.
> + */
> +struct blksnap_snapshotinfo {
> + __s32 error_code;
> + __u8 image[IMAGE_DISK_NAME_LEN];
Nitpick:
Seems a bit weird to have a signed error code that is always negative.
Couldn't this be an unsigned number or directly return the error from
the ioctl() itself?
> +};
> +
> +/**
> + * DOC: Interface for managing snapshots
> + *
> + * Control commands that are transmitted through the blksnap module interface.
> + */
> +enum blksnap_ioctl {
> + blksnap_ioctl_version,
> + blksnap_ioctl_snapshot_create,
> + blksnap_ioctl_snapshot_destroy,
> + blksnap_ioctl_snapshot_append_storage,
> + blksnap_ioctl_snapshot_take,
> + blksnap_ioctl_snapshot_collect,
> + blksnap_ioctl_snapshot_wait_event,
> +};
> +
> +/**
> + * struct blksnap_version - Module version.
> + *
> + * @major:
> + * Version major part.
> + * @minor:
> + * Version minor part.
> + * @revision:
> + * Revision number.
> + * @build:
> + * Build number. Should be zero.
> + */
> +struct blksnap_version {
> + __u16 major;
> + __u16 minor;
> + __u16 revision;
> + __u16 build;
> +};
> +
> +/**
> + * define IOCTL_BLKSNAP_VERSION - Get module version.
> + *
> + * The version may increase when the API changes. But linking the user space
> + * behavior to the version code does not seem to be a good idea.
> + * To ensure backward compatibility, API changes should be made by adding new
> + * ioctl without changing the behavior of existing ones. The version should be
> + * used for logs.
> + *
> + * Return: 0 if succeeded, negative errno otherwise.
> + */
> +#define IOCTL_BLKSNAP_VERSION \
> + _IOW(BLKSNAP, blksnap_ioctl_version, struct blksnap_version)
Shouldn't this be _IOR()?
"_IOW means userland is writing and kernel is reading. _IOR
means userland is reading and kernel is writing."
The other ioctl definitions seem to need a review, too.
Hi,
On 2023/07/17 22:39, Sergei Shtepa wrote:
>
>
> On 7/11/23 04:02, Yu Kuai wrote:
>> bdev_disk_changed() is not handled, where delete_partition() and
>> add_partition() will be called, this means blkfilter for partition will
>> be removed after partition rescan. Am I missing something?
>
> Yes, when the bdev_disk_changed() is called, all disk block devices
> are deleted and new ones are re-created. Therefore, the information
> about the attached filters will be lost. This is equivalent to
> removing the disk and adding it back.
>
> For the blksnap module, partition rescan will mean the loss of the
> change trackers data. If a snapshot was created, then such
> a partition rescan will cause the snapshot to be corrupted.
>
I haven't reviewed the blksnap code yet, but this sounds like a problem.
Possible solutions I have in mind:
1. Store blkfilter for each partition from bdev_disk_changed() before
delete_partition(), and add blkfilter back after add_partition().
2. Store the blkfilter in the gendisk as an xarray, and protect it by
'open_mutex' like 'part_tbl'; the block_device can keep a pointer to
the blkfilter so that fast-path performance is OK, and the
lifetime of the blkfilter can be managed separately.
> There was an idea to do filtering at the disk level,
> but I abandoned it.
> .
>
I think it's better to do filtering at the partition level as well.
Thanks,
Kuai
Hi,
On 2023/07/18 1:39, Sergei Shtepa wrote:
> Hi.
>
> On 7/12/23 14:34, Yu Kuai wrote:
>> Hi,
>>
>> On 2023/07/12 18:04, Yu Kuai wrote:
>>> Hi,
>>>
>>> On 2023/07/11 10:02, Yu Kuai wrote:
>>>
>>>>> +static bool submit_bio_filter(struct bio *bio)
>>>>> +{
>>>>> + if (bio_flagged(bio, BIO_FILTERED))
>>>>> + return false;
>>>>> +
>>>>> + bio_set_flag(bio, BIO_FILTERED);
>>>>> + return bio->bi_bdev->bd_filter->ops->submit_bio(bio);
>>>>> +}
>>>>> +
>>>>> static void __submit_bio(struct bio *bio)
>>>>> {
>>>>> + /*
>>>>> + * If there is a filter driver attached, check if the BIO needs to go to
>>>>> + * the filter driver first, which can then pass on the bio or consume it.
>>>>> + */
>>>>> + if (bio->bi_bdev->bd_filter && submit_bio_filter(bio))
>>>>> + return;
>>>>> +
>>>>> if (unlikely(!blk_crypto_bio_prep(&bio)))
>>>>> return;
>>>
>>> ...
>>>
>>>>> +static void __blkfilter_detach(struct block_device *bdev)
>>>>> +{
>>>>> + struct blkfilter *flt = bdev->bd_filter;
>>>>> + const struct blkfilter_operations *ops = flt->ops;
>>>>> +
>>>>> + bdev->bd_filter = NULL;
>>>>> + ops->detach(flt);
>>>>> + module_put(ops->owner);
>>>>> +}
>>>>> +
>>>>> +void blkfilter_detach(struct block_device *bdev)
>>>>> +{
>>>>> + if (bdev->bd_filter) {
>>>>> + blk_mq_freeze_queue(bdev->bd_queue);
>>>
>>> And this is not safe as well, for bio-based device, q_usage_counter is
>>> not grabbed while submit_bio_filter() is called, hence there is a risk
>>> of uaf from submit_bio_filter().
>>
>> And there is another question, can blkfilter_detach() from
>> del_gendisk/delete_partition and ioctl concurrent? I think it's a
>> problem.
>>
>
> Yes, it looks like if two threads execute the blkfilter_detach() function,
> then a problem is possible. The blk_mq_freeze_queue() function does not
> block threads.
> But for this, it is necessary that the IOCTL for the block device and
> its removal are performed simultaneously. Is this possible?
I think it's possible: the ioctl only requires the disk/partition to be
open, and that won't block disk removal:

t1:                          t2:
open dev
ioctl                        remove dev
                               del_gendisk
  blkfilter_detach               blkfilter_detach
>
> I suppose that using mutex bdev->bd_disk->open_mutex in
> blkfilter_ioctl_attach(), blkfilter_ioctl_detach() and
> blkfilter_ioctl_ctl() can fix the problem. What do you think?
I think it's ok, and blkfilter ioctl must check disk_live() while
holding the lock.
Thanks,
Kuai
>
>
>> Thanks,
>> Kuai
>>>
>>> Thanks,
>>> Kuai
>>>
>>> .
>>>
>>
> .
>
Hi!
Thanks for the review.
On 7/17/23 20:57, Thomas Weißschuh wrote:
> On 2023-06-12 15:52:21+0200, Sergei Shtepa wrote:
>
>> [..]
>> diff --git a/include/uapi/linux/blksnap.h b/include/uapi/linux/blksnap.h
>> new file mode 100644
>> index 000000000000..2fe3f2a43bc5
>> --- /dev/null
>> +++ b/include/uapi/linux/blksnap.h
>> @@ -0,0 +1,421 @@
>> [..]
>> +/**
>> + * struct blksnap_snapshotinfo - Result for the command
>> + * &blkfilter_ctl_blksnap.blkfilter_ctl_blksnap_snapshotinfo.
>> + *
>> + * @error_code:
>> + * Zero if there were no errors while holding the snapshot.
>> + * The error code -ENOSPC means that while holding the snapshot, a snapshot
>> + * overflow situation has occurred. Other error codes mean other reasons
>> + * for failure.
>> + * The error code is reset when the device is added to a new snapshot.
>> + * @image:
>> + * If the snapshot was taken, it stores the block device name of the
>> + * image, or empty string otherwise.
>> + */
>> +struct blksnap_snapshotinfo {
>> + __s32 error_code;
>> + __u8 image[IMAGE_DISK_NAME_LEN];
> Nitpick:
>
> Seems a bit weird to have a signed error code that is always negative.
> Couldn't this be an unsigned number or directly return the error from
> the ioctl() itself?
Yes, it's a good idea to pass the error code as an unsigned value.
This positive value can then be returned on successful execution of
ioctl(), but I would not like to mix two different kinds of errors in
one value.
>
>> +};
>> +
>> +/**
>> + * DOC: Interface for managing snapshots
>> + *
>> + * Control commands that are transmitted through the blksnap module interface.
>> + */
>> +enum blksnap_ioctl {
>> + blksnap_ioctl_version,
>> + blksnap_ioctl_snapshot_create,
>> + blksnap_ioctl_snapshot_destroy,
>> + blksnap_ioctl_snapshot_append_storage,
>> + blksnap_ioctl_snapshot_take,
>> + blksnap_ioctl_snapshot_collect,
>> + blksnap_ioctl_snapshot_wait_event,
>> +};
>> +
>> +/**
>> + * struct blksnap_version - Module version.
>> + *
>> + * @major:
>> + * Version major part.
>> + * @minor:
>> + * Version minor part.
>> + * @revision:
>> + * Revision number.
>> + * @build:
>> + * Build number. Should be zero.
>> + */
>> +struct blksnap_version {
>> + __u16 major;
>> + __u16 minor;
>> + __u16 revision;
>> + __u16 build;
>> +};
>> +
>> +/**
>> + * define IOCTL_BLKSNAP_VERSION - Get module version.
>> + *
>> + * The version may increase when the API changes. But linking the user space
>> + * behavior to the version code does not seem to be a good idea.
>> + * To ensure backward compatibility, API changes should be made by adding new
>> + * ioctl without changing the behavior of existing ones. The version should be
>> + * used for logs.
>> + *
>> + * Return: 0 if succeeded, negative errno otherwise.
>> + */
>> +#define IOCTL_BLKSNAP_VERSION \
>> + _IOW(BLKSNAP, blksnap_ioctl_version, struct blksnap_version)
> Shouldn't this be _IOR()?
>
> "_IOW means userland is writing and kernel is reading. _IOR
> means userland is reading and kernel is writing."
>
> The other ioctl definitions seem to need a review, too.
>
Yeah. I need to swap _IOR and _IOW in all the ioctl definitions.
Thanks!
Hi.
On 7/18/23 03:37, Yu Kuai wrote:
> Hi,
>
> On 2023/07/17 22:39, Sergei Shtepa wrote:
>>
>>
>> On 7/11/23 04:02, Yu Kuai wrote:
>>> bdev_disk_changed() is not handled, where delete_partition() and
>>> add_partition() will be called, this means blkfilter for partition will
>>> be removed after partition rescan. Am I missing something?
>>
>> Yes, when the bdev_disk_changed() is called, all disk block devices
>> are deleted and new ones are re-created. Therefore, the information
>> about the attached filters will be lost. This is equivalent to
>> removing the disk and adding it back.
>>
>> For the blksnap module, partition rescan will mean the loss of the
>> change trackers data. If a snapshot was created, then such
>> a partition rescan will cause the snapshot to be corrupted.
>>
>
> I haven't reviewed the blksnap code yet, but this sounds like a problem.
I can't imagine a case where this could be a problem.
Partition rescan is possible only if the file system has not been
mounted on any of the disk partitions. Ioctl BLKRRPART will return
-EBUSY. Therefore, during normal operation of the system, rescan is
not performed.
And if the file systems have not been mounted, it is possible that
the disk partition structure has changed or the disk in the media
device has changed. In this case, it is better to detach the
filter, otherwise it may lead to incorrect operation of the module.
We can add prechange/postchange callback functions so that the
filter can track the rescan process. But at the moment, this is not
necessary for the blksnap module.
Therefore, I will refrain from making changes for now.
>
> possible solutions I have in mind:
>
> 1. Store blkfilter for each partition from bdev_disk_changed() before
> delete_partition(), and add blkfilter back after add_partition().
>
> 2. Store blkfilter from gendisk as a xarray, and protect it by
> 'open_mutex' like 'part_tbl', block_device can keep the pointer to
> reference blkfilter so that performance from fast path is ok, and the
> lifetime of blkfilter can be managed separately.
>
>> There was an idea to do filtering at the disk level,
>> but I abandoned it.
>> .
>>
> I think it's better to do filtering at the partition level as well.
>
> Thanks,
> Kuai
>
Hi,
On 2023/07/18 19:25, Sergei Shtepa wrote:
> Hi.
>
> On 7/18/23 03:37, Yu Kuai wrote:
>> Hi,
>>
>> On 2023/07/17 22:39, Sergei Shtepa wrote:
>>>
>>>
>>> On 7/11/23 04:02, Yu Kuai wrote:
>>>> bdev_disk_changed() is not handled, where delete_partition() and
>>>> add_partition() will be called, this means blkfilter for partition will
>>>> be removed after partition rescan. Am I missing something?
>>>
>>> Yes, when the bdev_disk_changed() is called, all disk block devices
>>> are deleted and new ones are re-created. Therefore, the information
>>> about the attached filters will be lost. This is equivalent to
>>> removing the disk and adding it back.
>>>
>>> For the blksnap module, partition rescan will mean the loss of the
>>> change trackers data. If a snapshot was created, then such
>>> a partition rescan will cause the snapshot to be corrupted.
>>>
>>
>> I haven't reviewed the blksnap code yet, but this sounds like a problem.
>
> I can't imagine a case where this could be a problem.
> Partition rescan is possible only if the file system has not been
> mounted on any of the disk partitions. Ioctl BLKRRPART will return
> -EBUSY. Therefore, during normal operation of the system, rescan is
> not performed.
> And if the file systems have not been mounted, it is possible that
> the disk partition structure has changed or the disk in the media
> device has changed. In this case, it is better to detach the
> filter, otherwise it may lead to incorrect operation of the module.
>
> We can add prechange/postchange callback functions so that the
> filter can track rescan process. But at the moment, this is not
> necessary for the blksnap module.
So you mean that the blkfilter is only used for the case that the partition
is mounted? (Or do you mean that the partition is opened?)
Then, I think you mean that the filter should only be used for a partition
that is opened? Otherwise, the filter can be gone at any time, since
a partition rescan can happen at any time:
// user
1. attach filter
// another context rescans the partition
2. mount fs
// the user will find the filter is gone
Thanks,
Kuai
>
> Therefore, I will refrain from making changes for now.
>
>>
>> possible solutions I have in mind:
>>
>> 1. Store blkfilter for each partition from bdev_disk_changed() before
>> delete_partition(), and add blkfilter back after add_partition().
>>
>> 2. Store blkfilter from gendisk as a xarray, and protect it by
>> 'open_mutex' like 'part_tbl', block_device can keep the pointer to
>> reference blkfilter so that performance from fast path is ok, and the
>> lifetime of blkfilter can be managed separately.
>>
>>> There was an idea to do filtering at the disk level,
>>> but I abandoned it.
>>> .
>>>
>> I think it's better to do filtering at the partition level as well.
>>
>> Thanks,
>> Kuai
>>
> .
>
On 7/18/23 14:32, Yu Kuai wrote:
> Hi,
>
> On 2023/07/18 19:25, Sergei Shtepa wrote:
>> Hi.
>>
>> On 7/18/23 03:37, Yu Kuai wrote:
>>> Hi,
>>>
>>> On 2023/07/17 22:39, Sergei Shtepa wrote:
>>>>
>>>>
>>>> On 7/11/23 04:02, Yu Kuai wrote:
>>>>> bdev_disk_changed() is not handled, where delete_partition() and
>>>>> add_partition() will be called, this means blkfilter for partition will
>>>>> be removed after partition rescan. Am I missing something?
>>>>
>>>> Yes, when the bdev_disk_changed() is called, all disk block devices
>>>> are deleted and new ones are re-created. Therefore, the information
>>>> about the attached filters will be lost. This is equivalent to
>>>> removing the disk and adding it back.
>>>>
>>>> For the blksnap module, partition rescan will mean the loss of the
>>>> change trackers data. If a snapshot was created, then such
>>>> a partition rescan will cause the snapshot to be corrupted.
>>>>
>>>
>>> I haven't reviewed the blksnap code yet, but this sounds like a problem.
>>
>> I can't imagine a case where this could be a problem.
>> Partition rescan is possible only if the file system has not been
>> mounted on any of the disk partitions. Ioctl BLKRRPART will return
>> -EBUSY. Therefore, during normal operation of the system, rescan is
>> not performed.
>> And if the file systems have not been mounted, it is possible that
>> the disk partition structure has changed or the disk in the media
>> device has changed. In this case, it is better to detach the
>> filter, otherwise it may lead to incorrect operation of the module.
>>
>> We can add prechange/postchange callback functions so that the
>> filter can track rescan process. But at the moment, this is not
>> necessary for the blksnap module.
>
> So you mean that blkfilter is only used for the case that partition
> is mounted? (Or you mean that partition is opened)
>
> Then, I think you mean that filter should only be used for the partition
> that is opened? Otherwise, filter can be gone at any time since
> partition rescan can be gone.
>
> //user
> 1. attach filter
> // other context rescan partition
> 2. mount fs
> // user will find the filter is gone.
Mmm... The fact is that at the moment the only user of the filter is the
blksnap module; there are no other filter users yet. The blksnap module
solves the problem of creating snapshots, primarily for backup purposes.
Therefore, the main use case is to attach a filter to an already running
system, where all partitions are set up and the file systems are mounted.
If the server is being serviced and the disk is re-partitioned during
that, then detaching the filter is normal. In this case, the
change tracker will be reset, and at the next backup the filter will be
attached again.
But if I were to solve the problem of preserving the filter across a
rescan, it would be necessary to take into account the UUID and name of
the partition (struct partition_meta_info). It is unacceptable for the
filter to end up attached to the wrong partition because the partition
layout changed.
A changed() callback would also be good to add, so that the filter
receives a notification that the block device has been updated.
But I'm not sure this should be done, since code that is not used in
the kernel should not be in the kernel.
>
> Thanks,
> Kuai
>
>>
>> Therefore, I will refrain from making changes for now.
>>
>>>
>>> possible solutions I have in mind:
>>>
>>> 1. Store blkfilter for each partition from bdev_disk_changed() before
>>> delete_partition(), and add blkfilter back after add_partition().
>>>
>>> 2. Store blkfilter from gendisk as a xarray, and protect it by
>>> 'open_mutex' like 'part_tbl', block_device can keep the pointer to
>>> reference blkfilter so that performance from fast path is ok, and the
>>> lifetime of blkfilter can be managed separately.
>>>
>>>> There was an idea to do filtering at the disk level,
>>>> but I abandoned it.
>>>> .
>>>>
>>> I think it's better to do filtering at the partition level as well.
>>>
>>> Thanks,
>>> Kuai
>>>
>> .
>>
>
Hi,
On 2023/07/19 0:33, Sergei Shtepa wrote:
>
>
> On 7/18/23 14:32, Yu Kuai wrote:
>> Hi,
>>
>> On 2023/07/18 19:25, Sergei Shtepa wrote:
>>> Hi.
>>>
>>> On 7/18/23 03:37, Yu Kuai wrote:
>>>> Hi,
>>>>
>>>> On 2023/07/17 22:39, Sergei Shtepa wrote:
>>>>>
>>>>>
>>>>> On 7/11/23 04:02, Yu Kuai wrote:
>>>>>> bdev_disk_changed() is not handled, where delete_partition() and
>>>>>> add_partition() will be called, this means blkfilter for partition will
>>>>>> be removed after partition rescan. Am I missing something?
>>>>>
>>>>> Yes, when the bdev_disk_changed() is called, all disk block devices
>>>>> are deleted and new ones are re-created. Therefore, the information
>>>>> about the attached filters will be lost. This is equivalent to
>>>>> removing the disk and adding it back.
>>>>>
>>>>> For the blksnap module, a partition rescan will mean the loss of the
>>>>> change tracker's data. If a snapshot was created, then such
>>>>> a partition rescan will cause the snapshot to be corrupted.
>>>>>
>>>>
>>>> I haven't reviewed the blksnap code yet, but this sounds like a problem.
>>>
>>> I can't imagine a case where this could be a problem.
>>> A partition rescan is possible only if the file system has not been
>>> mounted on any of the disk partitions. The BLKRRPART ioctl will return
>>> -EBUSY. Therefore, during normal operation of the system, a rescan is
>>> not performed.
>>> And if the file systems have not been mounted, it is possible that
>>> the partition structure of the disk has changed or the disk in a
>>> removable-media device has been replaced. In this case, it is better
>>> to detach the filter, otherwise it may lead to incorrect operation of the module.
>>>
>>> We can add prechange/postchange callback functions so that the
>>> filter can track the rescan process. But at the moment, this is not
>>> necessary for the blksnap module.
>>
>> So you mean that blkfilter is only used for the case that the partition
>> is mounted? (Or do you mean that the partition is opened?)
>>
>> Then, I think you mean that the filter should only be used for a partition
>> that is opened? Otherwise, the filter can be gone at any time, since a
>> partition rescan can happen at any time.
>>
>> //user
>> 1. attach filter
>> // other context rescan partition
>> 2. mount fs
>> // user will find the filter is gone.
>
> Mmm... The fact is that at the moment the only user of the filter is the
> blksnap module. There are no other filter users yet. The blksnap module
> solves the problem of creating snapshots, primarily for backup purposes.
> Therefore, the main use case is to attach a filter to an already running
> system, where all partitions are laid out and file systems are mounted.
>
> If the server is being serviced and the disk is re-partitioned during
> that time, then disabling the filter is normal. In this case, the
> change tracker will be reset, and at the next backup, the filter will be
> attached again.
Thanks for the explanation, I was thinking that blksnap can replace
dm-snapshot.
Thanks,
Kuai
>
> But if I were still solving the problem of preserving the filter across a
> rescan, then it would be necessary to take into account the UUID and name
> of the partition (struct partition_meta_info). It is unacceptable for the
> filter to be attached to another partition by mistake due to a change in
> the partition structure. A changed() callback would also be good to add, so
> that the filter receives a notification that the block device has been updated.
>
> But I'm not sure that this should be done, since if some code is not used in
> the kernel, then it should not be in the kernel.
>
>>
>> Thanks,
>> Kuai
>>
>>>
>>> Therefore, I will refrain from making changes for now.
>>>
>>>>
>>>> possible solutions I have in mind:
>>>>
>>>> 1. Store the blkfilter for each partition in bdev_disk_changed() before
>>>> delete_partition(), and add the blkfilter back after add_partition().
>>>>
>>>> 2. Store blkfilters in the gendisk as an xarray, and protect it with
>>>> 'open_mutex' like 'part_tbl'; the block_device can keep a pointer to
>>>> reference the blkfilter so that fast-path performance is ok, and the
>>>> lifetime of the blkfilter can be managed separately.
>>>>
>>>>> There was an idea to do filtering at the disk level,
>>>>> but I abandoned it.
>>>>> .
>>>>>
>>>> I think it's better to do filtering at the partition level as well.
>>>>
>>>> Thanks,
>>>> Kuai
>>>>
>>> .
>>>
>>
> .
>
On 7/19/23 09:28, Yu Kuai wrote:
>
> Hi,
>
> On 2023/07/19 0:33, Sergei Shtepa wrote:
>>
>>
>> On 7/18/23 14:32, Yu Kuai wrote:
>>>
>>> Hi,
>>>
>>> On 2023/07/18 19:25, Sergei Shtepa wrote:
>>>> Hi.
>>>>
>>>> On 7/18/23 03:37, Yu Kuai wrote:
>>>>>
>>>>> Hi,
>>>>>
>>>>> On 2023/07/17 22:39, Sergei Shtepa wrote:
>>>>>>
>>>>>>
>>>>>> On 7/11/23 04:02, Yu Kuai wrote:
>>>>>>> bdev_disk_changed() is not handled, where delete_partition() and
>>>>>>> add_partition() will be called; this means the blkfilter for a partition
>>>>>>> will be removed after a partition rescan. Am I missing something?
>>>>>>
>>>>>> Yes, when the bdev_disk_changed() is called, all disk block devices
>>>>>> are deleted and new ones are re-created. Therefore, the information
>>>>>> about the attached filters will be lost. This is equivalent to
>>>>>> removing the disk and adding it back.
>>>>>>
>>>>>> For the blksnap module, a partition rescan will mean the loss of the
>>>>>> change tracker's data. If a snapshot was created, then such
>>>>>> a partition rescan will cause the snapshot to be corrupted.
>>>>>>
>>>>>
>>>>> I haven't reviewed the blksnap code yet, but this sounds like a problem.
>>>>
>>>> I can't imagine a case where this could be a problem.
>>>> A partition rescan is possible only if the file system has not been
>>>> mounted on any of the disk partitions. The BLKRRPART ioctl will return
>>>> -EBUSY. Therefore, during normal operation of the system, a rescan is
>>>> not performed.
>>>> And if the file systems have not been mounted, it is possible that
>>>> the partition structure of the disk has changed or the disk in a
>>>> removable-media device has been replaced. In this case, it is better
>>>> to detach the filter, otherwise it may lead to incorrect operation of the module.
>>>>
>>>> We can add prechange/postchange callback functions so that the
>>>> filter can track the rescan process. But at the moment, this is not
>>>> necessary for the blksnap module.
>>>
>>> So you mean that blkfilter is only used for the case that the partition
>>> is mounted? (Or do you mean that the partition is opened?)
>>>
>>> Then, I think you mean that the filter should only be used for a partition
>>> that is opened? Otherwise, the filter can be gone at any time, since a
>>> partition rescan can happen at any time.
>>>
>>> //user
>>> 1. attach filter
>>> // other context rescan partition
>>> 2. mount fs
>>> // user will find the filter is gone.
>>
>> Mmm... The fact is that at the moment the only user of the filter is the
>> blksnap module. There are no other filter users yet. The blksnap module
>> solves the problem of creating snapshots, primarily for backup purposes.
>> Therefore, the main use case is to attach a filter to an already running
>> system, where all partitions are laid out and file systems are mounted.
>>
>> If the server is being serviced and the disk is re-partitioned during
>> that time, then disabling the filter is normal. In this case, the
>> change tracker will be reset, and at the next backup, the filter will be
>> attached again.
>
> Thanks for the explanation, I was thinking that blksnap can replace
> dm-snapshot.
Thanks!
At the moment I am developing blksnap with the needs of the Veeam product in mind.
I would be glad if blksnap were useful in other products as well.
If you have any thoughts/questions/suggestions/comments, then write to me
directly. I'll be happy to discuss everything.
To work on the patch, I use this branch:
Link: https://github.com/SergeiShtepa/linux/tree/blksnap-master
The user-space libraries, tools and tests, compatible with the upstream
version, are here:
Link: https://github.com/veeam/blksnap/tree/stable-v2.0
Perhaps it will be useful to you.
>
> Thanks,
> Kuai
>
>>
>> But if I were still solving the problem of preserving the filter across a
>> rescan, then it would be necessary to take into account the UUID and name
>> of the partition (struct partition_meta_info). It is unacceptable for the
>> filter to be attached to another partition by mistake due to a change in
>> the partition structure. A changed() callback would also be good to add, so
>> that the filter receives a notification that the block device has been updated.
>>
>> But I'm not sure that this should be done, since if some code is not used in
>> the kernel, then it should not be in the kernel.
>>
>>>
>>> Thanks,
>>> Kuai
>>>
>>>>
>>>> Therefore, I will refrain from making changes for now.
>>>>
>>>>>
>>>>> possible solutions I have in mind:
>>>>>
>>>>> 1. Store the blkfilter for each partition in bdev_disk_changed() before
>>>>> delete_partition(), and add the blkfilter back after add_partition().
>>>>>
>>>>> 2. Store blkfilters in the gendisk as an xarray, and protect it with
>>>>> 'open_mutex' like 'part_tbl'; the block_device can keep a pointer to
>>>>> reference the blkfilter so that fast-path performance is ok, and the
>>>>> lifetime of the blkfilter can be managed separately.
>>>>>
>>>>>> There was an idea to do filtering at the disk level,
>>>>>> but I abandoned it.
>>>>>> .
>>>>>>
>>>>> I think it's better to do filtering at the partition level as well.
>>>>>
>>>>> Thanks,
>>>>> Kuai
>>>>>
>>>> .
>>>>
>>>
>> .
>>
>
On Tue, Jul 18, 2023 at 11:53:54AM +0200, Sergei Shtepa wrote:
> > Seems a bit weird to have a signed error code that is always negative.
> > Couldn't this be an unsigned number or directly return the error from
> > the ioctl() itself?
>
> Yes, it's a good idea to pass the error code as an unsigned value.
> A positive value could then be passed on successful execution of
> ioctl(), but I would not like to mix values of different signs in one field.
Linux tends to use negative error values in basically all interfaces.
I think it will be less confusing to stick to that.
On Tue, Jul 18, 2023 at 09:37:33AM +0800, Yu Kuai wrote:
> I haven't reviewed the blksnap code yet, but this sounds like a problem.
>
> possible solutions I have in mind:
>
> 1. Store the blkfilter for each partition in bdev_disk_changed() before
> delete_partition(), and add the blkfilter back after add_partition().
>
> 2. Store blkfilters in the gendisk as an xarray, and protect it with
> 'open_mutex' like 'part_tbl'; the block_device can keep a pointer to
> reference the blkfilter so that fast-path performance is ok, and the
> lifetime of the blkfilter can be managed separately.
The whole point of bdev_disk_changed is that the partitions might not
be the same ones as before.