2023-09-14 16:22:01

by Johannes Thumshirn

Subject: [PATCH v9 00/11] btrfs: introduce RAID stripe tree

Updates of the raid-stripe-tree are done at ordered extent write time to save
on bandwidth, while for reading the stripe-tree lookup is done at bio mapping
time, i.e. when the logical to physical translation happens for regular btrfs
RAID as well.

The stripe tree is keyed by an extent's disk_bytenr and disk_num_bytes and
its contents are the respective physical device id and position.
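
As a rough sketch (the exact on-disk definitions are introduced in patch 1 of
this series; the field widths below are an assumption derived from the item
size shown in the dumps and the stack accessors used in patch 3), a stripe
extent item looks roughly like:

struct btrfs_raid_stride {
	__le64 devid;		/* device this stride lives on */
	__le64 physical;	/* physical start of the stride on that device */
	__le64 length;		/* length of the stride */
} __attribute__ ((__packed__));

struct btrfs_stripe_extent {
	__u8 encoding;				/* RAID level, cf. "encoding:" below */
	__u8 reserved[7];
	struct btrfs_raid_stride strides[];	/* one stride per device */
} __attribute__ ((__packed__));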

For an example 1M write (split into 126K segments due to zone-append):
rapido2:/home/johannes/src/fstests# xfs_io -fdc "pwrite -b 1M 0 1M" -c fsync /mnt/test/test
wrote 1048576/1048576 bytes at offset 0
1 MiB, 1 ops; 0.0065 sec (151.538 MiB/sec and 151.5381 ops/sec)

The tree will look as follows (both 128k buffered writes to a ZNS drive):

RAID0 case:
bash-5.2# btrfs inspect-internal dump-tree -t raid_stripe /dev/nvme0n1
btrfs-progs v6.3
raid stripe tree key (RAID_STRIPE_TREE ROOT_ITEM 0)
leaf 805535744 items 1 free space 16218 generation 8 owner RAID_STRIPE_TREE
leaf 805535744 flags 0x1(WRITTEN) backref revision 1
checksum stored 2d2d2262
checksum calced 2d2d2262
fs uuid ab05cfc6-9859-404e-970d-3999b1cb5438
chunk uuid c9470ba2-49ac-4d46-8856-438a18e6bd23
item 0 key (1073741824 RAID_STRIPE_KEY 131072) itemoff 16243 itemsize 56
encoding: RAID0
stripe 0 devid 1 offset 805306368 length 131072
stripe 1 devid 2 offset 536870912 length 131072
total bytes 42949672960
bytes used 294912
uuid ab05cfc6-9859-404e-970d-3999b1cb5438

RAID1 case:
bash-5.2# btrfs inspect-internal dump-tree -t raid_stripe /dev/nvme0n1
btrfs-progs v6.3
raid stripe tree key (RAID_STRIPE_TREE ROOT_ITEM 0)
leaf 805535744 items 1 free space 16218 generation 8 owner RAID_STRIPE_TREE
leaf 805535744 flags 0x1(WRITTEN) backref revision 1
checksum stored 56199539
checksum calced 56199539
fs uuid 9e693a37-fbd1-4891-aed2-e7fe64605045
chunk uuid 691874fc-1b9c-469b-bd7f-05e0e6ba88c4
item 0 key (939524096 RAID_STRIPE_KEY 131072) itemoff 16243 itemsize 56
encoding: RAID1
stripe 0 devid 1 offset 939524096 length 65536
stripe 1 devid 2 offset 536870912 length 65536
total bytes 42949672960
bytes used 294912
uuid 9e693a37-fbd1-4891-aed2-e7fe64605045

A design document can be found here:
https://docs.google.com/document/d/1Iui_jMidCd4MVBNSSLXRfO7p5KmvnoQL/edit?usp=sharing&ouid=103609947580185458266&rtpof=true&sd=true

The user-space part of this series can be found here:
https://lore.kernel.org/linux-btrfs/[email protected]

Changes to v8:
- Changed tracepoints according to David's comments
- Mark on-disk structures as packed
- Got rid of __DECLARE_FLEX_ARRAY
- Rebase onto misc-next
- Split out helpers for new btrfs_load_block_group_zone_info RAID cases
- Constify declarations where possible
- Initialise variables before use
- Lower scope of variables
- Remove btrfs_stripe_root() helper
- Pick different BTRFS_RAID_STRIPE_KEY constant
- Reorder on-disk encoding types to match the raid_index
- And possibly more, please git range-diff the versions
- Link to v8: https://lore.kernel.org/r/[email protected]

Changes to v7:
- Huge rewrite

v7 of the patchset can be found here:
https://lore.kernel.org/linux-btrfs/[email protected]/

Changes to v6:
- Fix degraded RAID1 mounts
- Fix RAID0/10 mounts

v6 of the patchset can be found here:
https://lore.kernel.org/linux-btrfs/[email protected]

Changes to v5:
- Incorporated review comments from Josef and Christoph
- Rebased onto misc-next

v5 of the patchset can be found here:
https://lore.kernel.org/linux-btrfs/[email protected]

Changes to v4:
- Added patch to check for RST feature in sysfs
- Added RST lookups for scrubbing
- Fixed the error handling bug Josef pointed out
- Only check if we need to write out a RST once per delayed_ref head
- Added support for zoned data DUP with RST

Changes to v3:
- Rebased onto [email protected]
- Incorporated Josef's review
- Merged related patches

v3 of the patchset can be found here:
https://lore.kernel.org/linux-btrfs/[email protected]

Changes to v2:
- Bug fixes
- Rebased onto [email protected]
- Added tracepoints
- Added leak checker
- Added RAID0 and RAID10

v2 of the patchset can be found here:
https://lore.kernel.org/linux-btrfs/[email protected]

Changes to v1:
- Write the stripe-tree at delayed-ref time (Qu)
- Add a different write path for preallocation

v1 of the patchset can be found here:
https://lore.kernel.org/linux-btrfs/[email protected]/

Signed-off-by: Johannes Thumshirn <[email protected]>
---
Johannes Thumshirn (11):
btrfs: add raid stripe tree definitions
btrfs: read raid-stripe-tree from disk
btrfs: add support for inserting raid stripe extents
btrfs: delete stripe extent on extent deletion
btrfs: lookup physical address from stripe extent
btrfs: implement RST version of scrub
btrfs: zoned: allow zoned RAID
btrfs: add raid stripe tree pretty printer
btrfs: announce presence of raid-stripe-tree in sysfs
btrfs: add trace events for RST
btrfs: add raid-stripe-tree to features enabled with debug

fs/btrfs/Makefile | 2 +-
fs/btrfs/accessors.h | 10 +
fs/btrfs/bio.c | 25 +++
fs/btrfs/block-rsv.c | 6 +
fs/btrfs/disk-io.c | 18 ++
fs/btrfs/extent-tree.c | 7 +
fs/btrfs/fs.h | 4 +-
fs/btrfs/inode.c | 8 +-
fs/btrfs/locking.c | 1 +
fs/btrfs/ordered-data.c | 1 +
fs/btrfs/ordered-data.h | 2 +
fs/btrfs/print-tree.c | 26 +++
fs/btrfs/raid-stripe-tree.c | 449 ++++++++++++++++++++++++++++++++++++++++
fs/btrfs/raid-stripe-tree.h | 52 +++++
fs/btrfs/scrub.c | 53 +++++
fs/btrfs/sysfs.c | 3 +
fs/btrfs/volumes.c | 43 +++-
fs/btrfs/volumes.h | 16 +-
fs/btrfs/zoned.c | 144 ++++++++++++-
include/trace/events/btrfs.h | 75 +++++++
include/uapi/linux/btrfs.h | 1 +
include/uapi/linux/btrfs_tree.h | 31 +++
22 files changed, 954 insertions(+), 23 deletions(-)
---
base-commit: 1d73023d96965a5c8fb76a39aec88d840ebe5b21
change-id: 20230613-raid-stripe-tree-e330c9a45cc3

Best regards,
--
Johannes Thumshirn <[email protected]>


2023-09-14 16:22:14

by Johannes Thumshirn

Subject: [PATCH v9 10/11] btrfs: add trace events for RST

Add trace events for raid-stripe-tree operations.

Signed-off-by: Johannes Thumshirn <[email protected]>
---
fs/btrfs/raid-stripe-tree.c | 8 +++++
include/trace/events/btrfs.h | 75 ++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 83 insertions(+)

diff --git a/fs/btrfs/raid-stripe-tree.c b/fs/btrfs/raid-stripe-tree.c
index 63bf62c33436..ee4155377bf9 100644
--- a/fs/btrfs/raid-stripe-tree.c
+++ b/fs/btrfs/raid-stripe-tree.c
@@ -62,6 +62,9 @@ int btrfs_delete_raid_extent(struct btrfs_trans_handle *trans, u64 start,
if (found_end <= start)
break;

+ trace_btrfs_raid_extent_delete(fs_info, start, end,
+ found_start, found_end);
+
ASSERT(found_start >= start && found_end <= end);
ret = btrfs_del_item(trans, stripe_root, path);
if (ret)
@@ -94,6 +97,8 @@ static int btrfs_insert_one_raid_extent(struct btrfs_trans_handle *trans,
return -ENOMEM;
}

+ trace_btrfs_insert_one_raid_extent(fs_info, bioc->logical, bioc->size,
+ num_stripes);
btrfs_set_stack_stripe_extent_encoding(stripe_extent, encoding);
for (int i = 0; i < num_stripes; i++) {
u64 devid = bioc->stripes[i].dev->devid;
@@ -414,6 +419,9 @@ int btrfs_get_raid_extent_offset(struct btrfs_fs_info *fs_info,

stripe->physical = physical + offset;

+ trace_btrfs_get_raid_extent_offset(fs_info, logical, *length,
+ stripe->physical, devid);
+
ret = 0;
goto free_path;
}
diff --git a/include/trace/events/btrfs.h b/include/trace/events/btrfs.h
index b2db2c2f1c57..fcf246f84547 100644
--- a/include/trace/events/btrfs.h
+++ b/include/trace/events/btrfs.h
@@ -2497,6 +2497,81 @@ DEFINE_EVENT(btrfs_raid56_bio, raid56_write,
TP_ARGS(rbio, bio, trace_info)
);

+TRACE_EVENT(btrfs_insert_one_raid_extent,
+
+ TP_PROTO(const struct btrfs_fs_info *fs_info, u64 logical, u64 length,
+ int num_stripes),
+
+ TP_ARGS(fs_info, logical, length, num_stripes),
+
+ TP_STRUCT__entry_btrfs(
+ __field( u64, logical )
+ __field( u64, length )
+ __field( int, num_stripes )
+ ),
+
+ TP_fast_assign_btrfs(fs_info,
+ __entry->logical = logical;
+ __entry->length = length;
+ __entry->num_stripes = num_stripes;
+ ),
+
+ TP_printk_btrfs("logical=%llu length=%llu num_stripes=%d",
+ __entry->logical, __entry->length,
+ __entry->num_stripes)
+);
+
+TRACE_EVENT(btrfs_raid_extent_delete,
+
+ TP_PROTO(const struct btrfs_fs_info *fs_info, u64 start, u64 end,
+ u64 found_start, u64 found_end),
+
+ TP_ARGS(fs_info, start, end, found_start, found_end),
+
+ TP_STRUCT__entry_btrfs(
+ __field( u64, start )
+ __field( u64, end )
+ __field( u64, found_start )
+ __field( u64, found_end )
+ ),
+
+ TP_fast_assign_btrfs(fs_info,
+ __entry->start = start;
+ __entry->end = end;
+ __entry->found_start = found_start;
+ __entry->found_end = found_end;
+ ),
+
+ TP_printk_btrfs("start=%llu end=%llu found_start=%llu found_end=%llu",
+ __entry->start, __entry->end, __entry->found_start,
+ __entry->found_end)
+);
+
+TRACE_EVENT(btrfs_get_raid_extent_offset,
+
+ TP_PROTO(const struct btrfs_fs_info *fs_info, u64 logical, u64 length,
+ u64 physical, u64 devid),
+
+ TP_ARGS(fs_info, logical, length, physical, devid),
+
+ TP_STRUCT__entry_btrfs(
+ __field( u64, logical )
+ __field( u64, length )
+ __field( u64, physical )
+ __field( u64, devid )
+ ),
+
+ TP_fast_assign_btrfs(fs_info,
+ __entry->logical = logical;
+ __entry->length = length;
+ __entry->physical = physical;
+ __entry->devid = devid;
+ ),
+
+ TP_printk_btrfs("logical=%llu length=%llu physical=%llu devid=%llu",
+ __entry->logical, __entry->length, __entry->physical,
+ __entry->devid)
+);
#endif /* _TRACE_BTRFS_H */

/* This part must be outside protection */

--
2.41.0

2023-09-14 16:22:22

by Johannes Thumshirn

Subject: [PATCH v9 03/11] btrfs: add support for inserting raid stripe extents

Add support for inserting stripe extents into the raid stripe tree on
completion of every write that needs an extra logical-to-physical
translation when using RAID.

Inserting the stripe extents happens after the data I/O has completed; this
is done to a) support zone-append and b) rule out the possibility of a
RAID-write-hole.

Signed-off-by: Johannes Thumshirn <[email protected]>
---
fs/btrfs/Makefile | 2 +-
fs/btrfs/bio.c | 23 +++++
fs/btrfs/extent-tree.c | 1 +
fs/btrfs/inode.c | 8 +-
fs/btrfs/ordered-data.c | 1 +
fs/btrfs/ordered-data.h | 2 +
fs/btrfs/raid-stripe-tree.c | 245 ++++++++++++++++++++++++++++++++++++++++++++
fs/btrfs/raid-stripe-tree.h | 34 ++++++
fs/btrfs/volumes.c | 4 +-
fs/btrfs/volumes.h | 15 +--
10 files changed, 326 insertions(+), 9 deletions(-)

diff --git a/fs/btrfs/Makefile b/fs/btrfs/Makefile
index c57d80729d4f..525af975f61c 100644
--- a/fs/btrfs/Makefile
+++ b/fs/btrfs/Makefile
@@ -33,7 +33,7 @@ btrfs-y += super.o ctree.o extent-tree.o print-tree.o root-tree.o dir-item.o \
uuid-tree.o props.o free-space-tree.o tree-checker.o space-info.o \
block-rsv.o delalloc-space.o block-group.o discard.o reflink.o \
subpage.o tree-mod-log.o extent-io-tree.o fs.o messages.o bio.o \
- lru_cache.o
+ lru_cache.o raid-stripe-tree.o

btrfs-$(CONFIG_BTRFS_FS_POSIX_ACL) += acl.o
btrfs-$(CONFIG_BTRFS_FS_REF_VERIFY) += ref-verify.o
diff --git a/fs/btrfs/bio.c b/fs/btrfs/bio.c
index 31ff36990404..ddbe6f8d4ea2 100644
--- a/fs/btrfs/bio.c
+++ b/fs/btrfs/bio.c
@@ -14,6 +14,7 @@
#include "rcu-string.h"
#include "zoned.h"
#include "file-item.h"
+#include "raid-stripe-tree.h"

static struct bio_set btrfs_bioset;
static struct bio_set btrfs_clone_bioset;
@@ -415,6 +416,9 @@ static void btrfs_orig_write_end_io(struct bio *bio)
else
bio->bi_status = BLK_STS_OK;

+ if (bio_op(bio) == REQ_OP_ZONE_APPEND && !bio->bi_status)
+ stripe->physical = bio->bi_iter.bi_sector << SECTOR_SHIFT;
+
btrfs_orig_bbio_end_io(bbio);
btrfs_put_bioc(bioc);
}
@@ -426,6 +430,8 @@ static void btrfs_clone_write_end_io(struct bio *bio)
if (bio->bi_status) {
atomic_inc(&stripe->bioc->error);
btrfs_log_dev_io_error(bio, stripe->dev);
+ } else if (bio_op(bio) == REQ_OP_ZONE_APPEND) {
+ stripe->physical = bio->bi_iter.bi_sector << SECTOR_SHIFT;
}

/* Pass on control to the original bio this one was cloned from */
@@ -487,6 +493,7 @@ static void btrfs_submit_mirrored_bio(struct btrfs_io_context *bioc, int dev_nr)
bio->bi_private = &bioc->stripes[dev_nr];
bio->bi_iter.bi_sector = bioc->stripes[dev_nr].physical >> SECTOR_SHIFT;
bioc->stripes[dev_nr].bioc = bioc;
+ bioc->size = bio->bi_iter.bi_size;
btrfs_submit_dev_bio(bioc->stripes[dev_nr].dev, bio);
}

@@ -496,6 +503,8 @@ static void __btrfs_submit_bio(struct bio *bio, struct btrfs_io_context *bioc,
if (!bioc) {
/* Single mirror read/write fast path. */
btrfs_bio(bio)->mirror_num = mirror_num;
+ if (bio_op(bio) != REQ_OP_READ)
+ btrfs_bio(bio)->orig_physical = smap->physical;
bio->bi_iter.bi_sector = smap->physical >> SECTOR_SHIFT;
if (bio_op(bio) != REQ_OP_READ)
btrfs_bio(bio)->orig_physical = smap->physical;
@@ -688,6 +697,20 @@ static bool btrfs_submit_chunk(struct btrfs_bio *bbio, int mirror_num)
bio->bi_opf |= REQ_OP_ZONE_APPEND;
}

+ if (is_data_bbio(bbio) && bioc &&
+ btrfs_need_stripe_tree_update(bioc->fs_info,
+ bioc->map_type)) {
+ /*
+ * No locking for the list update, as we only add to
+ * the list in the I/O submission path, and list
+ * iteration only happens in the completion path,
+ * which can't happen until after the last submission.
+ */
+ btrfs_get_bioc(bioc);
+ list_add_tail(&bioc->ordered_entry,
+ &bbio->ordered->bioc_list);
+ }
+
/*
* Csum items for reloc roots have already been cloned at this
* point, so they are handled as part of the no-checksum case.
diff --git a/fs/btrfs/extent-tree.c b/fs/btrfs/extent-tree.c
index cb12bfb047e7..959d7449ea0d 100644
--- a/fs/btrfs/extent-tree.c
+++ b/fs/btrfs/extent-tree.c
@@ -42,6 +42,7 @@
#include "file-item.h"
#include "orphan.h"
#include "tree-checker.h"
+#include "raid-stripe-tree.h"

#undef SCRAMBLE_DELAYED_REFS

diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
index e02a5ba5b533..b5e0ed3a36f7 100644
--- a/fs/btrfs/inode.c
+++ b/fs/btrfs/inode.c
@@ -71,6 +71,7 @@
#include "super.h"
#include "orphan.h"
#include "backref.h"
+#include "raid-stripe-tree.h"

struct btrfs_iget_args {
u64 ino;
@@ -3091,6 +3092,10 @@ int btrfs_finish_one_ordered(struct btrfs_ordered_extent *ordered_extent)

trans->block_rsv = &inode->block_rsv;

+ ret = btrfs_insert_raid_extent(trans, ordered_extent);
+ if (ret)
+ goto out;
+
if (test_bit(BTRFS_ORDERED_COMPRESSED, &ordered_extent->flags))
compress_type = ordered_extent->compress_type;
if (test_bit(BTRFS_ORDERED_PREALLOC, &ordered_extent->flags)) {
@@ -3224,7 +3229,8 @@ int btrfs_finish_one_ordered(struct btrfs_ordered_extent *ordered_extent)
int btrfs_finish_ordered_io(struct btrfs_ordered_extent *ordered)
{
if (btrfs_is_zoned(btrfs_sb(ordered->inode->i_sb)) &&
- !test_bit(BTRFS_ORDERED_IOERR, &ordered->flags))
+ !test_bit(BTRFS_ORDERED_IOERR, &ordered->flags) &&
+ list_empty(&ordered->bioc_list))
btrfs_finish_ordered_zoned(ordered);
return btrfs_finish_one_ordered(ordered);
}
diff --git a/fs/btrfs/ordered-data.c b/fs/btrfs/ordered-data.c
index 345c449d588c..55c7d5543265 100644
--- a/fs/btrfs/ordered-data.c
+++ b/fs/btrfs/ordered-data.c
@@ -191,6 +191,7 @@ static struct btrfs_ordered_extent *alloc_ordered_extent(
INIT_LIST_HEAD(&entry->log_list);
INIT_LIST_HEAD(&entry->root_extent_list);
INIT_LIST_HEAD(&entry->work_list);
+ INIT_LIST_HEAD(&entry->bioc_list);
init_completion(&entry->completion);

/*
diff --git a/fs/btrfs/ordered-data.h b/fs/btrfs/ordered-data.h
index 173bd5c5df26..1c51ac57e5df 100644
--- a/fs/btrfs/ordered-data.h
+++ b/fs/btrfs/ordered-data.h
@@ -151,6 +151,8 @@ struct btrfs_ordered_extent {
struct completion completion;
struct btrfs_work flush_work;
struct list_head work_list;
+
+ struct list_head bioc_list;
};

static inline void
diff --git a/fs/btrfs/raid-stripe-tree.c b/fs/btrfs/raid-stripe-tree.c
new file mode 100644
index 000000000000..7cdcc45a8796
--- /dev/null
+++ b/fs/btrfs/raid-stripe-tree.c
@@ -0,0 +1,245 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2023 Western Digital Corporation or its affiliates.
+ */
+
+#include <linux/btrfs_tree.h>
+
+#include "ctree.h"
+#include "fs.h"
+#include "accessors.h"
+#include "transaction.h"
+#include "disk-io.h"
+#include "raid-stripe-tree.h"
+#include "volumes.h"
+#include "misc.h"
+#include "print-tree.h"
+
+static int btrfs_insert_one_raid_extent(struct btrfs_trans_handle *trans,
+ int num_stripes,
+ struct btrfs_io_context *bioc)
+{
+ struct btrfs_fs_info *fs_info = trans->fs_info;
+ struct btrfs_key stripe_key;
+ struct btrfs_root *stripe_root = fs_info->stripe_root;
+ u8 encoding = btrfs_bg_flags_to_raid_index(bioc->map_type);
+ struct btrfs_stripe_extent *stripe_extent;
+ const size_t item_size = struct_size(stripe_extent, strides, num_stripes);
+ int ret;
+
+ stripe_extent = kzalloc(item_size, GFP_NOFS);
+ if (!stripe_extent) {
+ btrfs_abort_transaction(trans, -ENOMEM);
+ btrfs_end_transaction(trans);
+ return -ENOMEM;
+ }
+
+ btrfs_set_stack_stripe_extent_encoding(stripe_extent, encoding);
+ for (int i = 0; i < num_stripes; i++) {
+ u64 devid = bioc->stripes[i].dev->devid;
+ u64 physical = bioc->stripes[i].physical;
+ u64 length = bioc->stripes[i].length;
+ struct btrfs_raid_stride *raid_stride =
+ &stripe_extent->strides[i];
+
+ if (length == 0)
+ length = bioc->size;
+
+ btrfs_set_stack_raid_stride_devid(raid_stride, devid);
+ btrfs_set_stack_raid_stride_physical(raid_stride, physical);
+ btrfs_set_stack_raid_stride_length(raid_stride, length);
+ }
+
+ stripe_key.objectid = bioc->logical;
+ stripe_key.type = BTRFS_RAID_STRIPE_KEY;
+ stripe_key.offset = bioc->size;
+
+ ret = btrfs_insert_item(trans, stripe_root, &stripe_key, stripe_extent,
+ item_size);
+ if (ret)
+ btrfs_abort_transaction(trans, ret);
+
+ kfree(stripe_extent);
+
+ return ret;
+}
+
+static int btrfs_insert_mirrored_raid_extents(struct btrfs_trans_handle *trans,
+ struct btrfs_ordered_extent *ordered,
+ u64 map_type)
+{
+ int num_stripes = btrfs_bg_type_to_factor(map_type);
+ struct btrfs_io_context *bioc;
+ int ret;
+
+ list_for_each_entry(bioc, &ordered->bioc_list, ordered_entry) {
+ ret = btrfs_insert_one_raid_extent(trans, num_stripes, bioc);
+ if (ret)
+ return ret;
+ }
+
+ return 0;
+}
+
+static int btrfs_insert_striped_mirrored_raid_extents(
+ struct btrfs_trans_handle *trans,
+ struct btrfs_ordered_extent *ordered,
+ u64 map_type)
+{
+ struct btrfs_io_context *bioc;
+ struct btrfs_io_context *rbioc;
+ const int nstripes = list_count_nodes(&ordered->bioc_list);
+ const int index = btrfs_bg_flags_to_raid_index(map_type);
+ const int substripes = btrfs_raid_array[index].sub_stripes;
+ const int max_stripes =
+ trans->fs_info->fs_devices->rw_devices / substripes;
+ int left = nstripes;
+ int i;
+ int ret = 0;
+ u64 stripe_end;
+ u64 prev_end;
+
+ if (nstripes == 1)
+ return btrfs_insert_mirrored_raid_extents(trans, ordered, map_type);
+
+ rbioc = kzalloc(struct_size(rbioc, stripes, nstripes * substripes),
+ GFP_NOFS);
+ if (!rbioc)
+ return -ENOMEM;
+
+ rbioc->map_type = map_type;
+ rbioc->logical = list_first_entry(&ordered->bioc_list, typeof(*rbioc),
+ ordered_entry)->logical;
+
+ stripe_end = rbioc->logical;
+ prev_end = stripe_end;
+ i = 0;
+ list_for_each_entry(bioc, &ordered->bioc_list, ordered_entry) {
+
+ rbioc->size += bioc->size;
+ for (int j = 0; j < substripes; j++) {
+ int stripe = i + j;
+ rbioc->stripes[stripe].dev = bioc->stripes[j].dev;
+ rbioc->stripes[stripe].physical = bioc->stripes[j].physical;
+ rbioc->stripes[stripe].length = bioc->size;
+ }
+
+ stripe_end += rbioc->size;
+ if (i >= nstripes ||
+ (stripe_end - prev_end >= max_stripes * BTRFS_STRIPE_LEN)) {
+ ret = btrfs_insert_one_raid_extent(trans,
+ nstripes * substripes,
+ rbioc);
+ if (ret)
+ goto out;
+
+ left -= nstripes;
+ i = 0;
+ rbioc->logical += rbioc->size;
+ rbioc->size = 0;
+ } else {
+ i += substripes;
+ prev_end = stripe_end;
+ }
+ }
+
+ if (left) {
+ bioc = list_prev_entry(bioc, ordered_entry);
+ ret = btrfs_insert_one_raid_extent(trans, substripes, bioc);
+ }
+
+out:
+ kfree(rbioc);
+ return ret;
+}
+
+static int btrfs_insert_striped_raid_extents(struct btrfs_trans_handle *trans,
+ struct btrfs_ordered_extent *ordered,
+ u64 map_type)
+{
+ struct btrfs_io_context *bioc;
+ struct btrfs_io_context *rbioc;
+ const int nstripes = list_count_nodes(&ordered->bioc_list);
+ int i;
+ int ret = 0;
+
+ rbioc = kzalloc(struct_size(rbioc, stripes, nstripes), GFP_NOFS);
+ if (!rbioc)
+ return -ENOMEM;
+ rbioc->map_type = map_type;
+ rbioc->logical = list_first_entry(&ordered->bioc_list, typeof(*rbioc),
+ ordered_entry)->logical;
+
+ i = 0;
+ list_for_each_entry(bioc, &ordered->bioc_list, ordered_entry) {
+ rbioc->size += bioc->size;
+ rbioc->stripes[i].dev = bioc->stripes[0].dev;
+ rbioc->stripes[i].physical = bioc->stripes[0].physical;
+ rbioc->stripes[i].length = bioc->size;
+
+ if (i == nstripes - 1) {
+ ret = btrfs_insert_one_raid_extent(trans, nstripes, rbioc);
+ if (ret)
+ goto out;
+
+ i = 0;
+ rbioc->logical += rbioc->size;
+ rbioc->size = 0;
+ } else {
+ i++;
+ }
+ }
+
+ if (i && i < nstripes - 1)
+ ret = btrfs_insert_one_raid_extent(trans, i, rbioc);
+
+out:
+ kfree(rbioc);
+ return ret;
+}
+
+int btrfs_insert_raid_extent(struct btrfs_trans_handle *trans,
+ struct btrfs_ordered_extent *ordered_extent)
+{
+ struct btrfs_io_context *bioc;
+ u64 map_type;
+ int ret;
+
+ if (!trans->fs_info->stripe_root)
+ return 0;
+
+ map_type = list_first_entry(&ordered_extent->bioc_list, typeof(*bioc),
+ ordered_entry)->map_type;
+
+ switch (map_type & BTRFS_BLOCK_GROUP_PROFILE_MASK) {
+ case BTRFS_BLOCK_GROUP_DUP:
+ case BTRFS_BLOCK_GROUP_RAID1:
+ case BTRFS_BLOCK_GROUP_RAID1C3:
+ case BTRFS_BLOCK_GROUP_RAID1C4:
+ ret = btrfs_insert_mirrored_raid_extents(trans, ordered_extent,
+ map_type);
+ break;
+ case BTRFS_BLOCK_GROUP_RAID0:
+ ret = btrfs_insert_striped_raid_extents(trans, ordered_extent,
+ map_type);
+ break;
+ case BTRFS_BLOCK_GROUP_RAID10:
+ ret = btrfs_insert_striped_mirrored_raid_extents(trans, ordered_extent, map_type);
+ break;
+ default:
+ btrfs_err(trans->fs_info, "unknown block-group profile %lld",
+ map_type & BTRFS_BLOCK_GROUP_PROFILE_MASK);
+ ASSERT(0);
+ ret = -EINVAL;
+ break;
+ }
+
+ while (!list_empty(&ordered_extent->bioc_list)) {
+ bioc = list_first_entry(&ordered_extent->bioc_list,
+ typeof(*bioc), ordered_entry);
+ list_del(&bioc->ordered_entry);
+ btrfs_put_bioc(bioc);
+ }
+
+ return ret;
+}
diff --git a/fs/btrfs/raid-stripe-tree.h b/fs/btrfs/raid-stripe-tree.h
new file mode 100644
index 000000000000..884f0e99d5e8
--- /dev/null
+++ b/fs/btrfs/raid-stripe-tree.h
@@ -0,0 +1,34 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2023 Western Digital Corporation or its affiliates.
+ */
+
+#ifndef BTRFS_RAID_STRIPE_TREE_H
+#define BTRFS_RAID_STRIPE_TREE_H
+
+struct btrfs_io_context;
+struct btrfs_io_stripe;
+struct btrfs_ordered_extent;
+struct btrfs_trans_handle;
+
+int btrfs_insert_raid_extent(struct btrfs_trans_handle *trans,
+ struct btrfs_ordered_extent *ordered_extent);
+
+static inline bool btrfs_need_stripe_tree_update(struct btrfs_fs_info *fs_info,
+ u64 map_type)
+{
+ u64 type = map_type & BTRFS_BLOCK_GROUP_TYPE_MASK;
+ u64 profile = map_type & BTRFS_BLOCK_GROUP_PROFILE_MASK;
+
+ if (!fs_info->stripe_root)
+ return false;
+
+ if (type != BTRFS_BLOCK_GROUP_DATA)
+ return false;
+
+ if (profile & BTRFS_BLOCK_GROUP_RAID1_MASK)
+ return true;
+
+ return false;
+}
+#endif
diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c
index a1eae8b5b412..c2bac87912c7 100644
--- a/fs/btrfs/volumes.c
+++ b/fs/btrfs/volumes.c
@@ -5984,6 +5984,7 @@ static int find_live_mirror(struct btrfs_fs_info *fs_info,
}

static struct btrfs_io_context *alloc_btrfs_io_context(struct btrfs_fs_info *fs_info,
+ u64 logical,
u16 total_stripes)
{
struct btrfs_io_context *bioc;
@@ -6003,6 +6004,7 @@ static struct btrfs_io_context *alloc_btrfs_io_context(struct btrfs_fs_info *fs_
bioc->fs_info = fs_info;
bioc->replace_stripe_src = -1;
bioc->full_stripe_logical = (u64)-1;
+ bioc->logical = logical;

return bioc;
}
@@ -6537,7 +6539,7 @@ int btrfs_map_block(struct btrfs_fs_info *fs_info, enum btrfs_map_op op,
goto out;
}

- bioc = alloc_btrfs_io_context(fs_info, num_alloc_stripes);
+ bioc = alloc_btrfs_io_context(fs_info, logical, num_alloc_stripes);
if (!bioc) {
ret = -ENOMEM;
goto out;
diff --git a/fs/btrfs/volumes.h b/fs/btrfs/volumes.h
index 26397adc8706..2043aff6e966 100644
--- a/fs/btrfs/volumes.h
+++ b/fs/btrfs/volumes.h
@@ -390,12 +390,11 @@ struct btrfs_fs_devices {

struct btrfs_io_stripe {
struct btrfs_device *dev;
- union {
- /* Block mapping */
- u64 physical;
- /* For the endio handler */
- struct btrfs_io_context *bioc;
- };
+ /* Block mapping */
+ u64 physical;
+ u64 length;
+ /* For the endio handler */
+ struct btrfs_io_context *bioc;
};

struct btrfs_discard_stripe {
@@ -428,6 +427,10 @@ struct btrfs_io_context {
atomic_t error;
u16 max_errors;

+ u64 logical;
+ u64 size;
+ struct list_head ordered_entry;
+
/*
* The total number of stripes, including the extra duplicated
* stripe for replace.

--
2.41.0

2023-09-14 16:22:30

by Johannes Thumshirn

Subject: [PATCH v9 07/11] btrfs: zoned: allow zoned RAID

When we have a raid-stripe-tree, we can do RAID0/1/10 on zoned devices for
data block-groups. For meta-data block-groups, we don't actually need
anything special, as all meta-data I/O is protected by the
btrfs_zoned_meta_io_lock() already.

Signed-off-by: Johannes Thumshirn <[email protected]>
---
fs/btrfs/raid-stripe-tree.h | 7 ++-
fs/btrfs/volumes.c | 2 +
fs/btrfs/zoned.c | 144 ++++++++++++++++++++++++++++++++++++++++++--
3 files changed, 148 insertions(+), 5 deletions(-)

diff --git a/fs/btrfs/raid-stripe-tree.h b/fs/btrfs/raid-stripe-tree.h
index 5d9629a815c1..f31292ab9030 100644
--- a/fs/btrfs/raid-stripe-tree.h
+++ b/fs/btrfs/raid-stripe-tree.h
@@ -6,6 +6,11 @@
#ifndef BTRFS_RAID_STRIPE_TREE_H
#define BTRFS_RAID_STRIPE_TREE_H

+#define BTRFS_RST_SUPP_BLOCK_GROUP_MASK (BTRFS_BLOCK_GROUP_DUP |\
+ BTRFS_BLOCK_GROUP_RAID1_MASK |\
+ BTRFS_BLOCK_GROUP_RAID0 |\
+ BTRFS_BLOCK_GROUP_RAID10)
+
struct btrfs_io_context;
struct btrfs_io_stripe;
struct btrfs_ordered_extent;
@@ -32,7 +37,7 @@ static inline bool btrfs_need_stripe_tree_update(struct btrfs_fs_info *fs_info,
if (type != BTRFS_BLOCK_GROUP_DATA)
return false;

- if (profile & BTRFS_BLOCK_GROUP_RAID1_MASK)
+ if (profile & BTRFS_RST_SUPP_BLOCK_GROUP_MASK)
return true;

return false;
diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c
index 2326dbcf85f6..dc311e38eb11 100644
--- a/fs/btrfs/volumes.c
+++ b/fs/btrfs/volumes.c
@@ -6541,6 +6541,8 @@ int btrfs_map_block(struct btrfs_fs_info *fs_info, enum btrfs_map_op op,
* I/O context structure.
*/
if (smap && num_alloc_stripes == 1 &&
+ !(btrfs_need_stripe_tree_update(fs_info, map->type) &&
+ op != BTRFS_MAP_READ) &&
!((map->type & BTRFS_BLOCK_GROUP_RAID56_MASK) && mirror_num > 1)) {
ret = set_io_stripe(fs_info, op, logical, length, smap, map,
stripe_index, stripe_offset, stripe_nr);
diff --git a/fs/btrfs/zoned.c b/fs/btrfs/zoned.c
index d05510cb2cb2..ce2846c944d2 100644
--- a/fs/btrfs/zoned.c
+++ b/fs/btrfs/zoned.c
@@ -1397,9 +1397,11 @@ static int btrfs_load_block_group_dup(struct btrfs_block_group *bg,
struct zone_info *zone_info,
unsigned long *active)
{
- if (map->type & BTRFS_BLOCK_GROUP_DATA) {
- btrfs_err(bg->fs_info,
- "zoned: profile DUP not yet supported on data bg");
+ struct btrfs_fs_info *fs_info = bg->fs_info;
+
+ if (map->type & BTRFS_BLOCK_GROUP_DATA &&
+ !fs_info->stripe_root) {
+ btrfs_err(fs_info, "zoned: data DUP profile needs stripe_root");
return -EINVAL;
}

@@ -1433,6 +1435,133 @@ static int btrfs_load_block_group_dup(struct btrfs_block_group *bg,
return 0;
}

+static int btrfs_load_block_group_raid1(struct btrfs_block_group *bg,
+ struct map_lookup *map,
+ struct zone_info *zone_info,
+ unsigned long *active)
+{
+ struct btrfs_fs_info *fs_info = bg->fs_info;
+ int i;
+
+ if (map->type & BTRFS_BLOCK_GROUP_DATA &&
+ !fs_info->stripe_root) {
+ btrfs_err(fs_info, "zoned: data %s needs stripe_root",
+ btrfs_bg_type_to_raid_name(map->type));
+ return -EINVAL;
+
+ }
+
+ for (i = 0; i < map->num_stripes; i++) {
+ if (zone_info[i].alloc_offset == WP_MISSING_DEV ||
+ zone_info[i].alloc_offset == WP_CONVENTIONAL)
+ continue;
+
+ if ((zone_info[0].alloc_offset != zone_info[i].alloc_offset) &&
+ !btrfs_test_opt(fs_info, DEGRADED)) {
+ btrfs_err(fs_info,
+ "zoned: write pointer offset mismatch of zones in %s profile",
+ btrfs_bg_type_to_raid_name(map->type));
+ return -EIO;
+ }
+ if (test_bit(0, active) != test_bit(i, active)) {
+ if (!btrfs_test_opt(fs_info, DEGRADED) &&
+ !btrfs_zone_activate(bg)) {
+ return -EIO;
+ }
+ } else {
+ if (test_bit(0, active))
+ set_bit(BLOCK_GROUP_FLAG_ZONE_IS_ACTIVE,
+ &bg->runtime_flags);
+ }
+ /*
+ * In case a device is missing we have a cap of 0, so don't
+ * use it.
+ */
+ bg->zone_capacity = min_not_zero(zone_info[0].capacity,
+ zone_info[1].capacity);
+ }
+
+ if (zone_info[0].alloc_offset != WP_MISSING_DEV)
+ bg->alloc_offset = zone_info[0].alloc_offset;
+ else
+ bg->alloc_offset = zone_info[i - 1].alloc_offset;
+
+ return 0;
+}
+
+static int btrfs_load_block_group_raid0(struct btrfs_block_group *bg,
+ struct map_lookup *map,
+ struct zone_info *zone_info,
+ unsigned long *active)
+{
+ struct btrfs_fs_info *fs_info = bg->fs_info;
+
+ if (map->type & BTRFS_BLOCK_GROUP_DATA &&
+ !fs_info->stripe_root) {
+ btrfs_err(fs_info, "zoned: data %s needs stripe_root",
+ btrfs_bg_type_to_raid_name(map->type));
+ return -EINVAL;
+
+ }
+
+ for (int i = 0; i < map->num_stripes; i++) {
+ if (zone_info[i].alloc_offset == WP_MISSING_DEV ||
+ zone_info[i].alloc_offset == WP_CONVENTIONAL)
+ continue;
+
+ if (test_bit(0, active) != test_bit(i, active)) {
+ if (!btrfs_zone_activate(bg))
+ return -EIO;
+ } else {
+ if (test_bit(0, active))
+ set_bit(BLOCK_GROUP_FLAG_ZONE_IS_ACTIVE,
+ &bg->runtime_flags);
+ }
+ bg->zone_capacity += zone_info[i].capacity;
+ bg->alloc_offset += zone_info[i].alloc_offset;
+ }
+
+ return 0;
+}
+
+static int btrfs_load_block_group_raid10(struct btrfs_block_group *bg,
+ struct map_lookup *map,
+ struct zone_info *zone_info,
+ unsigned long *active)
+{
+ struct btrfs_fs_info *fs_info = bg->fs_info;
+
+ if (map->type & BTRFS_BLOCK_GROUP_DATA &&
+ !fs_info->stripe_root) {
+ btrfs_err(fs_info, "zoned: data %s needs stripe_root",
+ btrfs_bg_type_to_raid_name(map->type));
+ return -EINVAL;
+
+ }
+
+ for (int i = 0; i < map->num_stripes; i++) {
+ if (zone_info[i].alloc_offset == WP_MISSING_DEV ||
+ zone_info[i].alloc_offset == WP_CONVENTIONAL)
+ continue;
+
+ if (test_bit(0, active) != test_bit(i, active)) {
+ if (!btrfs_zone_activate(bg))
+ return -EIO;
+ } else {
+ if (test_bit(0, active))
+ set_bit(BLOCK_GROUP_FLAG_ZONE_IS_ACTIVE,
+ &bg->runtime_flags);
+ }
+
+ if ((i % map->sub_stripes) == 0) {
+ bg->zone_capacity += zone_info[i].capacity;
+ bg->alloc_offset += zone_info[i].alloc_offset;
+ }
+ }
+
+ return 0;
+}
+
int btrfs_load_block_group_zone_info(struct btrfs_block_group *cache, bool new)
{
struct btrfs_fs_info *fs_info = cache->fs_info;
@@ -1525,11 +1654,18 @@ int btrfs_load_block_group_zone_info(struct btrfs_block_group *cache, bool new)
ret = btrfs_load_block_group_dup(cache, map, zone_info, active);
break;
case BTRFS_BLOCK_GROUP_RAID1:
+ case BTRFS_BLOCK_GROUP_RAID1C3:
+ case BTRFS_BLOCK_GROUP_RAID1C4:
+ ret = btrfs_load_block_group_raid1(cache, map, zone_info, active);
+ break;
case BTRFS_BLOCK_GROUP_RAID0:
+ ret = btrfs_load_block_group_raid0(cache, map, zone_info, active);
+ break;
case BTRFS_BLOCK_GROUP_RAID10:
+ ret = btrfs_load_block_group_raid10(cache, map, zone_info, active);
+ break;
case BTRFS_BLOCK_GROUP_RAID5:
case BTRFS_BLOCK_GROUP_RAID6:
- /* non-single profiles are not supported yet */
default:
btrfs_err(fs_info, "zoned: profile %s not yet supported",
btrfs_bg_type_to_raid_name(map->type));

--
2.41.0

2023-09-15 01:16:57

by David Sterba

Subject: Re: [PATCH v9 03/11] btrfs: add support for inserting raid stripe extents

On Thu, Sep 14, 2023 at 09:06:58AM -0700, Johannes Thumshirn wrote:
> Add support for inserting stripe extents into the raid stripe tree on
> completion of every write that needs an extra logical-to-physical
> translation when using RAID.
>
> Inserting the stripe extents happens after the data I/O has completed,
> this is done to a) support zone-append and b) rule out the possibility of
> a RAID-write-hole.
>
> Signed-off-by: Johannes Thumshirn <[email protected]>
> ---
> fs/btrfs/Makefile | 2 +-
> fs/btrfs/bio.c | 23 +++++
> fs/btrfs/extent-tree.c | 1 +
> fs/btrfs/inode.c | 8 +-
> fs/btrfs/ordered-data.c | 1 +
> fs/btrfs/ordered-data.h | 2 +
> fs/btrfs/raid-stripe-tree.c | 245 ++++++++++++++++++++++++++++++++++++++++++++
> fs/btrfs/raid-stripe-tree.h | 34 ++++++
> fs/btrfs/volumes.c | 4 +-
> fs/btrfs/volumes.h | 15 +--
> 10 files changed, 326 insertions(+), 9 deletions(-)
>
> diff --git a/fs/btrfs/Makefile b/fs/btrfs/Makefile
> index c57d80729d4f..525af975f61c 100644
> --- a/fs/btrfs/Makefile
> +++ b/fs/btrfs/Makefile
> @@ -33,7 +33,7 @@ btrfs-y += super.o ctree.o extent-tree.o print-tree.o root-tree.o dir-item.o \
> uuid-tree.o props.o free-space-tree.o tree-checker.o space-info.o \
> block-rsv.o delalloc-space.o block-group.o discard.o reflink.o \
> subpage.o tree-mod-log.o extent-io-tree.o fs.o messages.o bio.o \
> - lru_cache.o
> + lru_cache.o raid-stripe-tree.o
>
> btrfs-$(CONFIG_BTRFS_FS_POSIX_ACL) += acl.o
> btrfs-$(CONFIG_BTRFS_FS_REF_VERIFY) += ref-verify.o
> diff --git a/fs/btrfs/bio.c b/fs/btrfs/bio.c
> index 31ff36990404..ddbe6f8d4ea2 100644
> --- a/fs/btrfs/bio.c
> +++ b/fs/btrfs/bio.c
> @@ -14,6 +14,7 @@
> #include "rcu-string.h"
> #include "zoned.h"
> #include "file-item.h"
> +#include "raid-stripe-tree.h"
>
> static struct bio_set btrfs_bioset;
> static struct bio_set btrfs_clone_bioset;
> @@ -415,6 +416,9 @@ static void btrfs_orig_write_end_io(struct bio *bio)
> else
> bio->bi_status = BLK_STS_OK;
>
> + if (bio_op(bio) == REQ_OP_ZONE_APPEND && !bio->bi_status)
> + stripe->physical = bio->bi_iter.bi_sector << SECTOR_SHIFT;
> +
> btrfs_orig_bbio_end_io(bbio);
> btrfs_put_bioc(bioc);
> }
> @@ -426,6 +430,8 @@ static void btrfs_clone_write_end_io(struct bio *bio)
> if (bio->bi_status) {
> atomic_inc(&stripe->bioc->error);
> btrfs_log_dev_io_error(bio, stripe->dev);
> + } else if (bio_op(bio) == REQ_OP_ZONE_APPEND) {
> + stripe->physical = bio->bi_iter.bi_sector << SECTOR_SHIFT;
> }
>
> /* Pass on control to the original bio this one was cloned from */
> @@ -487,6 +493,7 @@ static void btrfs_submit_mirrored_bio(struct btrfs_io_context *bioc, int dev_nr)
> bio->bi_private = &bioc->stripes[dev_nr];
> bio->bi_iter.bi_sector = bioc->stripes[dev_nr].physical >> SECTOR_SHIFT;
> bioc->stripes[dev_nr].bioc = bioc;
> + bioc->size = bio->bi_iter.bi_size;
> btrfs_submit_dev_bio(bioc->stripes[dev_nr].dev, bio);
> }
>
> @@ -496,6 +503,8 @@ static void __btrfs_submit_bio(struct bio *bio, struct btrfs_io_context *bioc,
> if (!bioc) {
> /* Single mirror read/write fast path. */
> btrfs_bio(bio)->mirror_num = mirror_num;
> + if (bio_op(bio) != REQ_OP_READ)
> + btrfs_bio(bio)->orig_physical = smap->physical;
> bio->bi_iter.bi_sector = smap->physical >> SECTOR_SHIFT;
> if (bio_op(bio) != REQ_OP_READ)
> btrfs_bio(bio)->orig_physical = smap->physical;
> @@ -688,6 +697,20 @@ static bool btrfs_submit_chunk(struct btrfs_bio *bbio, int mirror_num)
> bio->bi_opf |= REQ_OP_ZONE_APPEND;
> }
>
> + if (is_data_bbio(bbio) && bioc &&
> + btrfs_need_stripe_tree_update(bioc->fs_info,
> + bioc->map_type)) {
> + /*
> + * No locking for the list update, as we only add to
> + * the list in the I/O submission path, and list
> + * iteration only happens in the completion path,
> + * which can't happen until after the last submission.
> + */
> + btrfs_get_bioc(bioc);
> + list_add_tail(&bioc->ordered_entry,
> + &bbio->ordered->bioc_list);
> + }
> +
> /*
> * Csum items for reloc roots have already been cloned at this
> * point, so they are handled as part of the no-checksum case.
> diff --git a/fs/btrfs/extent-tree.c b/fs/btrfs/extent-tree.c
> index cb12bfb047e7..959d7449ea0d 100644
> --- a/fs/btrfs/extent-tree.c
> +++ b/fs/btrfs/extent-tree.c
> @@ -42,6 +42,7 @@
> #include "file-item.h"
> #include "orphan.h"
> #include "tree-checker.h"
> +#include "raid-stripe-tree.h"
>
> #undef SCRAMBLE_DELAYED_REFS
>
> diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
> index e02a5ba5b533..b5e0ed3a36f7 100644
> --- a/fs/btrfs/inode.c
> +++ b/fs/btrfs/inode.c
> @@ -71,6 +71,7 @@
> #include "super.h"
> #include "orphan.h"
> #include "backref.h"
> +#include "raid-stripe-tree.h"
>
> struct btrfs_iget_args {
> u64 ino;
> @@ -3091,6 +3092,10 @@ int btrfs_finish_one_ordered(struct btrfs_ordered_extent *ordered_extent)
>
> trans->block_rsv = &inode->block_rsv;
>
> + ret = btrfs_insert_raid_extent(trans, ordered_extent);
> + if (ret)
> + goto out;
> +
> if (test_bit(BTRFS_ORDERED_COMPRESSED, &ordered_extent->flags))
> compress_type = ordered_extent->compress_type;
> if (test_bit(BTRFS_ORDERED_PREALLOC, &ordered_extent->flags)) {
> @@ -3224,7 +3229,8 @@ int btrfs_finish_one_ordered(struct btrfs_ordered_extent *ordered_extent)
> int btrfs_finish_ordered_io(struct btrfs_ordered_extent *ordered)
> {
> if (btrfs_is_zoned(btrfs_sb(ordered->inode->i_sb)) &&
> - !test_bit(BTRFS_ORDERED_IOERR, &ordered->flags))
> + !test_bit(BTRFS_ORDERED_IOERR, &ordered->flags) &&
> + list_empty(&ordered->bioc_list))
> btrfs_finish_ordered_zoned(ordered);
> return btrfs_finish_one_ordered(ordered);
> }
> diff --git a/fs/btrfs/ordered-data.c b/fs/btrfs/ordered-data.c
> index 345c449d588c..55c7d5543265 100644
> --- a/fs/btrfs/ordered-data.c
> +++ b/fs/btrfs/ordered-data.c
> @@ -191,6 +191,7 @@ static struct btrfs_ordered_extent *alloc_ordered_extent(
> INIT_LIST_HEAD(&entry->log_list);
> INIT_LIST_HEAD(&entry->root_extent_list);
> INIT_LIST_HEAD(&entry->work_list);
> + INIT_LIST_HEAD(&entry->bioc_list);
> init_completion(&entry->completion);
>
> /*
> diff --git a/fs/btrfs/ordered-data.h b/fs/btrfs/ordered-data.h
> index 173bd5c5df26..1c51ac57e5df 100644
> --- a/fs/btrfs/ordered-data.h
> +++ b/fs/btrfs/ordered-data.h
> @@ -151,6 +151,8 @@ struct btrfs_ordered_extent {
> struct completion completion;
> struct btrfs_work flush_work;
> struct list_head work_list;
> +
> + struct list_head bioc_list;
> };
>
> static inline void
> diff --git a/fs/btrfs/raid-stripe-tree.c b/fs/btrfs/raid-stripe-tree.c
> new file mode 100644
> index 000000000000..7cdcc45a8796
> --- /dev/null
> +++ b/fs/btrfs/raid-stripe-tree.c
> @@ -0,0 +1,245 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Copyright (C) 2023 Western Digital Corporation or its affiliates.
> + */
> +
> +#include <linux/btrfs_tree.h>
> +
> +#include "ctree.h"
> +#include "fs.h"
> +#include "accessors.h"
> +#include "transaction.h"
> +#include "disk-io.h"
> +#include "raid-stripe-tree.h"
> +#include "volumes.h"
> +#include "misc.h"
> +#include "print-tree.h"
> +
> +static int btrfs_insert_one_raid_extent(struct btrfs_trans_handle *trans,
> + int num_stripes,
> + struct btrfs_io_context *bioc)
> +{
> + struct btrfs_fs_info *fs_info = trans->fs_info;
> + struct btrfs_key stripe_key;
> + struct btrfs_root *stripe_root = fs_info->stripe_root;
> + u8 encoding = btrfs_bg_flags_to_raid_index(bioc->map_type);
> + struct btrfs_stripe_extent *stripe_extent;
> + const size_t item_size = struct_size(stripe_extent, strides, num_stripes);
> + int ret;
> +
> + stripe_extent = kzalloc(item_size, GFP_NOFS);
> + if (!stripe_extent) {
> + btrfs_abort_transaction(trans, -ENOMEM);
> + btrfs_end_transaction(trans);
> + return -ENOMEM;
> + }
> +
> + btrfs_set_stack_stripe_extent_encoding(stripe_extent, encoding);
> + for (int i = 0; i < num_stripes; i++) {
> + u64 devid = bioc->stripes[i].dev->devid;
> + u64 physical = bioc->stripes[i].physical;
> + u64 length = bioc->stripes[i].length;
> + struct btrfs_raid_stride *raid_stride =
> + &stripe_extent->strides[i];
> +
> + if (length == 0)
> + length = bioc->size;
> +
> + btrfs_set_stack_raid_stride_devid(raid_stride, devid);
> + btrfs_set_stack_raid_stride_physical(raid_stride, physical);
> + btrfs_set_stack_raid_stride_length(raid_stride, length);
> + }
> +
> + stripe_key.objectid = bioc->logical;
> + stripe_key.type = BTRFS_RAID_STRIPE_KEY;
> + stripe_key.offset = bioc->size;
> +
> + ret = btrfs_insert_item(trans, stripe_root, &stripe_key, stripe_extent,
> + item_size);
> + if (ret)
> + btrfs_abort_transaction(trans, ret);
> +
> + kfree(stripe_extent);
> +
> + return ret;
> +}
> +
> +static int btrfs_insert_mirrored_raid_extents(struct btrfs_trans_handle *trans,
> + struct btrfs_ordered_extent *ordered,
> + u64 map_type)
> +{
> + int num_stripes = btrfs_bg_type_to_factor(map_type);
> + struct btrfs_io_context *bioc;
> + int ret;
> +
> + list_for_each_entry(bioc, &ordered->bioc_list, ordered_entry) {
> + ret = btrfs_insert_one_raid_extent(trans, num_stripes, bioc);
> + if (ret)
> + return ret;
> + }
> +
> + return 0;
> +}
> +
> +static int btrfs_insert_striped_mirrored_raid_extents(
> + struct btrfs_trans_handle *trans,
> + struct btrfs_ordered_extent *ordered,
> + u64 map_type)
> +{
> + struct btrfs_io_context *bioc;
> + struct btrfs_io_context *rbioc;
> + const int nstripes = list_count_nodes(&ordered->bioc_list);
> + const int index = btrfs_bg_flags_to_raid_index(map_type);
> + const int substripes = btrfs_raid_array[index].sub_stripes;
> + const int max_stripes =
> + trans->fs_info->fs_devices->rw_devices / substripes;

This will probably warn due to u64/u32 division.
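
For illustration only, one way to sidestep the 64-bit division (assuming
div_u64() from <linux/math64.h> is acceptable here) would be:

	const u64 max_stripes = div_u64(trans->fs_info->fs_devices->rw_devices,
					substripes);

div_u64() takes a u64 dividend and a u32 divisor, so no __udivdi3 call is
emitted on 32-bit builds.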

2023-09-15 04:00:15

by David Sterba

Subject: Re: [PATCH v9 00/11] btrfs: introduce RAID stripe tree

On Thu, Sep 14, 2023 at 09:06:55AM -0700, Johannes Thumshirn wrote:
> Updates of the raid-stripe-tree are done at ordered extent write time to safe
> on bandwidth while for reading we do the stripe-tree lookup on bio mapping
> time, i.e. when the logical to physical translation happens for regular btrfs
> RAID as well.
>
> The stripe tree is keyed by an extent's disk_bytenr and disk_num_bytes and
> it's contents are the respective physical device id and position.
>
> For an example 1M write (split into 126K segments due to zone-append)
> rapido2:/home/johannes/src/fstests# xfs_io -fdc "pwrite -b 1M 0 1M" -c fsync /mnt/test/test
> wrote 1048576/1048576 bytes at offset 0
> 1 MiB, 1 ops; 0.0065 sec (151.538 MiB/sec and 151.5381 ops/sec)
>
> The tree will look as follows (both 128k buffered writes to a ZNS drive):
>
> RAID0 case:
> bash-5.2# btrfs inspect-internal dump-tree -t raid_stripe /dev/nvme0n1
> btrfs-progs v6.3
> raid stripe tree key (RAID_STRIPE_TREE ROOT_ITEM 0)
> leaf 805535744 items 1 free space 16218 generation 8 owner RAID_STRIPE_TREE
> leaf 805535744 flags 0x1(WRITTEN) backref revision 1
> checksum stored 2d2d2262
> checksum calced 2d2d2262
> fs uuid ab05cfc6-9859-404e-970d-3999b1cb5438
> chunk uuid c9470ba2-49ac-4d46-8856-438a18e6bd23
> item 0 key (1073741824 RAID_STRIPE_KEY 131072) itemoff 16243 itemsize 56
> encoding: RAID0
> stripe 0 devid 1 offset 805306368 length 131072
> stripe 1 devid 2 offset 536870912 length 131072
> total bytes 42949672960
> bytes used 294912
> uuid ab05cfc6-9859-404e-970d-3999b1cb5438
>
> RAID1 case:
> bash-5.2# btrfs inspect-internal dump-tree -t raid_stripe /dev/nvme0n1
> btrfs-progs v6.3
> raid stripe tree key (RAID_STRIPE_TREE ROOT_ITEM 0)
> leaf 805535744 items 1 free space 16218 generation 8 owner RAID_STRIPE_TREE
> leaf 805535744 flags 0x1(WRITTEN) backref revision 1
> checksum stored 56199539
> checksum calced 56199539
> fs uuid 9e693a37-fbd1-4891-aed2-e7fe64605045
> chunk uuid 691874fc-1b9c-469b-bd7f-05e0e6ba88c4
> item 0 key (939524096 RAID_STRIPE_KEY 131072) itemoff 16243 itemsize 56
> encoding: RAID1
> stripe 0 devid 1 offset 939524096 length 65536
> stripe 1 devid 2 offset 536870912 length 65536
> total bytes 42949672960
> bytes used 294912
> uuid 9e693a37-fbd1-4891-aed2-e7fe64605045
>
> A design document can be found here:
> https://docs.google.com/document/d/1Iui_jMidCd4MVBNSSLXRfO7p5KmvnoQL/edit?usp=sharing&ouid=103609947580185458266&rtpof=true&sd=true

Please also turn it into a developer documentation file (in
btrfs-progs/Documentation/dev); it can follow the same structure.

>
> The user-space part of this series can be found here:
> https://lore.kernel.org/linux-btrfs/[email protected]
>
> Changes to v8:
> - Changed tracepoints according to David's comments
> - Mark on-disk structures as packed
> - Got rid of __DECLARE_FLEX_ARRAY
> - Rebase onto misc-next
> - Split out helpers for new btrfs_load_block_group_zone_info RAID cases
> - Constify declarations where possible
> - Initialise variables before use
> - Lower scope of variables
> - Remove btrfs_stripe_root() helper
> - Pick different BTRFS_RAID_STRIPE_KEY constant
> - Reorder on-disk encoding types to match the raid_index
> - And possibly more, please git range-diff the versions
> - Link to v8: https://lore.kernel.org/r/[email protected]

v9 will be added as a topic branch to for-next. I did several style
changes, so please send any updates as incrementals if needed.

2023-09-15 06:07:40

by David Sterba

Subject: Re: [PATCH v9 07/11] btrfs: zoned: allow zoned RAID

On Thu, Sep 14, 2023 at 09:07:02AM -0700, Johannes Thumshirn wrote:
> --- a/fs/btrfs/zoned.c
> +++ b/fs/btrfs/zoned.c
> @@ -1397,9 +1397,11 @@ static int btrfs_load_block_group_dup(struct btrfs_block_group *bg,
> struct zone_info *zone_info,
> unsigned long *active)
> {
> - if (map->type & BTRFS_BLOCK_GROUP_DATA) {
> - btrfs_err(bg->fs_info,
> - "zoned: profile DUP not yet supported on data bg");
> + struct btrfs_fs_info *fs_info = bg->fs_info;
> +
> + if (map->type & BTRFS_BLOCK_GROUP_DATA &&
> + !fs_info->stripe_root) {
> + btrfs_err(fs_info, "zoned: data DUP profile needs stripe_root");

Using stripe_root as an identifier is OK so we don't have overly long ones,
but for user messages please use raid-stripe-tree. Fixed.

2023-09-15 10:18:51

by Geert Uytterhoeven

Subject: Re: [PATCH v9 03/11] btrfs: add support for inserting raid stripe extents

Hi David,

On Thu, 14 Sep 2023, David Sterba wrote:
> On Thu, Sep 14, 2023 at 09:06:58AM -0700, Johannes Thumshirn wrote:
>> Add support for inserting stripe extents into the raid stripe tree on
>> completion of every write that needs an extra logical-to-physical
>> translation when using RAID.
>>
>> Inserting the stripe extents happens after the data I/O has completed,
>> this is done to a) support zone-append and b) rule out the possibility of
>> a RAID-write-hole.
>>
>> Signed-off-by: Johannes Thumshirn <[email protected]>

>> --- /dev/null
>> +++ b/fs/btrfs/raid-stripe-tree.c
>> +static int btrfs_insert_striped_mirrored_raid_extents(
>> + struct btrfs_trans_handle *trans,
>> + struct btrfs_ordered_extent *ordered,
>> + u64 map_type)
>> +{
>> + struct btrfs_io_context *bioc;
>> + struct btrfs_io_context *rbioc;
>> + const int nstripes = list_count_nodes(&ordered->bioc_list);
>> + const int index = btrfs_bg_flags_to_raid_index(map_type);
>> + const int substripes = btrfs_raid_array[index].sub_stripes;
>> + const int max_stripes =
>> + trans->fs_info->fs_devices->rw_devices / substripes;
>
> This will probably warn due to u64/u32 division.

Worse, it causes link failures in linux-next, as e.g. reported by
[email protected]:

ERROR: modpost: "__udivdi3" [fs/btrfs/btrfs.ko] undefined!

So despite being aware of the issue, you still queued it?

The use of "int" for almost all variables is also a red flag:
- list_count_nodes() returns size_t,
- btrfs_bg_flags_to_raid_index() returns an enum.
- btrfs_raid_array[index].sub_stripes is u8,
- The result of the division may not fit in 32-bit.

Thanks for fixing, soon! ;-)
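
For illustration, a sketch of how the declarations could look with the types
above (just one possibility, not the actual fix):

	const size_t nstripes = list_count_nodes(&ordered->bioc_list);
	const enum btrfs_raid_types index = btrfs_bg_flags_to_raid_index(map_type);
	const u8 substripes = btrfs_raid_array[index].sub_stripes;
	const u64 max_stripes = div_u64(trans->fs_info->fs_devices->rw_devices,
					substripes);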

Gr{oetje,eeting}s,

Geert

--
Geert Uytterhoeven -- There's lots of Linux beyond ia32 -- [email protected]

In personal conversations with technical people, I call myself a hacker. But
when I'm talking to journalists I just say "programmer" or something like that.
-- Linus Torvalds

2023-09-15 11:44:14

by Qu Wenruo

Subject: Re: [PATCH v9 03/11] btrfs: add support for inserting raid stripe extents



On 2023/9/15 01:36, Johannes Thumshirn wrote:
> Add support for inserting stripe extents into the raid stripe tree on
> completion of every write that needs an extra logical-to-physical
> translation when using RAID.
>
> Inserting the stripe extents happens after the data I/O has completed,
> this is done to a) support zone-append and b) rule out the possibility of
> a RAID-write-hole.
>
> Signed-off-by: Johannes Thumshirn <[email protected]>
> ---
> fs/btrfs/Makefile | 2 +-
> fs/btrfs/bio.c | 23 +++++
> fs/btrfs/extent-tree.c | 1 +
> fs/btrfs/inode.c | 8 +-
> fs/btrfs/ordered-data.c | 1 +
> fs/btrfs/ordered-data.h | 2 +
> fs/btrfs/raid-stripe-tree.c | 245 ++++++++++++++++++++++++++++++++++++++++++++
> fs/btrfs/raid-stripe-tree.h | 34 ++++++
> fs/btrfs/volumes.c | 4 +-
> fs/btrfs/volumes.h | 15 +--
> 10 files changed, 326 insertions(+), 9 deletions(-)
>
> diff --git a/fs/btrfs/Makefile b/fs/btrfs/Makefile
> index c57d80729d4f..525af975f61c 100644
> --- a/fs/btrfs/Makefile
> +++ b/fs/btrfs/Makefile
> @@ -33,7 +33,7 @@ btrfs-y += super.o ctree.o extent-tree.o print-tree.o root-tree.o dir-item.o \
> uuid-tree.o props.o free-space-tree.o tree-checker.o space-info.o \
> block-rsv.o delalloc-space.o block-group.o discard.o reflink.o \
> subpage.o tree-mod-log.o extent-io-tree.o fs.o messages.o bio.o \
> - lru_cache.o
> + lru_cache.o raid-stripe-tree.o
>
> btrfs-$(CONFIG_BTRFS_FS_POSIX_ACL) += acl.o
> btrfs-$(CONFIG_BTRFS_FS_REF_VERIFY) += ref-verify.o
> diff --git a/fs/btrfs/bio.c b/fs/btrfs/bio.c
> index 31ff36990404..ddbe6f8d4ea2 100644
> --- a/fs/btrfs/bio.c
> +++ b/fs/btrfs/bio.c
> @@ -14,6 +14,7 @@
> #include "rcu-string.h"
> #include "zoned.h"
> #include "file-item.h"
> +#include "raid-stripe-tree.h"
>
> static struct bio_set btrfs_bioset;
> static struct bio_set btrfs_clone_bioset;
> @@ -415,6 +416,9 @@ static void btrfs_orig_write_end_io(struct bio *bio)
> else
> bio->bi_status = BLK_STS_OK;
>
> + if (bio_op(bio) == REQ_OP_ZONE_APPEND && !bio->bi_status)
> + stripe->physical = bio->bi_iter.bi_sector << SECTOR_SHIFT;
> +
> btrfs_orig_bbio_end_io(bbio);
> btrfs_put_bioc(bioc);
> }
> @@ -426,6 +430,8 @@ static void btrfs_clone_write_end_io(struct bio *bio)
> if (bio->bi_status) {
> atomic_inc(&stripe->bioc->error);
> btrfs_log_dev_io_error(bio, stripe->dev);
> + } else if (bio_op(bio) == REQ_OP_ZONE_APPEND) {
> + stripe->physical = bio->bi_iter.bi_sector << SECTOR_SHIFT;
> }
>
> /* Pass on control to the original bio this one was cloned from */
> @@ -487,6 +493,7 @@ static void btrfs_submit_mirrored_bio(struct btrfs_io_context *bioc, int dev_nr)
> bio->bi_private = &bioc->stripes[dev_nr];
> bio->bi_iter.bi_sector = bioc->stripes[dev_nr].physical >> SECTOR_SHIFT;
> bioc->stripes[dev_nr].bioc = bioc;
> + bioc->size = bio->bi_iter.bi_size;
> btrfs_submit_dev_bio(bioc->stripes[dev_nr].dev, bio);
> }
>
> @@ -496,6 +503,8 @@ static void __btrfs_submit_bio(struct bio *bio, struct btrfs_io_context *bioc,
> if (!bioc) {
> /* Single mirror read/write fast path. */
> btrfs_bio(bio)->mirror_num = mirror_num;
> + if (bio_op(bio) != REQ_OP_READ)
> + btrfs_bio(bio)->orig_physical = smap->physical;
> bio->bi_iter.bi_sector = smap->physical >> SECTOR_SHIFT;
> if (bio_op(bio) != REQ_OP_READ)
> btrfs_bio(bio)->orig_physical = smap->physical;
> @@ -688,6 +697,20 @@ static bool btrfs_submit_chunk(struct btrfs_bio *bbio, int mirror_num)
> bio->bi_opf |= REQ_OP_ZONE_APPEND;
> }
>
> + if (is_data_bbio(bbio) && bioc &&
> + btrfs_need_stripe_tree_update(bioc->fs_info,
> + bioc->map_type)) {
> + /*
> + * No locking for the list update, as we only add to
> + * the list in the I/O submission path, and list
> + * iteration only happens in the completion path,
> + * which can't happen until after the last submission.
> + */
> + btrfs_get_bioc(bioc);
> + list_add_tail(&bioc->ordered_entry,
> + &bbio->ordered->bioc_list);
> + }
> +
> /*
> * Csum items for reloc roots have already been cloned at this
> * point, so they are handled as part of the no-checksum case.
> diff --git a/fs/btrfs/extent-tree.c b/fs/btrfs/extent-tree.c
> index cb12bfb047e7..959d7449ea0d 100644
> --- a/fs/btrfs/extent-tree.c
> +++ b/fs/btrfs/extent-tree.c
> @@ -42,6 +42,7 @@
> #include "file-item.h"
> #include "orphan.h"
> #include "tree-checker.h"
> +#include "raid-stripe-tree.h"
>
> #undef SCRAMBLE_DELAYED_REFS
>
> diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
> index e02a5ba5b533..b5e0ed3a36f7 100644
> --- a/fs/btrfs/inode.c
> +++ b/fs/btrfs/inode.c
> @@ -71,6 +71,7 @@
> #include "super.h"
> #include "orphan.h"
> #include "backref.h"
> +#include "raid-stripe-tree.h"
>
> struct btrfs_iget_args {
> u64 ino;
> @@ -3091,6 +3092,10 @@ int btrfs_finish_one_ordered(struct btrfs_ordered_extent *ordered_extent)
>
> trans->block_rsv = &inode->block_rsv;
>
> + ret = btrfs_insert_raid_extent(trans, ordered_extent);
> + if (ret)
> + goto out;
> +
> if (test_bit(BTRFS_ORDERED_COMPRESSED, &ordered_extent->flags))
> compress_type = ordered_extent->compress_type;
> if (test_bit(BTRFS_ORDERED_PREALLOC, &ordered_extent->flags)) {
> @@ -3224,7 +3229,8 @@ int btrfs_finish_one_ordered(struct btrfs_ordered_extent *ordered_extent)
> int btrfs_finish_ordered_io(struct btrfs_ordered_extent *ordered)
> {
> if (btrfs_is_zoned(btrfs_sb(ordered->inode->i_sb)) &&
> - !test_bit(BTRFS_ORDERED_IOERR, &ordered->flags))
> + !test_bit(BTRFS_ORDERED_IOERR, &ordered->flags) &&
> + list_empty(&ordered->bioc_list))
> btrfs_finish_ordered_zoned(ordered);
> return btrfs_finish_one_ordered(ordered);
> }
> diff --git a/fs/btrfs/ordered-data.c b/fs/btrfs/ordered-data.c
> index 345c449d588c..55c7d5543265 100644
> --- a/fs/btrfs/ordered-data.c
> +++ b/fs/btrfs/ordered-data.c
> @@ -191,6 +191,7 @@ static struct btrfs_ordered_extent *alloc_ordered_extent(
> INIT_LIST_HEAD(&entry->log_list);
> INIT_LIST_HEAD(&entry->root_extent_list);
> INIT_LIST_HEAD(&entry->work_list);
> + INIT_LIST_HEAD(&entry->bioc_list);
> init_completion(&entry->completion);
>
> /*
> diff --git a/fs/btrfs/ordered-data.h b/fs/btrfs/ordered-data.h
> index 173bd5c5df26..1c51ac57e5df 100644
> --- a/fs/btrfs/ordered-data.h
> +++ b/fs/btrfs/ordered-data.h
> @@ -151,6 +151,8 @@ struct btrfs_ordered_extent {
> struct completion completion;
> struct btrfs_work flush_work;
> struct list_head work_list;
> +
> + struct list_head bioc_list;
> };
>
> static inline void
> diff --git a/fs/btrfs/raid-stripe-tree.c b/fs/btrfs/raid-stripe-tree.c
> new file mode 100644
> index 000000000000..7cdcc45a8796
> --- /dev/null
> +++ b/fs/btrfs/raid-stripe-tree.c
> @@ -0,0 +1,245 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Copyright (C) 2023 Western Digital Corporation or its affiliates.
> + */
> +
> +#include <linux/btrfs_tree.h>
> +
> +#include "ctree.h"
> +#include "fs.h"
> +#include "accessors.h"
> +#include "transaction.h"
> +#include "disk-io.h"
> +#include "raid-stripe-tree.h"
> +#include "volumes.h"
> +#include "misc.h"
> +#include "print-tree.h"
> +
> +static int btrfs_insert_one_raid_extent(struct btrfs_trans_handle *trans,
> + int num_stripes,
> + struct btrfs_io_context *bioc)
> +{
> + struct btrfs_fs_info *fs_info = trans->fs_info;
> + struct btrfs_key stripe_key;
> + struct btrfs_root *stripe_root = fs_info->stripe_root;
> + u8 encoding = btrfs_bg_flags_to_raid_index(bioc->map_type);
> + struct btrfs_stripe_extent *stripe_extent;
> + const size_t item_size = struct_size(stripe_extent, strides, num_stripes);
> + int ret;
> +
> + stripe_extent = kzalloc(item_size, GFP_NOFS);
> + if (!stripe_extent) {
> + btrfs_abort_transaction(trans, -ENOMEM);
> + btrfs_end_transaction(trans);
> + return -ENOMEM;
> + }
> +
> + btrfs_set_stack_stripe_extent_encoding(stripe_extent, encoding);
> + for (int i = 0; i < num_stripes; i++) {
> + u64 devid = bioc->stripes[i].dev->devid;
> + u64 physical = bioc->stripes[i].physical;
> + u64 length = bioc->stripes[i].length;
> + struct btrfs_raid_stride *raid_stride =
> + &stripe_extent->strides[i];
> +
> + if (length == 0)
> + length = bioc->size;
> +
> + btrfs_set_stack_raid_stride_devid(raid_stride, devid);
> + btrfs_set_stack_raid_stride_physical(raid_stride, physical);
> + btrfs_set_stack_raid_stride_length(raid_stride, length);
> + }
> +
> + stripe_key.objectid = bioc->logical;
> + stripe_key.type = BTRFS_RAID_STRIPE_KEY;
> + stripe_key.offset = bioc->size;
> +
> + ret = btrfs_insert_item(trans, stripe_root, &stripe_key, stripe_extent,
> + item_size);
> + if (ret)
> + btrfs_abort_transaction(trans, ret);
> +
> + kfree(stripe_extent);
> +
> + return ret;
> +}
> +
> +static int btrfs_insert_mirrored_raid_extents(struct btrfs_trans_handle *trans,
> + struct btrfs_ordered_extent *ordered,
> + u64 map_type)
> +{
> + int num_stripes = btrfs_bg_type_to_factor(map_type);
> + struct btrfs_io_context *bioc;
> + int ret;
> +
> + list_for_each_entry(bioc, &ordered->bioc_list, ordered_entry) {
> + ret = btrfs_insert_one_raid_extent(trans, num_stripes, bioc);
> + if (ret)
> + return ret;
> + }
> +
> + return 0;
> +}
> +
> +static int btrfs_insert_striped_mirrored_raid_extents(
> + struct btrfs_trans_handle *trans,
> + struct btrfs_ordered_extent *ordered,
> + u64 map_type)
> +{
> + struct btrfs_io_context *bioc;
> + struct btrfs_io_context *rbioc;
> + const int nstripes = list_count_nodes(&ordered->bioc_list);
> + const int index = btrfs_bg_flags_to_raid_index(map_type);
> + const int substripes = btrfs_raid_array[index].sub_stripes;
> + const int max_stripes =
> + trans->fs_info->fs_devices->rw_devices / substripes;
> + int left = nstripes;
> + int i;
> + int ret = 0;
> + u64 stripe_end;
> + u64 prev_end;
> +
> + if (nstripes == 1)
> + return btrfs_insert_mirrored_raid_extents(trans, ordered, map_type);
> +
> + rbioc = kzalloc(struct_size(rbioc, stripes, nstripes * substripes),
> + GFP_NOFS);
> + if (!rbioc)
> + return -ENOMEM;
> +
> + rbioc->map_type = map_type;
> + rbioc->logical = list_first_entry(&ordered->bioc_list, typeof(*rbioc),
> + ordered_entry)->logical;
> +
> + stripe_end = rbioc->logical;
> + prev_end = stripe_end;
> + i = 0;
> + list_for_each_entry(bioc, &ordered->bioc_list, ordered_entry) {
> +
> + rbioc->size += bioc->size;
> + for (int j = 0; j < substripes; j++) {
> + int stripe = i + j;
> + rbioc->stripes[stripe].dev = bioc->stripes[j].dev;
> + rbioc->stripes[stripe].physical = bioc->stripes[j].physical;
> + rbioc->stripes[stripe].length = bioc->size;
> + }
> +
> + stripe_end += rbioc->size;
> + if (i >= nstripes ||
> + (stripe_end - prev_end >= max_stripes * BTRFS_STRIPE_LEN)) {
> + ret = btrfs_insert_one_raid_extent(trans,
> + nstripes * substripes,
> + rbioc);
> + if (ret)
> + goto out;
> +
> + left -= nstripes;
> + i = 0;
> + rbioc->logical += rbioc->size;
> + rbioc->size = 0;
> + } else {
> + i += substripes;
> + prev_end = stripe_end;
> + }
> + }
> +
> + if (left) {
> + bioc = list_prev_entry(bioc, ordered_entry);
> + ret = btrfs_insert_one_raid_extent(trans, substripes, bioc);
> + }
> +
> +out:
> + kfree(rbioc);
> + return ret;
> +}
> +
> +static int btrfs_insert_striped_raid_extents(struct btrfs_trans_handle *trans,
> + struct btrfs_ordered_extent *ordered,
> + u64 map_type)
> +{
> + struct btrfs_io_context *bioc;
> + struct btrfs_io_context *rbioc;
> + const int nstripes = list_count_nodes(&ordered->bioc_list);
> + int i;
> + int ret = 0;
> +
> + rbioc = kzalloc(struct_size(rbioc, stripes, nstripes), GFP_NOFS);
> + if (!rbioc)
> + return -ENOMEM;
> + rbioc->map_type = map_type;
> + rbioc->logical = list_first_entry(&ordered->bioc_list, typeof(*rbioc),
> + ordered_entry)->logical;
> +
> + i = 0;
> + list_for_each_entry(bioc, &ordered->bioc_list, ordered_entry) {
> + rbioc->size += bioc->size;
> + rbioc->stripes[i].dev = bioc->stripes[0].dev;
> + rbioc->stripes[i].physical = bioc->stripes[0].physical;
> + rbioc->stripes[i].length = bioc->size;
> +
> + if (i == nstripes - 1) {
> + ret = btrfs_insert_one_raid_extent(trans, nstripes, rbioc);
> + if (ret)
> + goto out;
> +
> + i = 0;
> + rbioc->logical += rbioc->size;
> + rbioc->size = 0;
> + } else {
> + i++;
> + }
> + }
> +
> + if (i && i < nstripes - 1)
> + ret = btrfs_insert_one_raid_extent(trans, i, rbioc);
> +
> +out:
> + kfree(rbioc);
> + return ret;
> +}
> +
> +int btrfs_insert_raid_extent(struct btrfs_trans_handle *trans,
> + struct btrfs_ordered_extent *ordered_extent)
> +{
> + struct btrfs_io_context *bioc;
> + u64 map_type;
> + int ret;
> +
> + if (!trans->fs_info->stripe_root)
> + return 0;
> +
> + map_type = list_first_entry(&ordered_extent->bioc_list, typeof(*bioc),
> + ordered_entry)->map_type;
> +
> + switch (map_type & BTRFS_BLOCK_GROUP_PROFILE_MASK) {
> + case BTRFS_BLOCK_GROUP_DUP:
> + case BTRFS_BLOCK_GROUP_RAID1:
> + case BTRFS_BLOCK_GROUP_RAID1C3:
> + case BTRFS_BLOCK_GROUP_RAID1C4:
> + ret = btrfs_insert_mirrored_raid_extents(trans, ordered_extent,
> + map_type);
> + break;
> + case BTRFS_BLOCK_GROUP_RAID0:
> + ret = btrfs_insert_striped_raid_extents(trans, ordered_extent,
> + map_type);
> + break;
> + case BTRFS_BLOCK_GROUP_RAID10:
> + ret = btrfs_insert_striped_mirrored_raid_extents(trans, ordered_extent, map_type);
> + break;
> + default:
> + btrfs_err(trans->fs_info, "unknown block-group profile %lld",
> + map_type & BTRFS_BLOCK_GROUP_PROFILE_MASK);
> + ASSERT(0);
> + ret = -EINVAL;
> + break;
> + }
> +
> + while (!list_empty(&ordered_extent->bioc_list)) {
> + bioc = list_first_entry(&ordered_extent->bioc_list,
> + typeof(*bioc), ordered_entry);
> + list_del(&bioc->ordered_entry);
> + btrfs_put_bioc(bioc);
> + }
> +
> + return ret;
> +}
> diff --git a/fs/btrfs/raid-stripe-tree.h b/fs/btrfs/raid-stripe-tree.h
> new file mode 100644
> index 000000000000..884f0e99d5e8
> --- /dev/null
> +++ b/fs/btrfs/raid-stripe-tree.h
> @@ -0,0 +1,34 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * Copyright (C) 2023 Western Digital Corporation or its affiliates.
> + */
> +
> +#ifndef BTRFS_RAID_STRIPE_TREE_H
> +#define BTRFS_RAID_STRIPE_TREE_H
> +
> +struct btrfs_io_context;
> +struct btrfs_io_stripe;
> +struct btrfs_ordered_extent;
> +struct btrfs_trans_handle;
> +
> +int btrfs_insert_raid_extent(struct btrfs_trans_handle *trans,
> + struct btrfs_ordered_extent *ordered_extent);
> +
> +static inline bool btrfs_need_stripe_tree_update(struct btrfs_fs_info *fs_info,
> + u64 map_type)
> +{
> + u64 type = map_type & BTRFS_BLOCK_GROUP_TYPE_MASK;
> + u64 profile = map_type & BTRFS_BLOCK_GROUP_PROFILE_MASK;
> +
> + if (!fs_info->stripe_root)
> + return false;

I found a corner case where this can be problematic.

If we have a fs with a corrupted RST root tree node/leaf, mounted with
rescue=ibadroots, then fs_info->stripe_root would be NULL, and in the
5th patch inside set_io_stripe() we just fall back to the regular
non-RST path. This would return mostly incorrect data (and can be very
problematic for nodatacsum files).

Thus stripe_root itself is not a reliable way to determine whether we're
on the RST path; I'd say only the super block incompat flag is reliable.

And fs_info->stripe_root should only be checked in functions that do
RST tree operations, which should return -EIO if it's not initialized.
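
Roughly something like this in btrfs_need_stripe_tree_update() (just a
sketch, untested, assuming the incompat bit added later in this series
is named RAID_STRIPE_TREE):

	/*
	 * Sketch: key off the super block incompat bit instead of
	 * fs_info->stripe_root, so rescue=ibadroots cannot silently
	 * fall back to the non-RST mapping path.
	 */
	if (!btrfs_fs_incompat(fs_info, RAID_STRIPE_TREE))
		return false;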

> +
> + if (type != BTRFS_BLOCK_GROUP_DATA)
> + return false;
> +
> + if (profile & BTRFS_BLOCK_GROUP_RAID1_MASK)
> + return true;

Just a stupid question: RAID0 DATA doesn't need RST purely because it is
the same as SINGLE, so we only update the file items to the real written
logical address and need no extra mapping?

Thus only profiles with duplication rely on RST, right?
If so, then I guess DUP should also be covered by RST.
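
In that case the profile check could roughly be extended like this
(just a sketch, untested):

	if (profile & (BTRFS_BLOCK_GROUP_DUP |
		       BTRFS_BLOCK_GROUP_RAID0 |
		       BTRFS_BLOCK_GROUP_RAID10 |
		       BTRFS_BLOCK_GROUP_RAID1_MASK))
		return true;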

> +
> + return false;
> +}
> +#endif
> diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c
> index a1eae8b5b412..c2bac87912c7 100644
> --- a/fs/btrfs/volumes.c
> +++ b/fs/btrfs/volumes.c
> @@ -5984,6 +5984,7 @@ static int find_live_mirror(struct btrfs_fs_info *fs_info,
> }
>
> static struct btrfs_io_context *alloc_btrfs_io_context(struct btrfs_fs_info *fs_info,
> + u64 logical,
> u16 total_stripes)
> {
> struct btrfs_io_context *bioc;
> @@ -6003,6 +6004,7 @@ static struct btrfs_io_context *alloc_btrfs_io_context(struct btrfs_fs_info *fs_
> bioc->fs_info = fs_info;
> bioc->replace_stripe_src = -1;
> bioc->full_stripe_logical = (u64)-1;
> + bioc->logical = logical;
>
> return bioc;
> }
> @@ -6537,7 +6539,7 @@ int btrfs_map_block(struct btrfs_fs_info *fs_info, enum btrfs_map_op op,
> goto out;
> }
>
> - bioc = alloc_btrfs_io_context(fs_info, num_alloc_stripes);
> + bioc = alloc_btrfs_io_context(fs_info, logical, num_alloc_stripes);
> if (!bioc) {
> ret = -ENOMEM;
> goto out;
> diff --git a/fs/btrfs/volumes.h b/fs/btrfs/volumes.h
> index 26397adc8706..2043aff6e966 100644
> --- a/fs/btrfs/volumes.h
> +++ b/fs/btrfs/volumes.h
> @@ -390,12 +390,11 @@ struct btrfs_fs_devices {
>
> struct btrfs_io_stripe {
> struct btrfs_device *dev;
> - union {
> - /* Block mapping */
> - u64 physical;
> - /* For the endio handler */
> - struct btrfs_io_context *bioc;
> - };
> + /* Block mapping */
> + u64 physical;
> + u64 length;
> + /* For the endio handler */
> + struct btrfs_io_context *bioc;
> };
>
> struct btrfs_discard_stripe {
> @@ -428,6 +427,10 @@ struct btrfs_io_context {
> atomic_t error;
> u16 max_errors;
>
> + u64 logical;
> + u64 size;
> + struct list_head ordered_entry;

Considering this is only utilized by RST, can we rename it to something
more specific, like rst_ordered_entry?

Otherwise I'm pretty sure that just weeks later I would need to dig to
see what this list is used for.
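
I.e. something like this in struct btrfs_io_context (sketch only):

	/* Entry in the ordered extent's list of RST bio contexts. */
	struct list_head rst_ordered_entry;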

Thanks,
Qu
> +
> /*
> * The total number of stripes, including the extra duplicated
> * stripe for replace.
>

2023-09-15 17:29:38

by David Sterba

[permalink] [raw]
Subject: Re: [PATCH v9 03/11] btrfs: add support for inserting raid stripe extents

On Thu, Sep 14, 2023 at 09:06:58AM -0700, Johannes Thumshirn wrote:
> + map_type = list_first_entry(&ordered_extent->bioc_list, typeof(*bioc),
> + ordered_entry)->map_type;
> +
> + switch (map_type & BTRFS_BLOCK_GROUP_PROFILE_MASK) {
> + case BTRFS_BLOCK_GROUP_DUP:
> + case BTRFS_BLOCK_GROUP_RAID1:
> + case BTRFS_BLOCK_GROUP_RAID1C3:
> + case BTRFS_BLOCK_GROUP_RAID1C4:
> + ret = btrfs_insert_mirrored_raid_extents(trans, ordered_extent,
> + map_type);
> + break;
> + case BTRFS_BLOCK_GROUP_RAID0:
> + ret = btrfs_insert_striped_raid_extents(trans, ordered_extent,
> + map_type);
> + break;
> + case BTRFS_BLOCK_GROUP_RAID10:
> + ret = btrfs_insert_striped_mirrored_raid_extents(trans, ordered_extent, map_type);
> + break;
> + default:
> + btrfs_err(trans->fs_info, "unknown block-group profile %lld",
> + map_type & BTRFS_BLOCK_GROUP_PROFILE_MASK);
> + ASSERT(0);

Please don't use ASSERT(0); the error is handled and there's no need to
crash here.
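
I.e. just drop the ASSERT(0) and keep the error handling (sketch only):

	default:
		btrfs_err(trans->fs_info, "unknown block-group profile %lld",
			  map_type & BTRFS_BLOCK_GROUP_PROFILE_MASK);
		ret = -EINVAL;
		break;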

> + ret = -EINVAL;
> + break;
> + }

2023-09-19 20:35:43

by Johannes Thumshirn

[permalink] [raw]
Subject: Re: [PATCH v9 03/11] btrfs: add support for inserting raid stripe extents

On 15.09.23 02:55, Qu Wenruo wrote:
>
> I found a corner case where this can be problematic.
>
> If we have a fs with a corrupted RST root tree node/leaf, mounted with
> rescue=ibadroots, then fs_info->stripe_root would be NULL, and in the
> 5th patch inside set_io_stripe() we just fall back to the regular
> non-RST path. This would return mostly incorrect data (and can be very
> problematic for nodatacsum files).
>
> Thus stripe_root itself is not a reliable way to determine whether we're
> on the RST path; I'd say only the super block incompat flag is reliable.

Fixed.

>
> And fs_info->stripe_root should only be checked in functions that do
> RST tree operations, which should return -EIO if it's not initialized.



>> +
>> + if (type != BTRFS_BLOCK_GROUP_DATA)
>> + return false;
>> +
>> + if (profile & BTRFS_BLOCK_GROUP_RAID1_MASK)
>> + return true;
>
> Just a stupid question: RAID0 DATA doesn't need RST purely because it is
> the same as SINGLE, so we only update the file items to the real written
> logical address and need no extra mapping?

Yes, but there can still be discrepancies between the assumed physical
address and the real one due to ZONE_APPEND operations. RST-backed file
systems don't go through the "normal" zoned btrfs logical rewrite path
but have their own.

Also I prefer to keep the stripes together.

> Thus only profiles with duplication rely on RST, right?
> If so, then I guess DUP should also be covered by RST.
>

Later in this patch series, DUP, RAID0 and RAID10 will get added as well.

2023-09-23 02:46:42

by David Sterba

[permalink] [raw]
Subject: Re: [PATCH v9 00/11] btrfs: introduce RAID stripe tree

On Thu, Sep 14, 2023 at 08:25:34PM +0200, David Sterba wrote:
> On Thu, Sep 14, 2023 at 09:06:55AM -0700, Johannes Thumshirn wrote:
> > Updates of the raid-stripe-tree are done at ordered extent write time to safe
> > on bandwidth while for reading we do the stripe-tree lookup on bio mapping
> > time, i.e. when the logical to physical translation happens for regular btrfs
> > RAID as well.
> >
> > The stripe tree is keyed by an extent's disk_bytenr and disk_num_bytes and
> > it's contents are the respective physical device id and position.
> >
> > For an example 1M write (split into 126K segments due to zone-append)
> > rapido2:/home/johannes/src/fstests# xfs_io -fdc "pwrite -b 1M 0 1M" -c fsync /mnt/test/test
> > wrote 1048576/1048576 bytes at offset 0
> > 1 MiB, 1 ops; 0.0065 sec (151.538 MiB/sec and 151.5381 ops/sec)
> >
> > The tree will look as follows (both 128k buffered writes to a ZNS drive):
> >
> > RAID0 case:
> > bash-5.2# btrfs inspect-internal dump-tree -t raid_stripe /dev/nvme0n1
> > btrfs-progs v6.3
> > raid stripe tree key (RAID_STRIPE_TREE ROOT_ITEM 0)
> > leaf 805535744 items 1 free space 16218 generation 8 owner RAID_STRIPE_TREE
> > leaf 805535744 flags 0x1(WRITTEN) backref revision 1
> > checksum stored 2d2d2262
> > checksum calced 2d2d2262
> > fs uuid ab05cfc6-9859-404e-970d-3999b1cb5438
> > chunk uuid c9470ba2-49ac-4d46-8856-438a18e6bd23
> > item 0 key (1073741824 RAID_STRIPE_KEY 131072) itemoff 16243 itemsize 56
> > encoding: RAID0
> > stripe 0 devid 1 offset 805306368 length 131072
> > stripe 1 devid 2 offset 536870912 length 131072
> > total bytes 42949672960
> > bytes used 294912
> > uuid ab05cfc6-9859-404e-970d-3999b1cb5438
> >
> > RAID1 case:
> > bash-5.2# btrfs inspect-internal dump-tree -t raid_stripe /dev/nvme0n1
> > btrfs-progs v6.3
> > raid stripe tree key (RAID_STRIPE_TREE ROOT_ITEM 0)
> > leaf 805535744 items 1 free space 16218 generation 8 owner RAID_STRIPE_TREE
> > leaf 805535744 flags 0x1(WRITTEN) backref revision 1
> > checksum stored 56199539
> > checksum calced 56199539
> > fs uuid 9e693a37-fbd1-4891-aed2-e7fe64605045
> > chunk uuid 691874fc-1b9c-469b-bd7f-05e0e6ba88c4
> > item 0 key (939524096 RAID_STRIPE_KEY 131072) itemoff 16243 itemsize 56
> > encoding: RAID1
> > stripe 0 devid 1 offset 939524096 length 65536
> > stripe 1 devid 2 offset 536870912 length 65536
> > total bytes 42949672960
> > bytes used 294912
> > uuid 9e693a37-fbd1-4891-aed2-e7fe64605045
> >
> > A design document can be found here:
> > https://docs.google.com/document/d/1Iui_jMidCd4MVBNSSLXRfO7p5KmvnoQL/edit?usp=sharing&ouid=103609947580185458266&rtpof=true&sd=true
>
> Please also turn it to developer documentation file (in
> btrfs-progs/Documentation/dev), it can follow the same structure.
>
> >
> > The user-space part of this series can be found here:
> > https://lore.kernel.org/linux-btrfs/[email protected]
> >
> > Changes to v8:
> > - Changed tracepoints according to David's comments
> > - Mark on-disk structures as packed
> > - Got rid of __DECLARE_FLEX_ARRAY
> > - Rebase onto misc-next
> > - Split out helpers for new btrfs_load_block_group_zone_info RAID cases
> > - Constify declarations where possible
> > - Initialise variables before use
> > - Lower scope of variables
> > - Remove btrfs_stripe_root() helper
> > - Pick different BTRFS_RAID_STRIPE_KEY constant
> > - Reorder on-disk encoding types to match the raid_index
> > - And possibly more, please git range-diff the versions
> > - Link to v8: https://lore.kernel.org/r/[email protected]
>
> v9 will be added as topic branch to for-next, I did several style
> changes so please send any updates as incrementals if needed.

Moved to misc-next. I'll do a minor release of btrfs-progs soon so we
get the tool support for testing.