2022-05-09 01:30:33

by Pankaj Raghav

Subject: [PATCH v3 00/11] support non power of 2 zoned devices

- Background and Motivation:

The zoned storage implementation in Linux, introduced in v4.10, first
targeted SMR drives, which have a power-of-2 (po2) zone size alignment
requirement. The po2 zone size was further imposed implicitly by the
block layer's blk_queue_chunk_sectors(), used to prevent IO merging
across chunks beyond the specified size, since v3.16 through commit
762380ad9322 ("block: add notion of a chunk size for request merging").
But this general block layer po2 requirement for blk_queue_chunk_sectors()
was removed in v5.10 through commit 07d098e6bbad ("block: allow 'chunk_sectors'
to be non-power-of-2").

NAND, the media used in newer zoned storage devices, does not naturally
align to po2 sizes. In these devices the zone capacity (zone cap) is not
the same as the po2 zone size. When zone cap != zone size, unmapped LBAs
are introduced to cover the space between the zone cap and the zone size,
and the po2 requirement does not make sense for this type of zoned
storage device. This patch series aims to remove these unmapped LBAs for
zoned devices whose zone cap is npo2. This is done by relaxing the po2
zone size constraint in the kernel and allowing zoned devices with npo2
zone sizes, provided zone cap == zone size.
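
As a rough illustration of the waste being removed (hypothetical numbers;
the helper below is only a sketch of the arithmetic, not part of the
series): under a po2 zone size requirement, a drive with a 96M zone
capacity has to report the next power of 2, 128M, as its zone size,
leaving 32M of unmapped LBAs in every zone.

/*
 * Illustration only: how unmapped LBAs arise when a po2 zone size is
 * imposed on a npo2 zone capacity. Sizes are in 512-byte sectors;
 * roundup_pow_of_two() comes from <linux/log2.h>.
 */
static inline sector_t npo2_unmapped_sectors(sector_t zone_cap)
{
	sector_t zone_size = roundup_pow_of_two(zone_cap); /* 96M -> 128M */

	return zone_size - zone_cap;                       /* 32M unmapped */
}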

Removing the po2 requirement from zoned storage should be possible now,
provided that no userspace or performance regressions are introduced.
Stop-gap patches have already been merged into f2fs-tools to proactively
disallow npo2 zone sizes until proper support is added [0]. Additional
kernel stop-gap patches are provided in this series for dm-zoned.
Support for npo2 zone sizes in zonefs and btrfs is addressed in this series.

There was a previous effort [1] to add support for non-po2 devices via
device-level emulation, but it was rejected with the conclusion that
support for non-po2 zoned devices should be added throughout the complete
stack [2].

- Patchset description:
This patchset adds support for non-power-of-2 zoned devices in the block
layer, the NVMe layer and null_blk, and adds support to btrfs and
zonefs.

This round of patches **will not** support the DM layer for non
power of 2 zoned devices. More on this in the future work section.

Patches 1-2 remove the po2 constraint from the block layer.

Patches 3-4 remove the constraint from NVMe ZNS.

Patches 5-8 add support to btrfs for non-po2 zoned devices.

Patch 9 removes the po2 constraint in zonefs.

Patch 10 removes the po2 constraint in null_blk.

Patch 11 adds conditions to not allow non-power-of-2 devices in DM.

The patch series is based on linux-next tag: next-20220502

- Performance:
Po2 zone sizes allow log and shift operations instead of division when
determining alignment, zone number, etc. The same math cannot be used
for a zoned device with a non-po2 zone size. Hence, to avoid any
performance regression on zoned devices with po2 zone sizes, the
optimized math in the hot paths has been retained behind a branch.
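
For reference, the shape of that branching (a sketch only; the helper
name here is made up, see blk_queue_zone_no() in patch 1 for the real
version) is roughly:

/*
 * Sketch: keep the shift-based math for po2 zone sizes and fall back to
 * 64-bit division otherwise (is_power_of_2()/ilog2() from <linux/log2.h>,
 * div64_u64() from <linux/math64.h>).
 */
static inline unsigned int zone_no(sector_t sector, sector_t zone_sectors)
{
	if (is_power_of_2(zone_sectors))
		return sector >> ilog2(zone_sectors);

	return div64_u64(sector, zone_sectors);
}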

Performance was measured using null_blk to check for regressions, and
the results are posted in the corresponding commit logs. No performance
regression was noticed.

- Testing
Testing needs to tackle two things: regressions on po2 zoned devices and
progression on non-po2 zoned devices.

kdevops (https://github.com/mcgrof/kdevops) was used extensively to
automate the blktests and (x)fstests runs for the btrfs changes. Known
failures were excluded from the tests based on the v5.17.0-rc7 baseline.

-- regression
Emulated zoned device with zone size = 128M, nr_zones = 10000

Block and nvme zns:
blktests were run with no new failures

Btrfs:
Changes were tested with the following profile in QEMU:
[btrfs_simple_zns]
TEST_DIR=<dir>
SCRATCH_MNT=<mnt>
FSTYP=btrfs
MKFS_OPTIONS="-f -d single -m single"
TEST_DEV=<dev>
SCRATCH_DEV_POOL=<dev-pool>

No new failures were observed in the btrfs, generic and shared test suites.

ZoneFS:
zonefs-tests-nullblk.sh and zonefs-tests.sh from zonefs-tools were run
with no failures.

nullblk:
t/zbd/run-tests-against-nullb from fio was run with no failures.

DM:
It was verified that dm-zoned successfully mounts without any error.

-- progression
Emulated zoned device with zone size = 96M, nr_zones = 10000

Block and nvme zns:
blktests were run with no new failures

Btrfs:
The same profile as for the po2 zone size was used.

Many btrfs tests in xfstests use dm-flakey and some require dm-linear.
As these targets are not supported at the moment for non-po2 devices,
those **tests were excluded for non-po2 devices**.

No new failures were observed in the btrfs, generic and shared test suites.

ZoneFS:
zonefs-tests.sh from zonefs-tools was run with no failures.

nullblk:
A new section was added to cover non po2 devices:

section14()
{
	conv_pcnt=10
	zone_size=3
	zone_capacity=3
	max_open=${set_max_open}
	zbd_test_opts+=("-o ${max_open}")
}
t/zbd/run-tests-against-nullb from fio was run with no failures.

DM:
It was verified that dm-zoned does not mount.

- Open issue:
* btrfs superblock locations for zoned devices are expected to be at 0,
512GB (mirror) and 4TB (mirror) on the device. Zoned devices with a po2
zone size naturally align with these superblock locations, but non-po2
devices will not align with the 512GB and 4TB offsets.

The current approach for npo2 devices is to place the superblock mirror
zones near 512GB and 4TB such that they are **aligned to the zone size**.
This is no issue for normal operation, as we keep track of where the
superblock mirrors are placed, but it can cause an issue with recovery
tools for zoned devices as they expect the mirror superblocks to be at
512GB and 4TB.

Note that ATM, recovery tools such as `btrfs check` do not work on image
dumps of zoned devices, even for po2 zone sizes.

- Tools:
Some tools had to be updated to support non po2 devices. Once these
patches are accepted in the kernel, these tool updates will also be
upstreamed.
* btrfs-progs: https://github.com/Panky-codes/btrfs-progs/tree/remove-po2-btrfs
* blkzone: https://github.com/Panky-codes/util-linux/tree/remove-po2
* zonefs-tools: https://github.com/Panky-codes/zonefs-tools/tree/remove-po2

- Future work
To reduce the amount of changes and testing, support for DM was excluded
from this round of patches. The plan is to add support for F2FS and DM
in the near future.

[0] https://git.kernel.org/pub/scm/linux/kernel/git/jaegeuk/f2fs-tools.git/commit/?h=dev-test&id=6afcf6493578e77528abe65ab8b12f3e1c16749f
[1] https://lore.kernel.org/all/[email protected]/T/
[2] https://lore.kernel.org/all/20220315135245.eqf4tqngxxb7ymqa@unifi/

Changes since v1:
- Put the function declaration and its usage in the same commit (Bart)
- Remove bdev_zone_aligned function (Bart)
- Change the name from blk_queue_zone_aligned to blk_queue_is_zone_start
(Damien)
- q is never NULL when returned from bdev_get_queue (Damien)
- Add condition during bringup and check for zsze == zcap for npo2
drives (Damien)
- Make the rounddown operation generic so that it works on 32-bit arches
(Bart)
- Add comments where the generic calculation is used directly instead of
having special handling for po2 zone sizes (Hannes)
- Make the minimum zone size alignment requirement for btrfs 1M instead
of BTRFS_STRIPE_LEN (David)

Changes since v2:
- Minor formatting changes

Luis Chamberlain (1):
dm-zoned: ensure only power of 2 zone sizes are allowed

Pankaj Raghav (10):
block: make blkdev_nr_zones and blk_queue_zone_no generic for npo2
zsze
block: allow blk-zoned devices to have non-power-of-2 zone size
nvme: zns: Allow ZNS drives that have non-power_of_2 zone size
nvmet: Allow ZNS target to support non-power_of_2 zone sizes
btrfs: zoned: Cache superblock location in btrfs_zoned_device_info
btrfs: zoned: Make sb_zone_number function non power of 2 compatible
btrfs: zoned: use generic btrfs zone helpers to support npo2 zoned
devices
btrfs: zoned: relax the alignment constraint for zoned devices
zonefs: allow non power of 2 zoned devices
null_blk: allow non power of 2 zoned devices

block/blk-core.c | 3 +-
block/blk-zoned.c | 40 ++++++++---
drivers/block/null_blk/main.c | 5 +-
drivers/block/null_blk/zoned.c | 14 ++--
drivers/md/dm-zone.c | 12 ++++
drivers/nvme/host/zns.c | 24 ++++---
drivers/nvme/target/zns.c | 2 +-
fs/btrfs/volumes.c | 24 ++++---
fs/btrfs/zoned.c | 123 ++++++++++++++++++---------------
fs/btrfs/zoned.h | 44 ++++++++++--
fs/zonefs/super.c | 6 +-
fs/zonefs/zonefs.h | 1 -
include/linux/blkdev.h | 37 +++++++++-
13 files changed, 228 insertions(+), 107 deletions(-)

--
2.25.1



2022-05-09 02:48:52

by David Sterba

Subject: Re: [PATCH v3 00/11] support non power of 2 zoned devices

On Fri, May 06, 2022 at 10:10:54AM +0200, Pankaj Raghav wrote:
> - Open issue:
> * btrfs superblock location for zoned devices is expected to be in 0,
> 512GB(mirror) and 4TB(mirror) in the device. Zoned devices with po2
> zone size will naturally align with these superblock location but non
> po2 devices will not align with 512GB and 4TB offset.
>
> The current approach for npo2 devices is to place the superblock mirror
> zones near 512GB and 4TB that is **aligned to the zone size**.

I don't like that, the offsets have been chosen so the values are fixed
and also future proof in case the zone size increases significantly. The
natural alignment of the pow2 zones makes it fairly trivial.

If I understand correctly what you suggest, it would mean that if zone
is eg. 5G and starts at 510G then the superblock should start at 510G,
right? And with another device that has 7G zone size the nearest
multiple is 511G. And so on.

That makes it all less predictable, depending on the physical device
constraints that are affecting the logical data structures of the
filesystem. We tried to avoid that with pow2, the only thing that
depends on the device is that the range from the super block offsets is
always 2 zones.

I really want to keep the offsets for all zoned devices the same and
adapt the code that's handling the writes. This is possible with the
non-pow2 too, the first write is set to the expected offset, leaving the
beginning of the zone unused.

> This
> is of no issue for normal operation as we keep track where the superblock
> mirror are placed but this can cause an issue with recovery tools for
> zoned devices as they expect mirror superblock to be in 512GB and 4TB.

Yeah the tools need to be updated, btrfs-progs and suite of blk* in
util-linux.

> Note that ATM, recovery tools such as `btrfs check` does not work for
> image dumps for zoned devices even for po2 zone sizes.

I thought this worked, but if you find something that does not please
report that to Johannes or Naohiro.

2022-05-09 02:56:25

by Pankaj Raghav

Subject: [PATCH v3 04/11] nvmet: Allow ZNS target to support non-power_of_2 zone sizes

A generic bdev_zone_no helper is added to calculate the zone number for
a given sector in a block device. This helper internally uses
blk_queue_zone_no to find the zone number.

Use the bdev_zone_no() helper to calculate the number of zones. This
lets us modify the math, if needed, in one place and also adds support
for npo2 zoned devices.

Reviewed-by: Adam Manzanares <[email protected]>
Reviewed-by: Bart Van Assche <[email protected]>
Reviewed-by: Hannes Reinecke <[email protected]>
Signed-off-by: Luis Chamberlain <[email protected]>
Signed-off-by: Pankaj Raghav <[email protected]>
---
drivers/nvme/target/zns.c | 2 +-
include/linux/blkdev.h | 7 +++++++
2 files changed, 8 insertions(+), 1 deletion(-)

diff --git a/drivers/nvme/target/zns.c b/drivers/nvme/target/zns.c
index 82b61acf7..5516dd6cc 100644
--- a/drivers/nvme/target/zns.c
+++ b/drivers/nvme/target/zns.c
@@ -242,7 +242,7 @@ static unsigned long nvmet_req_nr_zones_from_slba(struct nvmet_req *req)
unsigned int sect = nvmet_lba_to_sect(req->ns, req->cmd->zmr.slba);

return blkdev_nr_zones(req->ns->bdev->bd_disk) -
- (sect >> ilog2(bdev_zone_sectors(req->ns->bdev)));
+ bdev_zone_no(req->ns->bdev, sect);
}

static unsigned long get_nr_zones_from_buf(struct nvmet_req *req, u32 bufsize)
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 32d7bd7b1..967790f51 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -1370,6 +1370,13 @@ static inline sector_t bdev_zone_sectors(struct block_device *bdev)
return 0;
}

+static inline unsigned int bdev_zone_no(struct block_device *bdev, sector_t sec)
+{
+ struct request_queue *q = bdev_get_queue(bdev);
+
+ return blk_queue_zone_no(q, sec);
+}
+
static inline unsigned int bdev_max_open_zones(struct block_device *bdev)
{
struct request_queue *q = bdev_get_queue(bdev);
--
2.25.1


2022-05-09 03:23:49

by Pankaj Raghav

Subject: [PATCH v3 07/11] btrfs: zoned: use generic btrfs zone helpers to support npo2 zoned devices

Add helpers to calculate alignment, round up and round down for zoned
devices. These helpers encapsulate the necessary handling for power_of_2
and non-power_of_2 zone sizes. Optimized calculations using log and
shifts are performed for zone sizes that are a power of 2.

btrfs_zoned_is_aligned() is added instead of reusing the
bdev_zone_aligned() helper because of some use cases in btrfs where zone
alignment is checked before having access to the underlying block
device, such as in btrfs_load_block_group_zone_info().

Use the generic btrfs zone helpers to calculate the zone index, check
zone alignment, and perform the round up and round down operations.

The zone_size_shift field is no longer needed as the generic helpers are
used for the calculations.

Reviewed-by: Luis Chamberlain <[email protected]>
Signed-off-by: Pankaj Raghav <[email protected]>
---
fs/btrfs/volumes.c | 24 +++++++++-------
fs/btrfs/zoned.c | 72 ++++++++++++++++++++++------------------------
fs/btrfs/zoned.h | 43 +++++++++++++++++++++++----
3 files changed, 85 insertions(+), 54 deletions(-)

diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c
index 94f851592..3d6b9a25a 100644
--- a/fs/btrfs/volumes.c
+++ b/fs/btrfs/volumes.c
@@ -1408,7 +1408,7 @@ static u64 dev_extent_search_start(struct btrfs_device *device, u64 start)
* allocator, because we anyway use/reserve the first two zones
* for superblock logging.
*/
- return ALIGN(start, device->zone_info->zone_size);
+ return btrfs_zoned_roundup(start, device->zone_info->zone_size);
default:
BUG();
}
@@ -1423,7 +1423,7 @@ static bool dev_extent_hole_check_zoned(struct btrfs_device *device,
int ret;
bool changed = false;

- ASSERT(IS_ALIGNED(*hole_start, zone_size));
+ ASSERT(btrfs_zoned_is_aligned(*hole_start, zone_size));

while (*hole_size > 0) {
pos = btrfs_find_allocatable_zones(device, *hole_start,
@@ -1560,7 +1560,7 @@ static int find_free_dev_extent_start(struct btrfs_device *device,
search_start = dev_extent_search_start(device, search_start);

WARN_ON(device->zone_info &&
- !IS_ALIGNED(num_bytes, device->zone_info->zone_size));
+ !btrfs_zoned_is_aligned(num_bytes, device->zone_info->zone_size));

path = btrfs_alloc_path();
if (!path)
@@ -5111,8 +5111,8 @@ static void init_alloc_chunk_ctl_policy_zoned(

ctl->max_stripe_size = zone_size;
if (type & BTRFS_BLOCK_GROUP_DATA) {
- ctl->max_chunk_size = round_down(BTRFS_MAX_DATA_CHUNK_SIZE,
- zone_size);
+ ctl->max_chunk_size = btrfs_zoned_rounddown(
+ BTRFS_MAX_DATA_CHUNK_SIZE, zone_size);
} else if (type & BTRFS_BLOCK_GROUP_METADATA) {
ctl->max_chunk_size = ctl->max_stripe_size;
} else if (type & BTRFS_BLOCK_GROUP_SYSTEM) {
@@ -5124,9 +5124,10 @@ static void init_alloc_chunk_ctl_policy_zoned(
}

/* We don't want a chunk larger than 10% of writable space */
- limit = max(round_down(div_factor(fs_devices->total_rw_bytes, 1),
- zone_size),
- min_chunk_size);
+ limit = max(
+ btrfs_zoned_rounddown(div_factor(fs_devices->total_rw_bytes, 1),
+ zone_size),
+ min_chunk_size);
ctl->max_chunk_size = min(limit, ctl->max_chunk_size);
ctl->dev_extent_min = zone_size * ctl->dev_stripes;
}
@@ -6729,7 +6730,8 @@ static void submit_stripe_bio(struct btrfs_io_context *bioc,
*/
if (bio_op(bio) == REQ_OP_ZONE_APPEND) {
if (btrfs_dev_is_sequential(dev, physical)) {
- u64 zone_start = round_down(physical, fs_info->zone_size);
+ u64 zone_start = btrfs_zoned_rounddown(physical,
+ fs_info->zone_size);

bio->bi_iter.bi_sector = zone_start >> SECTOR_SHIFT;
} else {
@@ -8051,8 +8053,8 @@ static int verify_one_dev_extent(struct btrfs_fs_info *fs_info,
if (dev->zone_info) {
u64 zone_size = dev->zone_info->zone_size;

- if (!IS_ALIGNED(physical_offset, zone_size) ||
- !IS_ALIGNED(physical_len, zone_size)) {
+ if (!btrfs_zoned_is_aligned(physical_offset, zone_size) ||
+ !btrfs_zoned_is_aligned(physical_len, zone_size)) {
btrfs_err(fs_info,
"zoned: dev extent devid %llu physical offset %llu len %llu is not aligned to device zone",
devid, physical_offset, physical_len);
diff --git a/fs/btrfs/zoned.c b/fs/btrfs/zoned.c
index 5be2ef7bb..3023c871e 100644
--- a/fs/btrfs/zoned.c
+++ b/fs/btrfs/zoned.c
@@ -177,13 +177,13 @@ static inline u32 sb_zone_number(struct block_device *bdev, int mirror)
static inline sector_t zone_start_sector(u32 zone_number,
struct block_device *bdev)
{
- return (sector_t)zone_number << ilog2(bdev_zone_sectors(bdev));
+ return zone_number * bdev_zone_sectors(bdev);
}

static inline u64 zone_start_physical(u32 zone_number,
struct btrfs_zoned_device_info *zone_info)
{
- return (u64)zone_number << zone_info->zone_size_shift;
+ return zone_number * zone_info->zone_size;
}

/*
@@ -236,8 +236,8 @@ static int btrfs_get_dev_zones(struct btrfs_device *device, u64 pos,
if (zinfo->zone_cache) {
unsigned int i;

- ASSERT(IS_ALIGNED(pos, zinfo->zone_size));
- zno = pos >> zinfo->zone_size_shift;
+ ASSERT(btrfs_zoned_is_aligned(pos, zinfo->zone_size));
+ zno = bdev_zone_no(device->bdev, pos >> SECTOR_SHIFT);
/*
* We cannot report zones beyond the zone end. So, it is OK to
* cap *nr_zones to at the end.
@@ -409,9 +409,8 @@ int btrfs_get_dev_zone_info(struct btrfs_device *device, bool populate_cache)
}

nr_sectors = bdev_nr_sectors(bdev);
- zone_info->zone_size_shift = ilog2(zone_info->zone_size);
- zone_info->nr_zones = nr_sectors >> ilog2(zone_sectors);
- if (!IS_ALIGNED(nr_sectors, zone_sectors))
+ zone_info->nr_zones = bdev_zone_no(bdev, nr_sectors);
+ if (!btrfs_zoned_is_aligned(nr_sectors, zone_sectors))
zone_info->nr_zones++;

max_active_zones = bdev_max_active_zones(bdev);
@@ -823,10 +822,8 @@ int btrfs_sb_log_location_bdev(struct block_device *bdev, int mirror, int rw,
u64 *bytenr_ret)
{
struct blk_zone zones[BTRFS_NR_SB_LOG_ZONES];
- sector_t zone_sectors;
u32 sb_zone;
int ret;
- u8 zone_sectors_shift;
sector_t nr_sectors;
u32 nr_zones;

@@ -837,12 +834,10 @@ int btrfs_sb_log_location_bdev(struct block_device *bdev, int mirror, int rw,

ASSERT(rw == READ || rw == WRITE);

- zone_sectors = bdev_zone_sectors(bdev);
- if (!is_power_of_2(zone_sectors))
+ if (!is_power_of_2(bdev_zone_sectors(bdev)))
return -EINVAL;
- zone_sectors_shift = ilog2(zone_sectors);
nr_sectors = bdev_nr_sectors(bdev);
- nr_zones = nr_sectors >> zone_sectors_shift;
+ nr_zones = bdev_zone_no(bdev, nr_sectors);

sb_zone = sb_zone_number(bdev, mirror);
if (sb_zone + 1 >= nr_zones)
@@ -959,14 +954,12 @@ int btrfs_reset_sb_log_zones(struct block_device *bdev, int mirror)
{
sector_t zone_sectors;
sector_t nr_sectors;
- u8 zone_sectors_shift;
u32 sb_zone;
u32 nr_zones;

zone_sectors = bdev_zone_sectors(bdev);
- zone_sectors_shift = ilog2(zone_sectors);
nr_sectors = bdev_nr_sectors(bdev);
- nr_zones = nr_sectors >> zone_sectors_shift;
+ nr_zones = bdev_zone_no(bdev, nr_sectors);

sb_zone = sb_zone_number(bdev, mirror);
if (sb_zone + 1 >= nr_zones)
@@ -992,18 +985,17 @@ u64 btrfs_find_allocatable_zones(struct btrfs_device *device, u64 hole_start,
u64 hole_end, u64 num_bytes)
{
struct btrfs_zoned_device_info *zinfo = device->zone_info;
- const u8 shift = zinfo->zone_size_shift;
- u64 nzones = num_bytes >> shift;
+ u64 nzones = bdev_zone_no(device->bdev, num_bytes >> SECTOR_SHIFT);
u64 pos = hole_start;
u64 begin, end;
bool have_sb;
int i;

- ASSERT(IS_ALIGNED(hole_start, zinfo->zone_size));
- ASSERT(IS_ALIGNED(num_bytes, zinfo->zone_size));
+ ASSERT(btrfs_zoned_is_aligned(hole_start, zinfo->zone_size));
+ ASSERT(btrfs_zoned_is_aligned(num_bytes, zinfo->zone_size));

while (pos < hole_end) {
- begin = pos >> shift;
+ begin = bdev_zone_no(device->bdev, pos >> SECTOR_SHIFT);
end = begin + nzones;

if (end > zinfo->nr_zones)
@@ -1035,8 +1027,9 @@ u64 btrfs_find_allocatable_zones(struct btrfs_device *device, u64 hole_start,
if (!(pos + num_bytes <= sb_pos ||
sb_pos + BTRFS_SUPER_INFO_SIZE <= pos)) {
have_sb = true;
- pos = ALIGN(sb_pos + BTRFS_SUPER_INFO_SIZE,
- zinfo->zone_size);
+ pos = btrfs_zoned_roundup(
+ sb_pos + BTRFS_SUPER_INFO_SIZE,
+ zinfo->zone_size);
break;
}
}
@@ -1050,7 +1043,7 @@ u64 btrfs_find_allocatable_zones(struct btrfs_device *device, u64 hole_start,
static bool btrfs_dev_set_active_zone(struct btrfs_device *device, u64 pos)
{
struct btrfs_zoned_device_info *zone_info = device->zone_info;
- unsigned int zno = (pos >> zone_info->zone_size_shift);
+ unsigned int zno = bdev_zone_no(device->bdev, pos >> SECTOR_SHIFT);

/* We can use any number of zones */
if (zone_info->max_active_zones == 0)
@@ -1072,7 +1065,7 @@ static bool btrfs_dev_set_active_zone(struct btrfs_device *device, u64 pos)
static void btrfs_dev_clear_active_zone(struct btrfs_device *device, u64 pos)
{
struct btrfs_zoned_device_info *zone_info = device->zone_info;
- unsigned int zno = (pos >> zone_info->zone_size_shift);
+ unsigned int zno = bdev_zone_no(device->bdev, pos >> SECTOR_SHIFT);

/* We can use any number of zones */
if (zone_info->max_active_zones == 0)
@@ -1108,14 +1101,14 @@ int btrfs_reset_device_zone(struct btrfs_device *device, u64 physical,
int btrfs_ensure_empty_zones(struct btrfs_device *device, u64 start, u64 size)
{
struct btrfs_zoned_device_info *zinfo = device->zone_info;
- const u8 shift = zinfo->zone_size_shift;
- unsigned long begin = start >> shift;
- unsigned long end = (start + size) >> shift;
+ unsigned long begin = bdev_zone_no(device->bdev, start >> SECTOR_SHIFT);
+ unsigned long end =
+ bdev_zone_no(device->bdev, (start + size) >> SECTOR_SHIFT);
u64 pos;
int ret;

- ASSERT(IS_ALIGNED(start, zinfo->zone_size));
- ASSERT(IS_ALIGNED(size, zinfo->zone_size));
+ ASSERT(btrfs_zoned_is_aligned(start, zinfo->zone_size));
+ ASSERT(btrfs_zoned_is_aligned(size, zinfo->zone_size));

if (end > zinfo->nr_zones)
return -ERANGE;
@@ -1139,8 +1132,9 @@ int btrfs_ensure_empty_zones(struct btrfs_device *device, u64 start, u64 size)
/* Free regions should be empty */
btrfs_warn_in_rcu(
device->fs_info,
- "zoned: resetting device %s (devid %llu) zone %llu for allocation",
- rcu_str_deref(device->name), device->devid, pos >> shift);
+ "zoned: resetting device %s (devid %llu) zone %u for allocation",
+ rcu_str_deref(device->name), device->devid,
+ bdev_zone_no(device->bdev, pos >> SECTOR_SHIFT));
WARN_ON_ONCE(1);

ret = btrfs_reset_device_zone(device, pos, zinfo->zone_size,
@@ -1237,7 +1231,7 @@ int btrfs_load_block_group_zone_info(struct btrfs_block_group *cache, bool new)
return 0;

/* Sanity check */
- if (!IS_ALIGNED(length, fs_info->zone_size)) {
+ if (!btrfs_zoned_is_aligned(length, fs_info->zone_size)) {
btrfs_err(fs_info,
"zoned: block group %llu len %llu unaligned to zone size %llu",
logical, length, fs_info->zone_size);
@@ -1325,7 +1319,7 @@ int btrfs_load_block_group_zone_info(struct btrfs_block_group *cache, bool new)
* The group is mapped to a sequential zone. Get the zone write
* pointer to determine the allocation offset within the zone.
*/
- WARN_ON(!IS_ALIGNED(physical[i], fs_info->zone_size));
+ WARN_ON(!btrfs_zoned_is_aligned(physical[i], fs_info->zone_size));
nofs_flag = memalloc_nofs_save();
ret = btrfs_get_dev_zone(device, physical[i], &zone);
memalloc_nofs_restore(nofs_flag);
@@ -1351,10 +1345,12 @@ int btrfs_load_block_group_zone_info(struct btrfs_block_group *cache, bool new)
switch (zone.cond) {
case BLK_ZONE_COND_OFFLINE:
case BLK_ZONE_COND_READONLY:
- btrfs_err(fs_info,
- "zoned: offline/readonly zone %llu on device %s (devid %llu)",
- physical[i] >> device->zone_info->zone_size_shift,
- rcu_str_deref(device->name), device->devid);
+ btrfs_err(
+ fs_info,
+ "zoned: offline/readonly zone %u on device %s (devid %llu)",
+ bdev_zone_no(device->bdev,
+ physical[i] >> SECTOR_SHIFT),
+ rcu_str_deref(device->name), device->devid);
alloc_offsets[i] = WP_MISSING_DEV;
break;
case BLK_ZONE_COND_EMPTY:
diff --git a/fs/btrfs/zoned.h b/fs/btrfs/zoned.h
index 694ab6d1e..b94ce4d1f 100644
--- a/fs/btrfs/zoned.h
+++ b/fs/btrfs/zoned.h
@@ -9,6 +9,7 @@
#include "disk-io.h"
#include "block-group.h"
#include "btrfs_inode.h"
+#include "misc.h"

#define BTRFS_DEFAULT_RECLAIM_THRESH (75)

@@ -18,7 +19,6 @@ struct btrfs_zoned_device_info {
* zoned block device.
*/
u64 zone_size;
- u8 zone_size_shift;
u32 nr_zones;
unsigned int max_active_zones;
atomic_t active_zones_left;
@@ -30,6 +30,36 @@ struct btrfs_zoned_device_info {
u32 sb_zone_location[BTRFS_SUPER_MIRROR_MAX];
};

+static inline bool btrfs_zoned_is_aligned(u64 pos, u64 zone_size)
+{
+ u64 remainder = 0;
+
+ if (is_power_of_two_u64(zone_size))
+ return IS_ALIGNED(pos, zone_size);
+
+ div64_u64_rem(pos, zone_size, &remainder);
+ return remainder == 0;
+}
+
+static inline u64 btrfs_zoned_roundup(u64 pos, u64 zone_size)
+{
+ if (is_power_of_two_u64(zone_size))
+ return ALIGN(pos, zone_size);
+
+ return div64_u64(pos + zone_size - 1, zone_size) * zone_size;
+}
+
+static inline u64 btrfs_zoned_rounddown(u64 pos, u64 zone_size)
+{
+ u64 remainder = 0;
+ if (is_power_of_two_u64(zone_size))
+ return round_down(pos, zone_size);
+
+ div64_u64_rem(pos, zone_size, &remainder);
+ pos -= remainder;
+ return pos;
+}
+
#ifdef CONFIG_BLK_DEV_ZONED
int btrfs_get_dev_zone(struct btrfs_device *device, u64 pos,
struct blk_zone *zone);
@@ -253,7 +283,8 @@ static inline bool btrfs_dev_is_sequential(struct btrfs_device *device, u64 pos)
if (!zone_info)
return false;

- return test_bit(pos >> zone_info->zone_size_shift, zone_info->seq_zones);
+ return test_bit(bdev_zone_no(device->bdev, pos >> SECTOR_SHIFT),
+ zone_info->seq_zones);
}

static inline bool btrfs_dev_is_empty_zone(struct btrfs_device *device, u64 pos)
@@ -263,7 +294,8 @@ static inline bool btrfs_dev_is_empty_zone(struct btrfs_device *device, u64 pos)
if (!zone_info)
return true;

- return test_bit(pos >> zone_info->zone_size_shift, zone_info->empty_zones);
+ return test_bit(bdev_zone_no(device->bdev, pos >> SECTOR_SHIFT),
+ zone_info->empty_zones);
}

static inline void btrfs_dev_set_empty_zone_bit(struct btrfs_device *device,
@@ -275,7 +307,7 @@ static inline void btrfs_dev_set_empty_zone_bit(struct btrfs_device *device,
if (!zone_info)
return;

- zno = pos >> zone_info->zone_size_shift;
+ zno = bdev_zone_no(device->bdev, pos >> SECTOR_SHIFT);
if (set)
set_bit(zno, zone_info->empty_zones);
else
@@ -329,7 +361,8 @@ static inline bool btrfs_can_zone_reset(struct btrfs_device *device,
return false;

zone_size = device->zone_info->zone_size;
- if (!IS_ALIGNED(physical, zone_size) || !IS_ALIGNED(length, zone_size))
+ if (!btrfs_zoned_is_aligned(physical, zone_size) ||
+ !btrfs_zoned_is_aligned(length, zone_size))
return false;

return true;
--
2.25.1


2022-05-09 04:15:30

by Pankaj Raghav

Subject: [PATCH v3 01/11] block: make blkdev_nr_zones and blk_queue_zone_no generic for npo2 zsze

Adapt the blkdev_nr_zones and blk_queue_zone_no functions so that they
also work for non-power-of-2 zone sizes.

As existing deployments of zoned devices have a power-of-2 assumption,
the power-of-2 optimized calculation is kept for those devices.

No direct hot paths are modified; the changes just introduce one new
branch per call.

Reviewed-by: Luis Chamberlain <[email protected]>
Reviewed-by: Adam Manzanares <[email protected]>
Reviewed-by: Hannes Reinecke <[email protected]>
Signed-off-by: Pankaj Raghav <[email protected]>
---
block/blk-zoned.c | 13 ++++++++++---
include/linux/blkdev.h | 8 +++++++-
2 files changed, 17 insertions(+), 4 deletions(-)

diff --git a/block/blk-zoned.c b/block/blk-zoned.c
index 38cd840d8..140230134 100644
--- a/block/blk-zoned.c
+++ b/block/blk-zoned.c
@@ -111,16 +111,23 @@ EXPORT_SYMBOL_GPL(__blk_req_zone_write_unlock);
* blkdev_nr_zones - Get number of zones
* @disk: Target gendisk
*
- * Return the total number of zones of a zoned block device. For a block
- * device without zone capabilities, the number of zones is always 0.
+ * Return the total number of zones of a zoned block device, including the
+ * eventual small last zone if present. For a block device without zone
+ * capabilities, the number of zones is always 0.
*/
unsigned int blkdev_nr_zones(struct gendisk *disk)
{
sector_t zone_sectors = blk_queue_zone_sectors(disk->queue);
+ sector_t capacity = get_capacity(disk);

if (!blk_queue_is_zoned(disk->queue))
return 0;
- return (get_capacity(disk) + zone_sectors - 1) >> ilog2(zone_sectors);
+
+ if (is_power_of_2(zone_sectors))
+ return (capacity + zone_sectors - 1) >>
+ ilog2(zone_sectors);
+
+ return div64_u64(capacity + zone_sectors - 1, zone_sectors);
}
EXPORT_SYMBOL_GPL(blkdev_nr_zones);

diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 1b24c1fb3..22fe512ee 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -675,9 +675,15 @@ static inline unsigned int blk_queue_nr_zones(struct request_queue *q)
static inline unsigned int blk_queue_zone_no(struct request_queue *q,
sector_t sector)
{
+ sector_t zone_sectors = blk_queue_zone_sectors(q);
+
if (!blk_queue_is_zoned(q))
return 0;
- return sector >> ilog2(q->limits.chunk_sectors);
+
+ if (is_power_of_2(zone_sectors))
+ return sector >> ilog2(zone_sectors);
+
+ return div64_u64(sector, zone_sectors);
}

static inline bool blk_queue_zone_is_seq(struct request_queue *q,
--
2.25.1


2022-05-09 06:25:14

by Pankaj Raghav

Subject: [PATCH v3 02/11] block: allow blk-zoned devices to have non-power-of-2 zone size

Checking whether a given sector is aligned to a zone is a common
operation for zoned devices. Add the blk_queue_is_zone_start helper to
check this instead of open-coding it everywhere.

Convert the calculations on the zone size to be generic instead of
relying on power_of_2-based logic in the block layer, using the helpers
wherever possible.

The only hot path affected by this change for power_of_2 zoned devices
is blk_check_zone_append(), but the blk_queue_is_zone_start() helper is
used to optimize the calculation for po2 zone sizes. Note that the
append path cannot be reached by direct raw access to the block device,
only through a filesystem abstraction.

Finally, allow non-power-of-2 zoned devices provided that their zone
capacity and zone size are equal. The main motivation to allow
non-power_of_2 zoned devices is to remove the unmapped LBAs between zcap
and zsze for devices that cannot have a power_of_2 zcap.

Reviewed-by: Luis Chamberlain <[email protected]>
Signed-off-by: Pankaj Raghav <[email protected]>
---
block/blk-core.c | 3 +--
block/blk-zoned.c | 27 +++++++++++++++++++++------
include/linux/blkdev.h | 22 ++++++++++++++++++++++
3 files changed, 44 insertions(+), 8 deletions(-)

diff --git a/block/blk-core.c b/block/blk-core.c
index f305cb66c..b7051b7ea 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -634,8 +634,7 @@ static inline blk_status_t blk_check_zone_append(struct request_queue *q,
return BLK_STS_NOTSUPP;

/* The bio sector must point to the start of a sequential zone */
- if (pos & (blk_queue_zone_sectors(q) - 1) ||
- !blk_queue_zone_is_seq(q, pos))
+ if (!blk_queue_is_zone_start(q, pos) || !blk_queue_zone_is_seq(q, pos))
return BLK_STS_IOERR;

/*
diff --git a/block/blk-zoned.c b/block/blk-zoned.c
index 140230134..cfc2fb804 100644
--- a/block/blk-zoned.c
+++ b/block/blk-zoned.c
@@ -289,10 +289,10 @@ int blkdev_zone_mgmt(struct block_device *bdev, enum req_opf op,
return -EINVAL;

/* Check alignment (handle eventual smaller last zone) */
- if (sector & (zone_sectors - 1))
+ if (!blk_queue_is_zone_start(q, sector))
return -EINVAL;

- if ((nr_sectors & (zone_sectors - 1)) && end_sector != capacity)
+ if (!blk_queue_is_zone_start(q, nr_sectors) && end_sector != capacity)
return -EINVAL;

/*
@@ -490,14 +490,29 @@ static int blk_revalidate_zone_cb(struct blk_zone *zone, unsigned int idx,
* smaller last zone.
*/
if (zone->start == 0) {
- if (zone->len == 0 || !is_power_of_2(zone->len)) {
- pr_warn("%s: Invalid zoned device with non power of two zone size (%llu)\n",
- disk->disk_name, zone->len);
+ if (zone->len == 0) {
+ pr_warn("%s: Invalid zone size",
+ disk->disk_name);
+ return -ENODEV;
+ }
+
+ /*
+ * Don't allow zoned device with non power_of_2 zone size with
+ * zone capacity less than zone size.
+ */
+ if (!is_power_of_2(zone->len) &&
+ zone->capacity < zone->len) {
+ pr_warn("%s: Invalid zoned size with non power of 2 zone size and zone capacity < zone size",
+ disk->disk_name);
return -ENODEV;
}

args->zone_sectors = zone->len;
- args->nr_zones = (capacity + zone->len - 1) >> ilog2(zone->len);
+ /*
+ * Division is used to calculate nr_zones for both power_of_2
+ * and non power_of_2 zone sizes as it is not in the hot path.
+ */
+ args->nr_zones = div64_u64(capacity + zone->len - 1, zone->len);
} else if (zone->start + args->zone_sectors < capacity) {
if (zone->len != args->zone_sectors) {
pr_warn("%s: Invalid zoned device with non constant zone size\n",
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 22fe512ee..32d7bd7b1 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -686,6 +686,22 @@ static inline unsigned int blk_queue_zone_no(struct request_queue *q,
return div64_u64(sector, zone_sectors);
}

+static inline bool blk_queue_is_zone_start(struct request_queue *q, sector_t sec)
+{
+ sector_t zone_sectors = blk_queue_zone_sectors(q);
+ u64 remainder = 0;
+
+ if (!blk_queue_is_zoned(q))
+ return false;
+
+ if (is_power_of_2(zone_sectors))
+ return IS_ALIGNED(sec, zone_sectors);
+
+ div64_u64_rem(sec, zone_sectors, &remainder);
+ /* if there is a remainder, then the sector is not aligned */
+ return remainder == 0;
+}
+
static inline bool blk_queue_zone_is_seq(struct request_queue *q,
sector_t sector)
{
@@ -732,6 +748,12 @@ static inline unsigned int blk_queue_zone_no(struct request_queue *q,
{
return 0;
}
+
+static inline bool blk_queue_is_zone_start(struct request_queue *q, sector_t sec)
+{
+ return false;
+}
+
static inline unsigned int queue_max_open_zones(const struct request_queue *q)
{
return 0;
--
2.25.1


2022-05-09 08:18:16

by Pankaj Raghav

Subject: [PATCH v3 06/11] btrfs: zoned: Make sb_zone_number function non power of 2 compatible

Make the calculation in the sb_zone_number function generic so that it
works for both power-of-2 and non-power-of-2 zone sizes.

The function signature is modified to take the block device and the
mirror as input, as this function is only invoked from callers that have
access to the block device. This enables using the generic bdev_zone_no
function provided by the block layer to calculate the zone number.

Even though division is used to calculate the zone index for non
power-of-2 zone sizes, this function is not used in the fast path as the
sb_zone_location cache is used for the superblock zone location.

Reviewed-by: Luis Chamberlain <[email protected]>
Signed-off-by: Pankaj Raghav <[email protected]>
---
fs/btrfs/zoned.c | 25 +++++++++++++++----------
1 file changed, 15 insertions(+), 10 deletions(-)

diff --git a/fs/btrfs/zoned.c b/fs/btrfs/zoned.c
index e8c7cebb2..5be2ef7bb 100644
--- a/fs/btrfs/zoned.c
+++ b/fs/btrfs/zoned.c
@@ -34,9 +34,6 @@
#define BTRFS_SB_LOG_FIRST_OFFSET (512ULL * SZ_1G)
#define BTRFS_SB_LOG_SECOND_OFFSET (4096ULL * SZ_1G)

-#define BTRFS_SB_LOG_FIRST_SHIFT const_ilog2(BTRFS_SB_LOG_FIRST_OFFSET)
-#define BTRFS_SB_LOG_SECOND_SHIFT const_ilog2(BTRFS_SB_LOG_SECOND_OFFSET)
-
/* Number of superblock log zones */
#define BTRFS_NR_SB_LOG_ZONES 2

@@ -153,15 +150,23 @@ static int sb_write_pointer(struct block_device *bdev, struct blk_zone *zones,
/*
* Get the first zone number of the superblock mirror
*/
-static inline u32 sb_zone_number(int shift, int mirror)
+static inline u32 sb_zone_number(struct block_device *bdev, int mirror)
{
u64 zone;

ASSERT(mirror < BTRFS_SUPER_MIRROR_MAX);
switch (mirror) {
- case 0: zone = 0; break;
- case 1: zone = 1ULL << (BTRFS_SB_LOG_FIRST_SHIFT - shift); break;
- case 2: zone = 1ULL << (BTRFS_SB_LOG_SECOND_SHIFT - shift); break;
+ case 0:
+ zone = 0;
+ break;
+ case 1:
+ zone = bdev_zone_no(bdev,
+ BTRFS_SB_LOG_FIRST_OFFSET >> SECTOR_SHIFT);
+ break;
+ case 2:
+ zone = bdev_zone_no(bdev,
+ BTRFS_SB_LOG_SECOND_OFFSET >> SECTOR_SHIFT);
+ break;
}

ASSERT(zone <= U32_MAX);
@@ -514,7 +519,7 @@ int btrfs_get_dev_zone_info(struct btrfs_device *device, bool populate_cache)
/* Cache the sb zone number */
for (i = 0; i < BTRFS_SUPER_MIRROR_MAX; ++i) {
zone_info->sb_zone_location[i] =
- sb_zone_number(zone_info->zone_size_shift, i);
+ sb_zone_number(bdev, i);
}
/* Validate superblock log */
nr_zones = BTRFS_NR_SB_LOG_ZONES;
@@ -839,7 +844,7 @@ int btrfs_sb_log_location_bdev(struct block_device *bdev, int mirror, int rw,
nr_sectors = bdev_nr_sectors(bdev);
nr_zones = nr_sectors >> zone_sectors_shift;

- sb_zone = sb_zone_number(zone_sectors_shift + SECTOR_SHIFT, mirror);
+ sb_zone = sb_zone_number(bdev, mirror);
if (sb_zone + 1 >= nr_zones)
return -ENOENT;

@@ -963,7 +968,7 @@ int btrfs_reset_sb_log_zones(struct block_device *bdev, int mirror)
nr_sectors = bdev_nr_sectors(bdev);
nr_zones = nr_sectors >> zone_sectors_shift;

- sb_zone = sb_zone_number(zone_sectors_shift + SECTOR_SHIFT, mirror);
+ sb_zone = sb_zone_number(bdev, mirror);
if (sb_zone + 1 >= nr_zones)
return -ENOENT;

--
2.25.1


2022-05-09 09:10:31

by Pankaj Raghav

Subject: [PATCH v3 10/11] null_blk: allow non power of 2 zoned devices

Convert the power-of-2-based calculation on the zone size in
null_zone_no to be generic, with an optimization for power-of-2 zone
sizes.

The nr_zones calculation in null_init_zoned_dev has been replaced with a
division without special handling for power-of-2 zone sizes, as this
function is called only during initialization and is not invoked in the
hot path.

Performance Measurement:

Device:
zone size = 128M, blocksize=4k

FIO cmd:

fio --name=zbc --filename=/dev/nullb0 --direct=1 --zonemode=zbd --size=23G
--io_size=<iosize> --ioengine=io_uring --iodepth=<iod> --rw=<mode> --bs=4k
--loops=4

The following results are an average of 4 runs on AMD Ryzen 5 5600X with
32GB of RAM:

Sequential Write:

x-----------------x---------------------------------x---------------------------------x
| IOdepth | 8 | 16 |
x-----------------x---------------------------------x---------------------------------x
| | KIOPS |BW(MiB/s) | Lat(usec) | KIOPS |BW(MiB/s) | Lat(usec) |
x-----------------x---------------------------------x---------------------------------x
| Without patch | 578 | 2257 | 12.80 | 576 | 2248 | 25.78 |
x-----------------x---------------------------------x---------------------------------x
| With patch | 581 | 2268 | 12.74 | 576 | 2248 | 25.85 |
x-----------------x---------------------------------x---------------------------------x

Sequential read:

x-----------------x---------------------------------x---------------------------------x
| IOdepth | 8 | 16 |
x-----------------x---------------------------------x---------------------------------x
| | KIOPS |BW(MiB/s) | Lat(usec) | KIOPS |BW(MiB/s) | Lat(usec) |
x-----------------x---------------------------------x---------------------------------x
| Without patch | 667 | 2605 | 11.79 | 675 | 2637 | 23.49 |
x-----------------x---------------------------------x---------------------------------x
| With patch | 667 | 2605 | 11.79 | 675 | 2638 | 23.48 |
x-----------------x---------------------------------x---------------------------------x

Random read:

x-----------------x---------------------------------x---------------------------------x
| IOdepth | 8 | 16 |
x-----------------x---------------------------------x---------------------------------x
| | KIOPS |BW(MiB/s) | Lat(usec) | KIOPS |BW(MiB/s) | Lat(usec) |
x-----------------x---------------------------------x---------------------------------x
| Without patch | 522 | 2038 | 15.05 | 514 | 2006 | 30.87 |
x-----------------x---------------------------------x---------------------------------x
| With patch | 522 | 2039 | 15.04 | 523 | 2042 | 30.33 |
x-----------------x---------------------------------x---------------------------------x

Minor variations are seen in sequential write with io depth 8 and in
random read with io depth 16, but overall no noticeable differences were
observed.

Reviewed-by: Luis Chamberlain <[email protected]>
Reviewed-by: Adam Manzanares <[email protected]>
Reviewed-by: Hannes Reinecke <[email protected]>
Signed-off-by: Pankaj Raghav <[email protected]>
---
drivers/block/null_blk/main.c | 5 ++---
drivers/block/null_blk/zoned.c | 14 +++++++-------
2 files changed, 9 insertions(+), 10 deletions(-)

diff --git a/drivers/block/null_blk/main.c b/drivers/block/null_blk/main.c
index 5cb4c92cd..ed9a58201 100644
--- a/drivers/block/null_blk/main.c
+++ b/drivers/block/null_blk/main.c
@@ -1929,9 +1929,8 @@ static int null_validate_conf(struct nullb_device *dev)
if (dev->queue_mode == NULL_Q_BIO)
dev->mbps = 0;

- if (dev->zoned &&
- (!dev->zone_size || !is_power_of_2(dev->zone_size))) {
- pr_err("zone_size must be power-of-two\n");
+ if (dev->zoned && !dev->zone_size) {
+ pr_err("zone_size must not be zero\n");
return -EINVAL;
}

diff --git a/drivers/block/null_blk/zoned.c b/drivers/block/null_blk/zoned.c
index dae54dd1a..00c34e65e 100644
--- a/drivers/block/null_blk/zoned.c
+++ b/drivers/block/null_blk/zoned.c
@@ -13,7 +13,10 @@ static inline sector_t mb_to_sects(unsigned long mb)

static inline unsigned int null_zone_no(struct nullb_device *dev, sector_t sect)
{
- return sect >> ilog2(dev->zone_size_sects);
+ if (is_power_of_2(dev->zone_size_sects))
+ return sect >> ilog2(dev->zone_size_sects);
+
+ return div64_u64(sect, dev->zone_size_sects);
}

static inline void null_lock_zone_res(struct nullb_device *dev)
@@ -62,10 +65,6 @@ int null_init_zoned_dev(struct nullb_device *dev, struct request_queue *q)
sector_t sector = 0;
unsigned int i;

- if (!is_power_of_2(dev->zone_size)) {
- pr_err("zone_size must be power-of-two\n");
- return -EINVAL;
- }
if (dev->zone_size > dev->size) {
pr_err("Zone size larger than device capacity\n");
return -EINVAL;
@@ -83,8 +82,9 @@ int null_init_zoned_dev(struct nullb_device *dev, struct request_queue *q)
zone_capacity_sects = mb_to_sects(dev->zone_capacity);
dev_capacity_sects = mb_to_sects(dev->size);
dev->zone_size_sects = mb_to_sects(dev->zone_size);
- dev->nr_zones = round_up(dev_capacity_sects, dev->zone_size_sects)
- >> ilog2(dev->zone_size_sects);
+ dev->nr_zones =
+ div64_u64(roundup(dev_capacity_sects, dev->zone_size_sects),
+ dev->zone_size_sects);

dev->zones = kvmalloc_array(dev->nr_zones, sizeof(struct nullb_zone),
GFP_KERNEL | __GFP_ZERO);
--
2.25.1


2022-05-09 11:53:02

by Pankaj Raghav

Subject: Re: [PATCH v3 00/11] support non power of 2 zoned devices

On 2022-05-06 12:00, David Sterba wrote:
>> The current approach for npo2 devices is to place the superblock mirror
>> zones near 512GB and 4TB that is **aligned to the zone size**.
>
> I don't like that, the offsets have been chosen so the values are fixed
> and also future proof in case the zone size increases significantly. The
> natural alignment of the pow2 zones makes it fairly trivial.
>
> If I understand correctly what you suggest, it would mean that if zone
> is eg. 5G and starts at 510G then the superblock should start at 510G,
> right? And with another device that has 7G zone size the nearest
> multiple is 511G. And so on.
>
> That makes it all less predictable, depending on the physical device
> constraints that are affecting the logical data structures of the
> filesystem. We tried to avoid that with pow2, the only thing that
> depends on the device is that the range from the super block offsets is
> always 2 zones.
>
> I really want to keep the offsets for all zoned devices the same and
> adapt the code that's handling the writes. This is possible with the
> non-pow2 too, the first write is set to the expected offset, leaving the
> beginning of the zone unused.
>
I agree. Having a known place for superblocks is important for recovery
tools. We were thinking along the lines of what you have suggested. I
will add this support in the next revision.
>> This
>> is of no issue for normal operation as we keep track where the superblock
>> mirror are placed but this can cause an issue with recovery tools for
>> zoned devices as they expect mirror superblock to be in 512GB and 4TB.
>
> Yeah the tools need to be updated, btrfs-progs and suite of blk* in
> util-linux.
>
>> Note that ATM, recovery tools such as `btrfs check` does not work for
>> image dumps for zoned devices even for po2 zone sizes.
>
> I thought this worked, but if you find something that does not please
> report that to Johannes or Naohiro.
Ok. Thanks.