2024-01-24 11:46:00

by John Garry

Subject: [PATCH v3 00/15] block atomic writes

This series introduces a proposal for implementing atomic writes in the
kernel for torn-write protection.

This series takes the approach of adding a new "atomic" flag to each of
pwritev2() and iocb->ki_flags - RWF_ATOMIC and IOCB_ATOMIC, respectively.
When set, these indicate that we want the write issued "atomically".

Only direct IO to block devices is supported here. For this, atomic
write HW support is required, such as SCSI WRITE ATOMIC(16).

I plan to send a series for supporting atomic writes for XFS later this
week, but initially only for XFS rtvol.

Updated man pages have been posted at:
https://lore.kernel.org/lkml/[email protected]/T/#m520dca97a9748de352b5a723d3155a4bb1e46456

The goal here is to provide an interface that allows applications to use
application-specific block sizes larger than the logical block size
reported by the storage device or larger than the filesystem block size
as reported by stat().

With this new interface, application blocks will never be torn or
fractured when written. On a power failure, for each individual
application block, either all or none of the data will be written. A
racing atomic write and read will mean that the read sees either all the
old data or all the new data, but never a mix of old and new.

Three new fields are added to struct statx - atomic_write_unit_min,
atomic_write_unit_max, and atomic_write_segments_max. For each
individual atomic write, the total length of the write must be between
atomic_write_unit_min and atomic_write_unit_max, inclusive, and a
power-of-2. The write must also be at a naturally-aligned offset in the
file with respect to the write length. For pwritev2(), iovcnt is limited
by atomic_write_segments_max.
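
As an illustrative sketch (not part of the series itself) of how an
application might use this interface - it assumes uapi headers updated
by this series for STATX_WRITE_ATOMIC, STATX_ATTR_WRITE_ATOMIC,
RWF_ATOMIC and the stx_atomic_write_* fields, and the device path is
hypothetical:

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/stat.h>
#include <sys/uio.h>
#include <unistd.h>

int main(void)
{
	struct statx stx;
	struct iovec iov;
	void *buf;
	int fd;

	fd = open("/dev/sda", O_WRONLY | O_DIRECT); /* hypothetical device */
	if (fd < 0)
		return 1;

	/* Query the new atomic write limits */
	if (statx(fd, "", AT_EMPTY_PATH, STATX_WRITE_ATOMIC, &stx) ||
	    !(stx.stx_attributes & STATX_ATTR_WRITE_ATOMIC))
		return 1;

	/* One application block of the minimum atomic size; unit_min is a
	 * power-of-2, so it also serves as an O_DIRECT buffer alignment */
	if (posix_memalign(&buf, stx.stx_atomic_write_unit_min,
			   stx.stx_atomic_write_unit_min))
		return 1;
	memset(buf, 0xab, stx.stx_atomic_write_unit_min);

	iov.iov_base = buf;
	iov.iov_len = stx.stx_atomic_write_unit_min;

	/* Single segment (stx_atomic_write_segments_max is 1 for bdevs),
	 * naturally-aligned offset 0, length a power-of-2 within limits */
	if (pwritev2(fd, &iov, 1, 0, RWF_ATOMIC) < 0)
		perror("pwritev2");

	free(buf);
	close(fd);
	return 0;
}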

SCSI (sd.c and scsi_debug) and NVMe kernel support is added.

This series is based on v6.8-rc1.

Changes since v2:
- Support atomic_write_segments_max
- Limit atomic write parameters to max_hw_sectors_kb
- Don't increase fmode_t
- Change value for RWF_ATOMIC
- Various tidying (including advised by Jan)

Changes since v1:
- Drop XFS support for now
- Tidy NVMe changes and also add checks for atomic write violating max
AW PF length and boundary (if any)
- Reject - instead of ignoring - RWF_ATOMIC for files which do not
support atomic writes
- Update block sysfs documentation
- Various tidy-ups

Alan Adamson (2):
nvme: Support atomic writes
nvme: Ensure atomic writes will be executed atomically

Himanshu Madhani (2):
block: Add atomic write operations to request_queue limits
block: Add REQ_ATOMIC flag

John Garry (9):
block: Limit atomic writes according to bio and queue limits
block: Pass blk_queue_get_max_sectors() a request pointer
block: Limit atomic write IO size according to
atomic_write_max_sectors
block: Error an attempt to split an atomic write bio
block: Add checks to merging of atomic writes
block: Add fops atomic write support
scsi: sd: Support reading atomic write properties from block limits
VPD
scsi: sd: Add WRITE_ATOMIC_16 support
scsi: scsi_debug: Atomic write support

Prasad Singamsetty (2):
fs/bdev: Add atomic write support info to statx
fs: Add RWF_ATOMIC and IOCB_ATOMIC flags for atomic write support

Documentation/ABI/stable/sysfs-block | 52 +++
block/bdev.c | 37 +-
block/blk-merge.c | 94 ++++-
block/blk-mq.c | 2 +-
block/blk-settings.c | 103 +++++
block/blk-sysfs.c | 33 ++
block/blk.h | 9 +-
block/fops.c | 44 +-
drivers/nvme/host/core.c | 71 ++++
drivers/nvme/host/nvme.h | 2 +
drivers/scsi/scsi_debug.c | 589 +++++++++++++++++++++------
drivers/scsi/scsi_trace.c | 22 +
drivers/scsi/sd.c | 93 ++++-
drivers/scsi/sd.h | 8 +
fs/stat.c | 47 ++-
include/linux/blk_types.h | 2 +
include/linux/blkdev.h | 45 +-
include/linux/fs.h | 12 +
include/linux/stat.h | 3 +
include/scsi/scsi_proto.h | 1 +
include/trace/events/scsi.h | 1 +
include/uapi/linux/fs.h | 5 +-
include/uapi/linux/stat.h | 9 +-
23 files changed, 1123 insertions(+), 161 deletions(-)

--
2.31.1



2024-01-24 11:46:13

by John Garry

Subject: [PATCH v3 03/15] fs/bdev: Add atomic write support info to statx

From: Prasad Singamsetty <[email protected]>

Extend the statx system call to return additional info for atomic write
support if the specified file is a block device.

Signed-off-by: Prasad Singamsetty <[email protected]>
Signed-off-by: John Garry <[email protected]>
---
block/bdev.c | 37 +++++++++++++++++++++---------
fs/stat.c | 47 +++++++++++++++++++++++++++++++++------
include/linux/blkdev.h | 5 +++--
include/linux/fs.h | 3 +++
include/linux/stat.h | 3 +++
include/uapi/linux/stat.h | 9 +++++++-
6 files changed, 84 insertions(+), 20 deletions(-)

diff --git a/block/bdev.c b/block/bdev.c
index e9f1b12bd75c..2185084ffc89 100644
--- a/block/bdev.c
+++ b/block/bdev.c
@@ -1116,24 +1116,41 @@ void sync_bdevs(bool wait)
iput(old_inode);
}

+#define BDEV_STATX_SUPPORTED_MASK (STATX_DIOALIGN | STATX_WRITE_ATOMIC)
+
/*
- * Handle STATX_DIOALIGN for block devices.
- *
- * Note that the inode passed to this is the inode of a block device node file,
- * not the block device's internal inode. Therefore it is *not* valid to use
- * I_BDEV() here; the block device has to be looked up by i_rdev instead.
+ * Handle STATX_{DIOALIGN, WRITE_ATOMIC} for block devices.
*/
-void bdev_statx_dioalign(struct inode *inode, struct kstat *stat)
+void bdev_statx(struct dentry *dentry, struct kstat *stat, u32 request_mask)
{
struct block_device *bdev;

- bdev = blkdev_get_no_open(inode->i_rdev);
+ if (!(request_mask & BDEV_STATX_SUPPORTED_MASK))
+ return;
+
+ /*
+ * Note that d_backing_inode() returns the inode of a block device node
+ * file, not the block device's internal inode. Therefore it is *not*
+ * valid to use I_BDEV() here; the block device has to be looked up by
+ * i_rdev instead.
+ */
+ bdev = blkdev_get_no_open(d_backing_inode(dentry)->i_rdev);
if (!bdev)
return;

- stat->dio_mem_align = bdev_dma_alignment(bdev) + 1;
- stat->dio_offset_align = bdev_logical_block_size(bdev);
- stat->result_mask |= STATX_DIOALIGN;
+ if (request_mask & STATX_DIOALIGN) {
+ stat->dio_mem_align = bdev_dma_alignment(bdev) + 1;
+ stat->dio_offset_align = bdev_logical_block_size(bdev);
+ stat->result_mask |= STATX_DIOALIGN;
+ }
+
+ if (request_mask & STATX_WRITE_ATOMIC) {
+ struct request_queue *bd_queue = bdev->bd_queue;
+
+ generic_fill_statx_atomic_writes(stat,
+ queue_atomic_write_unit_min_bytes(bd_queue),
+ queue_atomic_write_unit_max_bytes(bd_queue));
+ }

blkdev_put_no_open(bdev);
}
diff --git a/fs/stat.c b/fs/stat.c
index 77cdc69eb422..bd0618477702 100644
--- a/fs/stat.c
+++ b/fs/stat.c
@@ -89,6 +89,37 @@ void generic_fill_statx_attr(struct inode *inode, struct kstat *stat)
}
EXPORT_SYMBOL(generic_fill_statx_attr);

+/**
+ * generic_fill_statx_atomic_writes - Fill in the atomic writes statx attributes
+ * @stat: Where to fill in the attribute flags
+ * @unit_min: Minimum supported atomic write length
+ * @unit_max: Maximum supported atomic write length
+ *
+ * Fill in the STATX{_ATTR}_WRITE_ATOMIC flags in the kstat structure from
+ * atomic write unit_min and unit_max values.
+ */
+void generic_fill_statx_atomic_writes(struct kstat *stat,
+ unsigned int unit_min,
+ unsigned int unit_max)
+{
+ /* Confirm that the request type is known */
+ stat->result_mask |= STATX_WRITE_ATOMIC;
+
+ /* Confirm that the file attribute type is known */
+ stat->attributes_mask |= STATX_ATTR_WRITE_ATOMIC;
+
+ if (unit_min) {
+ stat->atomic_write_unit_min = unit_min;
+ stat->atomic_write_unit_max = unit_max;
+ /* Initially only allow 1x segment */
+ stat->atomic_write_segments_max = 1;
+
+ /* Confirm atomic writes are actually supported */
+ stat->attributes |= STATX_ATTR_WRITE_ATOMIC;
+ }
+}
+EXPORT_SYMBOL(generic_fill_statx_atomic_writes);
+
/**
* vfs_getattr_nosec - getattr without security checks
* @path: file to get attributes from
@@ -259,13 +290,12 @@ static int vfs_statx(int dfd, struct filename *filename, int flags,
stat->attributes |= STATX_ATTR_MOUNT_ROOT;
stat->attributes_mask |= STATX_ATTR_MOUNT_ROOT;

- /* Handle STATX_DIOALIGN for block devices. */
- if (request_mask & STATX_DIOALIGN) {
- struct inode *inode = d_backing_inode(path.dentry);
-
- if (S_ISBLK(inode->i_mode))
- bdev_statx_dioalign(inode, stat);
- }
+ /* If this is a block device inode, override the filesystem
+ * attributes with the block device specific parameters
+ * that need to be obtained from the bdev backing inode
+ */
+ if (S_ISBLK(d_backing_inode(path.dentry)->i_mode))
+ bdev_statx(path.dentry, stat, request_mask);

path_put(&path);
if (retry_estale(error, lookup_flags)) {
@@ -658,6 +688,9 @@ cp_statx(const struct kstat *stat, struct statx __user *buffer)
tmp.stx_mnt_id = stat->mnt_id;
tmp.stx_dio_mem_align = stat->dio_mem_align;
tmp.stx_dio_offset_align = stat->dio_offset_align;
+ tmp.stx_atomic_write_unit_min = stat->atomic_write_unit_min;
+ tmp.stx_atomic_write_unit_max = stat->atomic_write_unit_max;
+ tmp.stx_atomic_write_segments_max = stat->atomic_write_segments_max;

return copy_to_user(buffer, &tmp, sizeof(tmp)) ? -EFAULT : 0;
}
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index d5490b988918..443f08239308 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -1541,7 +1541,7 @@ int sync_blockdev(struct block_device *bdev);
int sync_blockdev_range(struct block_device *bdev, loff_t lstart, loff_t lend);
int sync_blockdev_nowait(struct block_device *bdev);
void sync_bdevs(bool wait);
-void bdev_statx_dioalign(struct inode *inode, struct kstat *stat);
+void bdev_statx(struct dentry *dentry, struct kstat *stat, u32 request_mask);
void printk_all_partitions(void);
int __init early_lookup_bdev(const char *pathname, dev_t *dev);
#else
@@ -1559,7 +1559,8 @@ static inline int sync_blockdev_nowait(struct block_device *bdev)
static inline void sync_bdevs(bool wait)
{
}
-static inline void bdev_statx_dioalign(struct inode *inode, struct kstat *stat)
+static inline void bdev_statx(struct dentry *dentry, struct kstat *stat,
+ u32 request_mask)
{
}
static inline void printk_all_partitions(void)
diff --git a/include/linux/fs.h b/include/linux/fs.h
index ed5966a70495..c316c0a92fff 100644
--- a/include/linux/fs.h
+++ b/include/linux/fs.h
@@ -3164,6 +3164,9 @@ extern const struct inode_operations page_symlink_inode_operations;
extern void kfree_link(void *);
void generic_fillattr(struct mnt_idmap *, u32, struct inode *, struct kstat *);
void generic_fill_statx_attr(struct inode *inode, struct kstat *stat);
+void generic_fill_statx_atomic_writes(struct kstat *stat,
+ unsigned int unit_min,
+ unsigned int unit_max);
extern int vfs_getattr_nosec(const struct path *, struct kstat *, u32, unsigned int);
extern int vfs_getattr(const struct path *, struct kstat *, u32, unsigned int);
void __inode_add_bytes(struct inode *inode, loff_t bytes);
diff --git a/include/linux/stat.h b/include/linux/stat.h
index 52150570d37a..2c5e2b8c6559 100644
--- a/include/linux/stat.h
+++ b/include/linux/stat.h
@@ -53,6 +53,9 @@ struct kstat {
u32 dio_mem_align;
u32 dio_offset_align;
u64 change_cookie;
+ u32 atomic_write_unit_min;
+ u32 atomic_write_unit_max;
+ u32 atomic_write_segments_max;
};

/* These definitions are internal to the kernel for now. Mainly used by nfsd. */
diff --git a/include/uapi/linux/stat.h b/include/uapi/linux/stat.h
index 2f2ee82d5517..c0e8e10d1de6 100644
--- a/include/uapi/linux/stat.h
+++ b/include/uapi/linux/stat.h
@@ -127,7 +127,12 @@ struct statx {
__u32 stx_dio_mem_align; /* Memory buffer alignment for direct I/O */
__u32 stx_dio_offset_align; /* File offset alignment for direct I/O */
/* 0xa0 */
- __u64 __spare3[12]; /* Spare space for future expansion */
+ __u32 stx_atomic_write_unit_min;
+ __u32 stx_atomic_write_unit_max;
+ __u32 stx_atomic_write_segments_max;
+ __u32 __spare1;
+ /* 0xb0 */
+ __u64 __spare3[10]; /* Spare space for future expansion */
/* 0x100 */
};

@@ -155,6 +160,7 @@ struct statx {
#define STATX_MNT_ID 0x00001000U /* Got stx_mnt_id */
#define STATX_DIOALIGN 0x00002000U /* Want/got direct I/O alignment info */
#define STATX_MNT_ID_UNIQUE 0x00004000U /* Want/got extended stx_mount_id */
+#define STATX_WRITE_ATOMIC 0x00008000U /* Want/got atomic_write_* fields */

#define STATX__RESERVED 0x80000000U /* Reserved for future struct statx expansion */

@@ -190,6 +196,7 @@ struct statx {
#define STATX_ATTR_MOUNT_ROOT 0x00002000 /* Root of a mount */
#define STATX_ATTR_VERITY 0x00100000 /* [I] Verity protected file */
#define STATX_ATTR_DAX 0x00200000 /* File is currently in DAX state */
+#define STATX_ATTR_WRITE_ATOMIC 0x00400000 /* File supports atomic write operations */


#endif /* _UAPI_LINUX_STAT_H */
--
2.31.1


2024-01-24 11:47:07

by John Garry

Subject: [PATCH v3 14/15] nvme: Support atomic writes

From: Alan Adamson <[email protected]>

Support reading atomic write registers to fill in request_queue
properties.

Use the following method to calculate limits:
atomic_write_max_bytes = flp2(NAWUPF ?: AWUPF)
atomic_write_unit_min = logical_block_size
atomic_write_unit_max = flp2(NAWUPF ?: AWUPF)
atomic_write_boundary = NABSPF
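
For illustration (values are hypothetical, not from the patch), consider
a namespace with a 4096 B logical block size and NAWUPF = 7, i.e. up to
8 blocks written atomically. flp2() below rounds down to a power-of-2,
matching the kernel's rounddown_pow_of_two():

#include <stdio.h>

/* flp2(): round down to the nearest power-of-2 */
static unsigned int flp2(unsigned int x)
{
	unsigned int p = 1;

	while (p * 2 <= x)
		p *= 2;
	return p;
}

int main(void)
{
	unsigned int bs = 4096, nawupf = 7, nabspf = 0;
	/* 0's based field, so add 1: 8 blocks * 4096 B = 32768 B */
	unsigned int atomic_bs = (nawupf + 1) * bs;
	/* boundary = (NABSPF + 1) * bs when NABSPF != 0, else none */
	unsigned int boundary = nabspf ? (nabspf + 1) * bs : 0;

	printf("atomic_write_max_bytes = %u\n", flp2(atomic_bs)); /* 32768 */
	printf("atomic_write_unit_min  = %u\n", bs);              /* 4096 */
	printf("atomic_write_unit_max  = %u\n", flp2(atomic_bs)); /* 32768 */
	printf("atomic_write_boundary  = %u\n", boundary);        /* 0 */
	return 0;
}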

Signed-off-by: Alan Adamson <[email protected]>
#jpg: some rewrite
Signed-off-by: John Garry <[email protected]>
---
drivers/nvme/host/core.c | 40 ++++++++++++++++++++++++++++++++++++++++
1 file changed, 40 insertions(+)

diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index 85ab0fcf9e88..5045c84f2516 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -1911,6 +1911,44 @@ static void nvme_set_queue_limits(struct nvme_ctrl *ctrl,
blk_queue_write_cache(q, vwc, vwc);
}

+static void nvme_update_atomic_write_disk_info(struct gendisk *disk,
+ struct nvme_ctrl *ctrl, struct nvme_id_ns *id, u32 bs, u32 atomic_bs)
+{
+ unsigned int unit_min = 0, unit_max = 0, boundary = 0, max_bytes = 0;
+ struct request_queue *q = disk->queue;
+
+ if (id->nsfeat & NVME_NS_FEAT_ATOMICS && id->nawupf) {
+ if (le16_to_cpu(id->nabspf))
+ boundary = (le16_to_cpu(id->nabspf) + 1) * bs;
+
+ /*
+ * The boundary size just needs to be a multiple of unit_max
+ * (and not necessarily a power-of-2), so this could be relaxed
+ * in the block layer in future.
+ * Furthermore, if needed, unit_max could be reduced so that the
+ * boundary size was compliant - but don't support yet.
+ */
+ if (!boundary || is_power_of_2(boundary)) {
+ max_bytes = atomic_bs;
+ unit_min = bs;
+ unit_max = rounddown_pow_of_two(atomic_bs);
+ } else {
+ dev_notice(ctrl->device, "Unsupported atomic write boundary (%d)\n",
+ boundary);
+ boundary = 0;
+ }
+ } else if (ctrl->subsys->awupf) {
+ max_bytes = atomic_bs;
+ unit_min = bs;
+ unit_max = rounddown_pow_of_two(atomic_bs);
+ }
+
+ blk_queue_atomic_write_max_bytes(q, max_bytes);
+ blk_queue_atomic_write_unit_min_sectors(q, unit_min >> SECTOR_SHIFT);
+ blk_queue_atomic_write_unit_max_sectors(q, unit_max >> SECTOR_SHIFT);
+ blk_queue_atomic_write_boundary_bytes(q, boundary);
+}
+
static void nvme_update_disk_info(struct nvme_ctrl *ctrl, struct gendisk *disk,
struct nvme_ns_head *head, struct nvme_id_ns *id)
{
@@ -1941,6 +1979,8 @@ static void nvme_update_disk_info(struct nvme_ctrl *ctrl, struct gendisk *disk,
atomic_bs = (1 + le16_to_cpu(id->nawupf)) * bs;
else
atomic_bs = (1 + ctrl->subsys->awupf) * bs;
+
+ nvme_update_atomic_write_disk_info(disk, ctrl, id, bs, atomic_bs);
}

if (id->nsfeat & NVME_NS_FEAT_IO_OPT) {
--
2.31.1


2024-01-24 11:47:37

by John Garry

Subject: [PATCH v3 15/15] nvme: Ensure atomic writes will be executed atomically

From: Alan Adamson <[email protected]>

There is no dedicated NVMe atomic write command, i.e. one which would
error for a command which exceeds the controller atomic write limits.

As an insurance policy against the block layer sending requests which
cannot be executed atomically, add a check in the queue path.
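
For reference, here is the boundary-crossing test from the check below,
pulled out as a standalone sketch (illustrative only), with a couple of
worked cases for a 16 KiB boundary:

#include <stdbool.h>
#include <stdio.h>

/*
 * A request may not straddle an atomic write boundary of boundary_bytes
 * (a power-of-2). Crossing is allowed only if the request ends exactly
 * on a boundary.
 */
static bool crosses_boundary(unsigned int start, unsigned int len,
			     unsigned int boundary_bytes)
{
	unsigned int mask = boundary_bytes - 1, imask = ~mask;
	unsigned int end = start + len;

	return ((start & imask) != (end & imask)) && (end & mask);
}

int main(void)
{
	/* 8 KiB at offset 12 KiB spans the 16 KiB boundary: rejected */
	printf("%d\n", crosses_boundary(12288, 8192, 16384)); /* 1 */
	/* 8 KiB at offset 16 KiB sits inside one window: allowed */
	printf("%d\n", crosses_boundary(16384, 8192, 16384)); /* 0 */
	/* 8 KiB at offset 8 KiB ends exactly on the boundary: allowed */
	printf("%d\n", crosses_boundary(8192, 8192, 16384));  /* 0 */
	return 0;
}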

Signed-off-by: Alan Adamson <[email protected]>
Signed-off-by: John Garry <[email protected]>
---
drivers/nvme/host/core.c | 35 +++++++++++++++++++++++++++++++++--
drivers/nvme/host/nvme.h | 2 ++
2 files changed, 35 insertions(+), 2 deletions(-)

diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index 5045c84f2516..6a34a5d92088 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -911,6 +911,32 @@ static inline blk_status_t nvme_setup_rw(struct nvme_ns *ns,
if (req->cmd_flags & REQ_RAHEAD)
dsmgmt |= NVME_RW_DSM_FREQ_PREFETCH;

+ /*
+ * Ensure that nothing has been sent which cannot be executed
+ * atomically.
+ */
+ if (req->cmd_flags & REQ_ATOMIC) {
+ struct nvme_ns_head *head = ns->head;
+ u32 boundary_bytes = head->atomic_boundary;
+
+ if (blk_rq_bytes(req) > ns->head->atomic_max)
+ return BLK_STS_IOERR;
+
+ if (boundary_bytes) {
+ u32 mask = boundary_bytes - 1, imask = ~mask;
+ u32 start = blk_rq_pos(req) << SECTOR_SHIFT;
+ u32 end = start + blk_rq_bytes(req);
+
+ if (blk_rq_bytes(req) > boundary_bytes)
+ return BLK_STS_IOERR;
+
+ if (((start & imask) != (end & imask)) &&
+ (end & mask)) {
+ return BLK_STS_IOERR;
+ }
+ }
+ }
+
cmnd->rw.opcode = op;
cmnd->rw.flags = 0;
cmnd->rw.nsid = cpu_to_le32(ns->head->ns_id);
@@ -1912,7 +1938,8 @@ static void nvme_set_queue_limits(struct nvme_ctrl *ctrl,
}

static void nvme_update_atomic_write_disk_info(struct gendisk *disk,
- struct nvme_ctrl *ctrl, struct nvme_id_ns *id, u32 bs, u32 atomic_bs)
+ struct nvme_ctrl *ctrl, struct nvme_ns_head *head,
+ struct nvme_id_ns *id, u32 bs, u32 atomic_bs)
{
unsigned int unit_min = 0, unit_max = 0, boundary = 0, max_bytes = 0;
struct request_queue *q = disk->queue;
@@ -1943,6 +1970,9 @@ static void nvme_update_atomic_write_disk_info(struct gendisk *disk,
unit_max = rounddown_pow_of_two(atomic_bs);
}

+ head->atomic_max = max_bytes;
+ head->atomic_boundary = boundary;
+
blk_queue_atomic_write_max_bytes(q, max_bytes);
blk_queue_atomic_write_unit_min_sectors(q, unit_min >> SECTOR_SHIFT);
blk_queue_atomic_write_unit_max_sectors(q, unit_max >> SECTOR_SHIFT);
@@ -1980,7 +2010,8 @@ static void nvme_update_disk_info(struct nvme_ctrl *ctrl, struct gendisk *disk,
else
atomic_bs = (1 + ctrl->subsys->awupf) * bs;

- nvme_update_atomic_write_disk_info(disk, ctrl, id, bs, atomic_bs);
+ nvme_update_atomic_write_disk_info(disk, ctrl, head, id,
+ bs, atomic_bs);
}

if (id->nsfeat & NVME_NS_FEAT_IO_OPT) {
diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
index 030c80818240..938060c85e6f 100644
--- a/drivers/nvme/host/nvme.h
+++ b/drivers/nvme/host/nvme.h
@@ -476,6 +476,8 @@ struct nvme_ns_head {
struct device cdev_device;

struct gendisk *disk;
+ u32 atomic_max;
+ u32 atomic_boundary;
#ifdef CONFIG_NVME_MULTIPATH
struct bio_list requeue_list;
spinlock_t requeue_lock;
--
2.31.1


2024-01-24 11:57:26

by John Garry

Subject: [PATCH v3 13/15] scsi: scsi_debug: Atomic write support

Add initial support for atomic writes.

As is the standard method, feed device properties via module params,
those being:
- atomic_wr_max_length
- atomic_wr_align
- atomic_wr_gran
- atomic_wr_max_length_bndry
- atomic_wr_max_bndry

These just match sbc4r22 section 6.6.4 - Block limits VPD page.

We only support WRITE ATOMIC(16).

The major change in the driver is how we lock the device for RW accesses.

Currently the driver uses a per-device lock for accessing device metadata
and "media" data (calls to do_device_access()) atomically for the duration
of the whole read/write command.

This does not suit verifying atomic writes: since currently all
reads/writes are atomic, using atomic writes would not prove anything.

Change the device access model so that regular writes are only atomic on
a per-sector basis, while reads and atomic writes are fully atomic.

As mentioned, since accessing metadata and device media must be atomic,
continue to treat regular writes which involve metadata - like discard or
PI - as atomic. We can improve this later.

Currently we only support the model where an atomic write waits for any
overlapping ongoing reads or writes to complete before commencing. This
is described in section 4.29.3.2 of the SBC. However, we simplify things
and wait for all ongoing accesses to complete (when issuing an atomic
write). A standalone sketch of this locking model follows.
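
As an illustrative sketch of the model (userspace pthreads for
illustration only; the driver uses kernel rwlocks): an atomic write
takes the per-store data lock as a writer, everything else takes it as
a reader, and regular writes additionally serialize per sector:

#include <pthread.h>
#include <stdbool.h>

static pthread_rwlock_t macc_data_lck = PTHREAD_RWLOCK_INITIALIZER;
static pthread_rwlock_t macc_sector_lck = PTHREAD_RWLOCK_INITIALIZER;

/* Whole-store lock: 1x atomic write excludes all other accesses */
static void data_lock(bool atomic_write)
{
	if (atomic_write)
		pthread_rwlock_wrlock(&macc_data_lck);
	else
		pthread_rwlock_rdlock(&macc_data_lck);
}

static void data_unlock(void)
{
	pthread_rwlock_unlock(&macc_data_lck);
}

/* Per-sector copy lock: many readers, 1x writer at a time */
static void sector_lock(bool do_write)
{
	if (do_write)
		pthread_rwlock_wrlock(&macc_sector_lck);
	else
		pthread_rwlock_rdlock(&macc_sector_lck);
}

static void sector_unlock(void)
{
	pthread_rwlock_unlock(&macc_sector_lck);
}

A regular write can therefore interleave with another regular write at
sector granularity - exactly the tearing that an atomic write, holding
the data lock as a writer for the whole range, can never observe.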

Signed-off-by: John Garry <[email protected]>
---
drivers/scsi/scsi_debug.c | 589 +++++++++++++++++++++++++++++---------
1 file changed, 456 insertions(+), 133 deletions(-)

diff --git a/drivers/scsi/scsi_debug.c b/drivers/scsi/scsi_debug.c
index 9070c0dc05ef..9568bcbf0821 100644
--- a/drivers/scsi/scsi_debug.c
+++ b/drivers/scsi/scsi_debug.c
@@ -68,6 +68,8 @@ static const char *sdebug_version_date = "20210520";

/* Additional Sense Code (ASC) */
#define NO_ADDITIONAL_SENSE 0x0
+#define OVERLAP_ATOMIC_COMMAND_ASC 0x0
+#define OVERLAP_ATOMIC_COMMAND_ASCQ 0x23
#define LOGICAL_UNIT_NOT_READY 0x4
#define LOGICAL_UNIT_COMMUNICATION_FAILURE 0x8
#define UNRECOVERED_READ_ERR 0x11
@@ -102,6 +104,7 @@ static const char *sdebug_version_date = "20210520";
#define READ_BOUNDARY_ASCQ 0x7
#define ATTEMPT_ACCESS_GAP 0x9
#define INSUFF_ZONE_ASCQ 0xe
+/* see drivers/scsi/sense_codes.h */

/* Additional Sense Code Qualifier (ASCQ) */
#define ACK_NAK_TO 0x3
@@ -151,6 +154,12 @@ static const char *sdebug_version_date = "20210520";
#define DEF_VIRTUAL_GB 0
#define DEF_VPD_USE_HOSTNO 1
#define DEF_WRITESAME_LENGTH 0xFFFF
+#define DEF_ATOMIC_WR 0
+#define DEF_ATOMIC_WR_MAX_LENGTH 8192
+#define DEF_ATOMIC_WR_ALIGN 2
+#define DEF_ATOMIC_WR_GRAN 2
+#define DEF_ATOMIC_WR_MAX_LENGTH_BNDRY (DEF_ATOMIC_WR_MAX_LENGTH)
+#define DEF_ATOMIC_WR_MAX_BNDRY 128
#define DEF_STRICT 0
#define DEF_STATISTICS false
#define DEF_SUBMIT_QUEUES 1
@@ -373,7 +382,9 @@ struct sdebug_host_info {

/* There is an xarray of pointers to this struct's objects, one per host */
struct sdeb_store_info {
- rwlock_t macc_lck; /* for atomic media access on this store */
+ rwlock_t macc_data_lck; /* for media data access on this store */
+ rwlock_t macc_meta_lck; /* for atomic media meta access on this store */
+ rwlock_t macc_sector_lck; /* per-sector media data access on this store */
u8 *storep; /* user data storage (ram) */
struct t10_pi_tuple *dif_storep; /* protection info */
void *map_storep; /* provisioning map */
@@ -397,12 +408,20 @@ struct sdebug_defer {
enum sdeb_defer_type defer_t;
};

+struct sdebug_device_access_info {
+ bool atomic_write;
+ u64 lba;
+ u32 num;
+ struct scsi_cmnd *self;
+};
+
struct sdebug_queued_cmd {
/* corresponding bit set in in_use_bm[] in owning struct sdebug_queue
* instance indicates this slot is in use.
*/
struct sdebug_defer sd_dp;
struct scsi_cmnd *scmd;
+ struct sdebug_device_access_info *i;
};

struct sdebug_scsi_cmd {
@@ -462,7 +481,8 @@ enum sdeb_opcode_index {
SDEB_I_PRE_FETCH = 29, /* 10, 16 */
SDEB_I_ZONE_OUT = 30, /* 0x94+SA; includes no data xfer */
SDEB_I_ZONE_IN = 31, /* 0x95+SA; all have data-in */
- SDEB_I_LAST_ELEM_P1 = 32, /* keep this last (previous + 1) */
+ SDEB_I_ATOMIC_WRITE_16 = 32,
+ SDEB_I_LAST_ELEM_P1 = 33, /* keep this last (previous + 1) */
};


@@ -496,7 +516,8 @@ static const unsigned char opcode_ind_arr[256] = {
0, 0, 0, SDEB_I_VERIFY,
SDEB_I_PRE_FETCH, SDEB_I_SYNC_CACHE, 0, SDEB_I_WRITE_SAME,
SDEB_I_ZONE_OUT, SDEB_I_ZONE_IN, 0, 0,
- 0, 0, 0, 0, 0, 0, SDEB_I_SERV_ACT_IN_16, SDEB_I_SERV_ACT_OUT_16,
+ 0, 0, 0, 0,
+ SDEB_I_ATOMIC_WRITE_16, 0, SDEB_I_SERV_ACT_IN_16, SDEB_I_SERV_ACT_OUT_16,
/* 0xa0; 0xa0->0xbf: 12 byte cdbs */
SDEB_I_REPORT_LUNS, SDEB_I_ATA_PT, 0, SDEB_I_MAINT_IN,
SDEB_I_MAINT_OUT, 0, 0, 0,
@@ -544,6 +565,7 @@ static int resp_write_buffer(struct scsi_cmnd *, struct sdebug_dev_info *);
static int resp_sync_cache(struct scsi_cmnd *, struct sdebug_dev_info *);
static int resp_pre_fetch(struct scsi_cmnd *, struct sdebug_dev_info *);
static int resp_report_zones(struct scsi_cmnd *, struct sdebug_dev_info *);
+static int resp_atomic_write(struct scsi_cmnd *, struct sdebug_dev_info *);
static int resp_open_zone(struct scsi_cmnd *, struct sdebug_dev_info *);
static int resp_close_zone(struct scsi_cmnd *, struct sdebug_dev_info *);
static int resp_finish_zone(struct scsi_cmnd *, struct sdebug_dev_info *);
@@ -782,6 +804,11 @@ static const struct opcode_info_t opcode_info_arr[SDEB_I_LAST_ELEM_P1 + 1] = {
resp_report_zones, zone_in_iarr, /* ZONE_IN(16), REPORT ZONES) */
{16, 0x0 /* SA */, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xbf, 0xc7} },
+/* 31 */
+ {0, 0x0, 0x0, F_D_OUT | FF_MEDIA_IO,
+ resp_atomic_write, NULL, /* ATOMIC WRITE 16 */
+ {16, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+ 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff} },
/* sentinel */
{0xff, 0, 0, 0, NULL, NULL, /* terminating element */
{0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0} },
@@ -829,6 +856,13 @@ static unsigned int sdebug_unmap_granularity = DEF_UNMAP_GRANULARITY;
static unsigned int sdebug_unmap_max_blocks = DEF_UNMAP_MAX_BLOCKS;
static unsigned int sdebug_unmap_max_desc = DEF_UNMAP_MAX_DESC;
static unsigned int sdebug_write_same_length = DEF_WRITESAME_LENGTH;
+static unsigned int sdebug_atomic_wr = DEF_ATOMIC_WR;
+static unsigned int sdebug_atomic_wr_max_length = DEF_ATOMIC_WR_MAX_LENGTH;
+static unsigned int sdebug_atomic_wr_align = DEF_ATOMIC_WR_ALIGN;
+static unsigned int sdebug_atomic_wr_gran = DEF_ATOMIC_WR_GRAN;
+static unsigned int sdebug_atomic_wr_max_length_bndry =
+ DEF_ATOMIC_WR_MAX_LENGTH_BNDRY;
+static unsigned int sdebug_atomic_wr_max_bndry = DEF_ATOMIC_WR_MAX_BNDRY;
static int sdebug_uuid_ctl = DEF_UUID_CTL;
static bool sdebug_random = DEF_RANDOM;
static bool sdebug_per_host_store = DEF_PER_HOST_STORE;
@@ -1180,6 +1214,11 @@ static inline bool scsi_debug_lbp(void)
(sdebug_lbpu || sdebug_lbpws || sdebug_lbpws10);
}

+static inline bool scsi_debug_atomic_write(void)
+{
+ return 0 == sdebug_fake_rw && sdebug_atomic_wr;
+}
+
static void *lba2fake_store(struct sdeb_store_info *sip,
unsigned long long lba)
{
@@ -1807,6 +1846,14 @@ static int inquiry_vpd_b0(unsigned char *arr)
/* Maximum WRITE SAME Length */
put_unaligned_be64(sdebug_write_same_length, &arr[32]);

+ if (sdebug_atomic_wr) {
+ put_unaligned_be32(sdebug_atomic_wr_max_length, &arr[40]);
+ put_unaligned_be32(sdebug_atomic_wr_align, &arr[44]);
+ put_unaligned_be32(sdebug_atomic_wr_gran, &arr[48]);
+ put_unaligned_be32(sdebug_atomic_wr_max_length_bndry, &arr[52]);
+ put_unaligned_be32(sdebug_atomic_wr_max_bndry, &arr[56]);
+ }
+
return 0x3c; /* Mandatory page length for Logical Block Provisioning */
}

@@ -3304,15 +3351,239 @@ static inline struct sdeb_store_info *devip2sip(struct sdebug_dev_info *devip,
return xa_load(per_store_ap, devip->sdbg_host->si_idx);
}

+
+static inline void
+sdeb_read_lock(rwlock_t *lock)
+{
+ if (sdebug_no_rwlock)
+ __acquire(lock);
+ else
+ read_lock(lock);
+}
+
+static inline void
+sdeb_read_unlock(rwlock_t *lock)
+{
+ if (sdebug_no_rwlock)
+ __release(lock);
+ else
+ read_unlock(lock);
+}
+
+static inline void
+sdeb_write_lock(rwlock_t *lock)
+{
+ if (sdebug_no_rwlock)
+ __acquire(lock);
+ else
+ write_lock(lock);
+}
+
+static inline void
+sdeb_write_unlock(rwlock_t *lock)
+{
+ if (sdebug_no_rwlock)
+ __release(lock);
+ else
+ write_unlock(lock);
+}
+
+static inline void
+sdeb_data_read_lock(struct sdeb_store_info *sip)
+{
+ BUG_ON(!sip);
+
+ sdeb_read_lock(&sip->macc_data_lck);
+}
+
+static inline void
+sdeb_data_read_unlock(struct sdeb_store_info *sip)
+{
+ BUG_ON(!sip);
+
+ sdeb_read_unlock(&sip->macc_data_lck);
+}
+
+static inline void
+sdeb_data_write_lock(struct sdeb_store_info *sip)
+{
+ BUG_ON(!sip);
+
+ sdeb_write_lock(&sip->macc_data_lck);
+}
+
+static inline void
+sdeb_data_write_unlock(struct sdeb_store_info *sip)
+{
+ BUG_ON(!sip);
+
+ sdeb_write_unlock(&sip->macc_data_lck);
+}
+
+static inline void
+sdeb_data_sector_read_lock(struct sdeb_store_info *sip)
+{
+ BUG_ON(!sip);
+
+ sdeb_read_lock(&sip->macc_sector_lck);
+}
+
+static inline void
+sdeb_data_sector_read_unlock(struct sdeb_store_info *sip)
+{
+ BUG_ON(!sip);
+
+ sdeb_read_unlock(&sip->macc_sector_lck);
+}
+
+static inline void
+sdeb_data_sector_write_lock(struct sdeb_store_info *sip)
+{
+ BUG_ON(!sip);
+
+ sdeb_write_lock(&sip->macc_sector_lck);
+}
+
+static inline void
+sdeb_data_sector_write_unlock(struct sdeb_store_info *sip)
+{
+ BUG_ON(!sip);
+
+ sdeb_write_unlock(&sip->macc_sector_lck);
+}
+
+/*
+ * Atomic locking:
+ * We simplify the atomic model to allow only 1x atomic write and many non-
+ * atomic reads or writes for all LBAs.
+ *
+ * A RW lock has a similar behaviour:
+ * Only 1x writer and many readers.
+ *
+ * So use a RW lock for per-device read and write locking:
+ * An atomic access grabs the lock as a writer and non-atomic grabs the lock
+ * as a reader.
+ */
+
+static inline void
+sdeb_data_lock(struct sdeb_store_info *sip, bool atomic_write)
+{
+ if (atomic_write)
+ sdeb_data_write_lock(sip);
+ else
+ sdeb_data_read_lock(sip);
+}
+
+static inline void
+sdeb_data_unlock(struct sdeb_store_info *sip, bool atomic_write)
+{
+ if (atomic_write)
+ sdeb_data_write_unlock(sip);
+ else
+ sdeb_data_read_unlock(sip);
+}
+
+/* Allow many reads but only 1x write per sector */
+static inline void
+sdeb_data_sector_lock(struct sdeb_store_info *sip, bool do_write)
+{
+ if (do_write)
+ sdeb_data_sector_write_lock(sip);
+ else
+ sdeb_data_sector_read_lock(sip);
+}
+
+static inline void
+sdeb_data_sector_unlock(struct sdeb_store_info *sip, bool do_write)
+{
+ if (do_write)
+ sdeb_data_sector_write_unlock(sip);
+ else
+ sdeb_data_sector_read_unlock(sip);
+}
+
+static inline void
+sdeb_meta_read_lock(struct sdeb_store_info *sip)
+{
+ if (sdebug_no_rwlock) {
+ if (sip)
+ __acquire(&sip->macc_meta_lck);
+ else
+ __acquire(&sdeb_fake_rw_lck);
+ } else {
+ if (sip)
+ read_lock(&sip->macc_meta_lck);
+ else
+ read_lock(&sdeb_fake_rw_lck);
+ }
+}
+
+static inline void
+sdeb_meta_read_unlock(struct sdeb_store_info *sip)
+{
+ if (sdebug_no_rwlock) {
+ if (sip)
+ __release(&sip->macc_meta_lck);
+ else
+ __release(&sdeb_fake_rw_lck);
+ } else {
+ if (sip)
+ read_unlock(&sip->macc_meta_lck);
+ else
+ read_unlock(&sdeb_fake_rw_lck);
+ }
+}
+
+static inline void
+sdeb_meta_write_lock(struct sdeb_store_info *sip)
+{
+ if (sdebug_no_rwlock) {
+ if (sip)
+ __acquire(&sip->macc_meta_lck);
+ else
+ __acquire(&sdeb_fake_rw_lck);
+ } else {
+ if (sip)
+ write_lock(&sip->macc_meta_lck);
+ else
+ write_lock(&sdeb_fake_rw_lck);
+ }
+}
+
+static inline void
+sdeb_meta_write_unlock(struct sdeb_store_info *sip)
+{
+ if (sdebug_no_rwlock) {
+ if (sip)
+ __release(&sip->macc_meta_lck);
+ else
+ __release(&sdeb_fake_rw_lck);
+ } else {
+ if (sip)
+ write_unlock(&sip->macc_meta_lck);
+ else
+ write_unlock(&sdeb_fake_rw_lck);
+ }
+}
+
/* Returns number of bytes copied or -1 if error. */
static int do_device_access(struct sdeb_store_info *sip, struct scsi_cmnd *scp,
- u32 sg_skip, u64 lba, u32 num, bool do_write)
+ u32 sg_skip, u64 lba, u32 num, bool do_write,
+ bool atomic_write)
{
int ret;
- u64 block, rest = 0;
+ u64 block;
enum dma_data_direction dir;
struct scsi_data_buffer *sdb = &scp->sdb;
u8 *fsp;
+ int i;
+
+ /*
+ * Even though reads are inherently atomic (in this driver), we expect
+ * the atomic flag only for writes.
+ */
+ if (!do_write && atomic_write)
+ return -1;

if (do_write) {
dir = DMA_TO_DEVICE;
@@ -3328,21 +3599,26 @@ static int do_device_access(struct sdeb_store_info *sip, struct scsi_cmnd *scp,
fsp = sip->storep;

block = do_div(lba, sdebug_store_sectors);
- if (block + num > sdebug_store_sectors)
- rest = block + num - sdebug_store_sectors;

- ret = sg_copy_buffer(sdb->table.sgl, sdb->table.nents,
+ /* Only allow 1x atomic write or multiple non-atomic writes at any given time */
+ sdeb_data_lock(sip, atomic_write);
+ for (i = 0; i < num; i++) {
+ /* We shouldn't need to lock for atomic writes, but do it anyway */
+ sdeb_data_sector_lock(sip, do_write);
+ ret = sg_copy_buffer(sdb->table.sgl, sdb->table.nents,
fsp + (block * sdebug_sector_size),
- (num - rest) * sdebug_sector_size, sg_skip, do_write);
- if (ret != (num - rest) * sdebug_sector_size)
- return ret;
-
- if (rest) {
- ret += sg_copy_buffer(sdb->table.sgl, sdb->table.nents,
- fsp, rest * sdebug_sector_size,
- sg_skip + ((num - rest) * sdebug_sector_size),
- do_write);
+ sdebug_sector_size, sg_skip, do_write);
+ sdeb_data_sector_unlock(sip, do_write);
+ if (ret != sdebug_sector_size) {
+ ret += (i * sdebug_sector_size);
+ break;
+ }
+ sg_skip += sdebug_sector_size;
+ if (++block >= sdebug_store_sectors)
+ block = 0;
}
+ /* Only report the full length if no per-sector copy failed */
+ if (i == num)
+ ret = num * sdebug_sector_size;
+ sdeb_data_unlock(sip, atomic_write);

return ret;
}
@@ -3518,70 +3794,6 @@ static int prot_verify_read(struct scsi_cmnd *scp, sector_t start_sec,
return ret;
}

-static inline void
-sdeb_read_lock(struct sdeb_store_info *sip)
-{
- if (sdebug_no_rwlock) {
- if (sip)
- __acquire(&sip->macc_lck);
- else
- __acquire(&sdeb_fake_rw_lck);
- } else {
- if (sip)
- read_lock(&sip->macc_lck);
- else
- read_lock(&sdeb_fake_rw_lck);
- }
-}
-
-static inline void
-sdeb_read_unlock(struct sdeb_store_info *sip)
-{
- if (sdebug_no_rwlock) {
- if (sip)
- __release(&sip->macc_lck);
- else
- __release(&sdeb_fake_rw_lck);
- } else {
- if (sip)
- read_unlock(&sip->macc_lck);
- else
- read_unlock(&sdeb_fake_rw_lck);
- }
-}
-
-static inline void
-sdeb_write_lock(struct sdeb_store_info *sip)
-{
- if (sdebug_no_rwlock) {
- if (sip)
- __acquire(&sip->macc_lck);
- else
- __acquire(&sdeb_fake_rw_lck);
- } else {
- if (sip)
- write_lock(&sip->macc_lck);
- else
- write_lock(&sdeb_fake_rw_lck);
- }
-}
-
-static inline void
-sdeb_write_unlock(struct sdeb_store_info *sip)
-{
- if (sdebug_no_rwlock) {
- if (sip)
- __release(&sip->macc_lck);
- else
- __release(&sdeb_fake_rw_lck);
- } else {
- if (sip)
- write_unlock(&sip->macc_lck);
- else
- write_unlock(&sdeb_fake_rw_lck);
- }
-}
-
static int resp_read_dt0(struct scsi_cmnd *scp, struct sdebug_dev_info *devip)
{
bool check_prot;
@@ -3591,6 +3803,7 @@ static int resp_read_dt0(struct scsi_cmnd *scp, struct sdebug_dev_info *devip)
u64 lba;
struct sdeb_store_info *sip = devip2sip(devip, true);
u8 *cmd = scp->cmnd;
+ bool meta_data_locked = false;

switch (cmd[0]) {
case READ_16:
@@ -3649,6 +3862,10 @@ static int resp_read_dt0(struct scsi_cmnd *scp, struct sdebug_dev_info *devip)
atomic_set(&sdeb_inject_pending, 0);
}

+ /*
+ * When checking device access params, for reads we only check data
+ * versus what is set at init time, so no need to lock.
+ */
ret = check_device_access_params(scp, lba, num, false);
if (ret)
return ret;
@@ -3668,29 +3885,33 @@ static int resp_read_dt0(struct scsi_cmnd *scp, struct sdebug_dev_info *devip)
return check_condition_result;
}

- sdeb_read_lock(sip);
+ if (sdebug_dev_is_zoned(devip) ||
+ (sdebug_dix && scsi_prot_sg_count(scp))) {
+ sdeb_meta_read_lock(sip);
+ meta_data_locked = true;
+ }

/* DIX + T10 DIF */
if (unlikely(sdebug_dix && scsi_prot_sg_count(scp))) {
switch (prot_verify_read(scp, lba, num, ei_lba)) {
case 1: /* Guard tag error */
if (cmd[1] >> 5 != 3) { /* RDPROTECT != 3 */
- sdeb_read_unlock(sip);
+ sdeb_meta_read_unlock(sip);
mk_sense_buffer(scp, ABORTED_COMMAND, 0x10, 1);
return check_condition_result;
} else if (scp->prot_flags & SCSI_PROT_GUARD_CHECK) {
- sdeb_read_unlock(sip);
+ sdeb_meta_read_unlock(sip);
mk_sense_buffer(scp, ILLEGAL_REQUEST, 0x10, 1);
return illegal_condition_result;
}
break;
case 3: /* Reference tag error */
if (cmd[1] >> 5 != 3) { /* RDPROTECT != 3 */
- sdeb_read_unlock(sip);
+ sdeb_meta_read_unlock(sip);
mk_sense_buffer(scp, ABORTED_COMMAND, 0x10, 3);
return check_condition_result;
} else if (scp->prot_flags & SCSI_PROT_REF_CHECK) {
- sdeb_read_unlock(sip);
+ sdeb_meta_read_unlock(sip);
mk_sense_buffer(scp, ILLEGAL_REQUEST, 0x10, 3);
return illegal_condition_result;
}
@@ -3698,8 +3919,9 @@ static int resp_read_dt0(struct scsi_cmnd *scp, struct sdebug_dev_info *devip)
}
}

- ret = do_device_access(sip, scp, 0, lba, num, false);
- sdeb_read_unlock(sip);
+ ret = do_device_access(sip, scp, 0, lba, num, false, false);
+ if (meta_data_locked)
+ sdeb_meta_read_unlock(sip);
if (unlikely(ret == -1))
return DID_ERROR << 16;

@@ -3888,6 +4110,7 @@ static int resp_write_dt0(struct scsi_cmnd *scp, struct sdebug_dev_info *devip)
u64 lba;
struct sdeb_store_info *sip = devip2sip(devip, true);
u8 *cmd = scp->cmnd;
+ bool meta_data_locked = false;

switch (cmd[0]) {
case WRITE_16:
@@ -3941,10 +4164,17 @@ static int resp_write_dt0(struct scsi_cmnd *scp, struct sdebug_dev_info *devip)
"to DIF device\n");
}

- sdeb_write_lock(sip);
+ if (sdebug_dev_is_zoned(devip) ||
+ (sdebug_dix && scsi_prot_sg_count(scp)) ||
+ scsi_debug_lbp()) {
+ sdeb_meta_write_lock(sip);
+ meta_data_locked = true;
+ }
+
ret = check_device_access_params(scp, lba, num, true);
if (ret) {
- sdeb_write_unlock(sip);
+ if (meta_data_locked)
+ sdeb_meta_write_unlock(sip);
return ret;
}

@@ -3953,22 +4183,22 @@ static int resp_write_dt0(struct scsi_cmnd *scp, struct sdebug_dev_info *devip)
switch (prot_verify_write(scp, lba, num, ei_lba)) {
case 1: /* Guard tag error */
if (scp->prot_flags & SCSI_PROT_GUARD_CHECK) {
- sdeb_write_unlock(sip);
+ sdeb_meta_write_unlock(sip);
mk_sense_buffer(scp, ILLEGAL_REQUEST, 0x10, 1);
return illegal_condition_result;
} else if (scp->cmnd[1] >> 5 != 3) { /* WRPROTECT != 3 */
- sdeb_write_unlock(sip);
+ sdeb_meta_write_unlock(sip);
mk_sense_buffer(scp, ABORTED_COMMAND, 0x10, 1);
return check_condition_result;
}
break;
case 3: /* Reference tag error */
if (scp->prot_flags & SCSI_PROT_REF_CHECK) {
- sdeb_write_unlock(sip);
+ sdeb_meta_write_unlock(sip);
mk_sense_buffer(scp, ILLEGAL_REQUEST, 0x10, 3);
return illegal_condition_result;
} else if (scp->cmnd[1] >> 5 != 3) { /* WRPROTECT != 3 */
- sdeb_write_unlock(sip);
+ sdeb_meta_write_unlock(sip);
mk_sense_buffer(scp, ABORTED_COMMAND, 0x10, 3);
return check_condition_result;
}
@@ -3976,13 +4206,16 @@ static int resp_write_dt0(struct scsi_cmnd *scp, struct sdebug_dev_info *devip)
}
}

- ret = do_device_access(sip, scp, 0, lba, num, true);
+ ret = do_device_access(sip, scp, 0, lba, num, true, false);
if (unlikely(scsi_debug_lbp()))
map_region(sip, lba, num);
+
/* If ZBC zone then bump its write pointer */
if (sdebug_dev_is_zoned(devip))
zbc_inc_wp(devip, lba, num);
- sdeb_write_unlock(sip);
+ if (meta_data_locked)
+ sdeb_meta_write_unlock(sip);
+
if (unlikely(-1 == ret))
return DID_ERROR << 16;
else if (unlikely(sdebug_verbose &&
@@ -4089,7 +4322,8 @@ static int resp_write_scat(struct scsi_cmnd *scp,
goto err_out;
}

- sdeb_write_lock(sip);
+ /* Just keep it simple and always lock for now */
+ sdeb_meta_write_lock(sip);
sg_off = lbdof_blen;
/* Spec says Buffer xfer Length field in number of LBs in dout */
cum_lb = 0;
@@ -4132,7 +4366,11 @@ static int resp_write_scat(struct scsi_cmnd *scp,
}
}

- ret = do_device_access(sip, scp, sg_off, lba, num, true);
+ /*
+ * Write ranges atomically to keep as close to pre-atomic
+ * writes behaviour as possible.
+ */
+ ret = do_device_access(sip, scp, sg_off, lba, num, true, true);
/* If ZBC zone then bump its write pointer */
if (sdebug_dev_is_zoned(devip))
zbc_inc_wp(devip, lba, num);
@@ -4171,7 +4409,7 @@ static int resp_write_scat(struct scsi_cmnd *scp,
}
ret = 0;
err_out_unlock:
- sdeb_write_unlock(sip);
+ sdeb_meta_write_unlock(sip);
err_out:
kfree(lrdp);
return ret;
@@ -4190,14 +4428,16 @@ static int resp_write_same(struct scsi_cmnd *scp, u64 lba, u32 num,
scp->device->hostdata, true);
u8 *fs1p;
u8 *fsp;
+ bool meta_data_locked = false;

- sdeb_write_lock(sip);
+ if (sdebug_dev_is_zoned(devip) || scsi_debug_lbp()) {
+ sdeb_meta_write_lock(sip);
+ meta_data_locked = true;
+ }

ret = check_device_access_params(scp, lba, num, true);
- if (ret) {
- sdeb_write_unlock(sip);
- return ret;
- }
+ if (ret)
+ goto out;

if (unmap && scsi_debug_lbp()) {
unmap_region(sip, lba, num);
@@ -4208,6 +4448,7 @@ static int resp_write_same(struct scsi_cmnd *scp, u64 lba, u32 num,
/* if ndob then zero 1 logical block, else fetch 1 logical block */
fsp = sip->storep;
fs1p = fsp + (block * lb_size);
+ sdeb_data_write_lock(sip);
if (ndob) {
memset(fs1p, 0, lb_size);
ret = 0;
@@ -4215,8 +4456,8 @@ static int resp_write_same(struct scsi_cmnd *scp, u64 lba, u32 num,
ret = fetch_to_dev_buffer(scp, fs1p, lb_size);

if (-1 == ret) {
- sdeb_write_unlock(sip);
- return DID_ERROR << 16;
+ ret = DID_ERROR << 16;
+ sdeb_data_write_unlock(sip);
+ goto out;
} else if (sdebug_verbose && !ndob && (ret < lb_size))
sdev_printk(KERN_INFO, scp->device,
"%s: %s: lb size=%u, IO sent=%d bytes\n",
@@ -4233,10 +4474,12 @@ static int resp_write_same(struct scsi_cmnd *scp, u64 lba, u32 num,
/* If ZBC zone then bump its write pointer */
if (sdebug_dev_is_zoned(devip))
zbc_inc_wp(devip, lba, num);
+ sdeb_data_write_unlock(sip);
+ ret = 0;
out:
- sdeb_write_unlock(sip);
-
- return 0;
+ if (meta_data_locked)
+ sdeb_meta_write_unlock(sip);
+ return ret;
}

static int resp_write_same_10(struct scsi_cmnd *scp,
@@ -4379,25 +4622,30 @@ static int resp_comp_write(struct scsi_cmnd *scp,
return check_condition_result;
}

- sdeb_write_lock(sip);
-
ret = do_dout_fetch(scp, dnum, arr);
if (ret == -1) {
retval = DID_ERROR << 16;
- goto cleanup;
+ goto cleanup_free;
} else if (sdebug_verbose && (ret < (dnum * lb_size)))
sdev_printk(KERN_INFO, scp->device, "%s: compare_write: cdb "
"indicated=%u, IO sent=%d bytes\n", my_name,
dnum * lb_size, ret);
+
+ sdeb_data_write_lock(sip);
+ sdeb_meta_write_lock(sip);
if (!comp_write_worker(sip, lba, num, arr, false)) {
mk_sense_buffer(scp, MISCOMPARE, MISCOMPARE_VERIFY_ASC, 0);
retval = check_condition_result;
- goto cleanup;
+ goto cleanup_unlock;
}
+
+ /* Cover sip->map_storep (which map_region() sets) with the data lock */
if (scsi_debug_lbp())
map_region(sip, lba, num);
-cleanup:
- sdeb_write_unlock(sip);
+cleanup_unlock:
+ sdeb_meta_write_unlock(sip);
+ sdeb_data_write_unlock(sip);
+cleanup_free:
kfree(arr);
return retval;
}
@@ -4441,7 +4689,7 @@ static int resp_unmap(struct scsi_cmnd *scp, struct sdebug_dev_info *devip)

desc = (void *)&buf[8];

- sdeb_write_lock(sip);
+ sdeb_meta_write_lock(sip);

for (i = 0 ; i < descriptors ; i++) {
unsigned long long lba = get_unaligned_be64(&desc[i].lba);
@@ -4457,7 +4705,7 @@ static int resp_unmap(struct scsi_cmnd *scp, struct sdebug_dev_info *devip)
ret = 0;

out:
- sdeb_write_unlock(sip);
+ sdeb_meta_write_unlock(sip);
kfree(buf);

return ret;
@@ -4570,12 +4818,13 @@ static int resp_pre_fetch(struct scsi_cmnd *scp,
rest = block + nblks - sdebug_store_sectors;

/* Try to bring the PRE-FETCH range into CPU's cache */
- sdeb_read_lock(sip);
+ sdeb_data_read_lock(sip);
prefetch_range(fsp + (sdebug_sector_size * block),
(nblks - rest) * sdebug_sector_size);
if (rest)
prefetch_range(fsp, rest * sdebug_sector_size);
- sdeb_read_unlock(sip);
+
+ sdeb_data_read_unlock(sip);
fini:
if (cmd[1] & 0x2)
res = SDEG_RES_IMMED_MASK;
@@ -4734,7 +4983,7 @@ static int resp_verify(struct scsi_cmnd *scp, struct sdebug_dev_info *devip)
return check_condition_result;
}
/* Not changing store, so only need read access */
- sdeb_read_lock(sip);
+ sdeb_data_read_lock(sip);

ret = do_dout_fetch(scp, a_num, arr);
if (ret == -1) {
@@ -4756,7 +5005,7 @@ static int resp_verify(struct scsi_cmnd *scp, struct sdebug_dev_info *devip)
goto cleanup;
}
cleanup:
- sdeb_read_unlock(sip);
+ sdeb_data_read_unlock(sip);
kfree(arr);
return ret;
}
@@ -4802,7 +5051,7 @@ static int resp_report_zones(struct scsi_cmnd *scp,
return check_condition_result;
}

- sdeb_read_lock(sip);
+ sdeb_meta_read_lock(sip);

desc = arr + 64;
for (lba = zs_lba; lba < sdebug_capacity;
@@ -4900,11 +5149,70 @@ static int resp_report_zones(struct scsi_cmnd *scp,
ret = fill_from_dev_buffer(scp, arr, min_t(u32, alloc_len, rep_len));

fini:
- sdeb_read_unlock(sip);
+ sdeb_meta_read_unlock(sip);
kfree(arr);
return ret;
}

+static int resp_atomic_write(struct scsi_cmnd *scp,
+ struct sdebug_dev_info *devip)
+{
+ struct sdeb_store_info *sip;
+ u8 *cmd = scp->cmnd;
+ u16 boundary, len;
+ u64 lba, lba_tmp;
+ int ret;
+
+ if (!scsi_debug_atomic_write()) {
+ mk_sense_invalid_opcode(scp);
+ return check_condition_result;
+ }
+
+ sip = devip2sip(devip, true);
+
+ lba = get_unaligned_be64(cmd + 2);
+ boundary = get_unaligned_be16(cmd + 10);
+ len = get_unaligned_be16(cmd + 12);
+
+ lba_tmp = lba;
+ if (sdebug_atomic_wr_align &&
+ do_div(lba_tmp, sdebug_atomic_wr_align)) {
+ /* Does not meet alignment requirement */
+ mk_sense_buffer(scp, ILLEGAL_REQUEST, INVALID_FIELD_IN_CDB, 0);
+ return check_condition_result;
+ }
+
+ if (sdebug_atomic_wr_gran && len % sdebug_atomic_wr_gran) {
+ /* Does not meet granularity requirement */
+ mk_sense_buffer(scp, ILLEGAL_REQUEST, INVALID_FIELD_IN_CDB, 0);
+ return check_condition_result;
+ }
+
+ if (boundary > 0) {
+ if (boundary > sdebug_atomic_wr_max_bndry) {
+ mk_sense_invalid_fld(scp, SDEB_IN_CDB, 12, -1);
+ return check_condition_result;
+ }
+
+ if (len > sdebug_atomic_wr_max_length_bndry) {
+ mk_sense_invalid_fld(scp, SDEB_IN_CDB, 12, -1);
+ return check_condition_result;
+ }
+ } else {
+ if (len > sdebug_atomic_wr_max_length) {
+ mk_sense_invalid_fld(scp, SDEB_IN_CDB, 12, -1);
+ return check_condition_result;
+ }
+ }
+
+ ret = do_device_access(sip, scp, 0, lba, len, true, true);
+ if (unlikely(ret == -1))
+ return DID_ERROR << 16;
+ if (unlikely(ret != len * sdebug_sector_size))
+ return DID_ERROR << 16;
+ return 0;
+}
+
/* Logic transplanted from tcmu-runner, file_zbc.c */
static void zbc_open_all(struct sdebug_dev_info *devip)
{
@@ -4931,8 +5239,7 @@ static int resp_open_zone(struct scsi_cmnd *scp, struct sdebug_dev_info *devip)
mk_sense_invalid_opcode(scp);
return check_condition_result;
}
-
- sdeb_write_lock(sip);
+ sdeb_meta_write_lock(sip);

if (all) {
/* Check if all closed zones can be open */
@@ -4981,7 +5288,7 @@ static int resp_open_zone(struct scsi_cmnd *scp, struct sdebug_dev_info *devip)

zbc_open_zone(devip, zsp, true);
fini:
- sdeb_write_unlock(sip);
+ sdeb_meta_write_unlock(sip);
return res;
}

@@ -5008,7 +5315,7 @@ static int resp_close_zone(struct scsi_cmnd *scp,
return check_condition_result;
}

- sdeb_write_lock(sip);
+ sdeb_meta_write_lock(sip);

if (all) {
zbc_close_all(devip);
@@ -5037,7 +5344,7 @@ static int resp_close_zone(struct scsi_cmnd *scp,

zbc_close_zone(devip, zsp);
fini:
- sdeb_write_unlock(sip);
+ sdeb_meta_write_unlock(sip);
return res;
}

@@ -5080,7 +5387,7 @@ static int resp_finish_zone(struct scsi_cmnd *scp,
return check_condition_result;
}

- sdeb_write_lock(sip);
+ sdeb_meta_write_lock(sip);

if (all) {
zbc_finish_all(devip);
@@ -5109,7 +5416,7 @@ static int resp_finish_zone(struct scsi_cmnd *scp,

zbc_finish_zone(devip, zsp, true);
fini:
- sdeb_write_unlock(sip);
+ sdeb_meta_write_unlock(sip);
return res;
}

@@ -5160,7 +5467,7 @@ static int resp_rwp_zone(struct scsi_cmnd *scp, struct sdebug_dev_info *devip)
return check_condition_result;
}

- sdeb_write_lock(sip);
+ sdeb_meta_write_lock(sip);

if (all) {
zbc_rwp_all(devip);
@@ -5188,7 +5495,7 @@ static int resp_rwp_zone(struct scsi_cmnd *scp, struct sdebug_dev_info *devip)

zbc_rwp_zone(devip, zsp);
fini:
- sdeb_write_unlock(sip);
+ sdeb_meta_write_unlock(sip);
return res;
}

@@ -5215,6 +5522,7 @@ static void sdebug_q_cmd_complete(struct sdebug_defer *sd_dp)
if (!scp) {
pr_err("scmd=NULL\n");
goto out;
+
}

sdsc = scsi_cmd_priv(scp);
@@ -6152,6 +6460,7 @@ module_param_named(lbprz, sdebug_lbprz, int, S_IRUGO);
module_param_named(lbpu, sdebug_lbpu, int, S_IRUGO);
module_param_named(lbpws, sdebug_lbpws, int, S_IRUGO);
module_param_named(lbpws10, sdebug_lbpws10, int, S_IRUGO);
+module_param_named(atomic_wr, sdebug_atomic_wr, int, S_IRUGO);
module_param_named(lowest_aligned, sdebug_lowest_aligned, int, S_IRUGO);
module_param_named(lun_format, sdebug_lun_am_i, int, S_IRUGO | S_IWUSR);
module_param_named(max_luns, sdebug_max_luns, int, S_IRUGO | S_IWUSR);
@@ -6186,6 +6495,11 @@ module_param_named(unmap_alignment, sdebug_unmap_alignment, int, S_IRUGO);
module_param_named(unmap_granularity, sdebug_unmap_granularity, int, S_IRUGO);
module_param_named(unmap_max_blocks, sdebug_unmap_max_blocks, int, S_IRUGO);
module_param_named(unmap_max_desc, sdebug_unmap_max_desc, int, S_IRUGO);
+module_param_named(atomic_wr_max_length, sdebug_atomic_wr_max_length, int, S_IRUGO);
+module_param_named(atomic_wr_align, sdebug_atomic_wr_align, int, S_IRUGO);
+module_param_named(atomic_wr_gran, sdebug_atomic_wr_gran, int, S_IRUGO);
+module_param_named(atomic_wr_max_length_bndry, sdebug_atomic_wr_max_length_bndry, int, S_IRUGO);
+module_param_named(atomic_wr_max_bndry, sdebug_atomic_wr_max_bndry, int, S_IRUGO);
module_param_named(uuid_ctl, sdebug_uuid_ctl, int, S_IRUGO);
module_param_named(virtual_gb, sdebug_virtual_gb, int, S_IRUGO | S_IWUSR);
module_param_named(vpd_use_hostno, sdebug_vpd_use_hostno, int,
@@ -6229,6 +6543,7 @@ MODULE_PARM_DESC(lbprz,
MODULE_PARM_DESC(lbpu, "enable LBP, support UNMAP command (def=0)");
MODULE_PARM_DESC(lbpws, "enable LBP, support WRITE SAME(16) with UNMAP bit (def=0)");
MODULE_PARM_DESC(lbpws10, "enable LBP, support WRITE SAME(10) with UNMAP bit (def=0)");
+MODULE_PARM_DESC(atomic_wr, "enable ATOMIC WRITE support, support WRITE ATOMIC(16) (def=0)");
MODULE_PARM_DESC(lowest_aligned, "lowest aligned lba (def=0)");
MODULE_PARM_DESC(lun_format, "LUN format: 0->peripheral (def); 1 --> flat address method");
MODULE_PARM_DESC(max_luns, "number of LUNs per target to simulate(def=1)");
@@ -6260,6 +6575,11 @@ MODULE_PARM_DESC(unmap_alignment, "lowest aligned thin provisioning lba (def=0)"
MODULE_PARM_DESC(unmap_granularity, "thin provisioning granularity in blocks (def=1)");
MODULE_PARM_DESC(unmap_max_blocks, "max # of blocks can be unmapped in one cmd (def=0xffffffff)");
MODULE_PARM_DESC(unmap_max_desc, "max # of ranges that can be unmapped in one cmd (def=256)");
+MODULE_PARM_DESC(atomic_wr_max_length, "max # of blocks can be atomically written in one cmd (def=8192)");
+MODULE_PARM_DESC(atomic_wr_align, "minimum alignment of atomic write in blocks (def=2)");
+MODULE_PARM_DESC(atomic_wr_gran, "minimum granularity of atomic write in blocks (def=2)");
+MODULE_PARM_DESC(atomic_wr_max_length_bndry, "max # of blocks can be atomically written in one cmd with boundary set (def=8192)");
+MODULE_PARM_DESC(atomic_wr_max_bndry, "max # boundaries per atomic write (def=128)");
MODULE_PARM_DESC(uuid_ctl,
"1->use uuid for lu name, 0->don't, 2->all use same (def=0)");
MODULE_PARM_DESC(virtual_gb, "virtual gigabyte (GiB) size (def=0 -> use dev_size_mb)");
@@ -7406,6 +7726,7 @@ static int __init scsi_debug_init(void)
return -EINVAL;
}
}
+
xa_init_flags(per_store_ap, XA_FLAGS_ALLOC | XA_FLAGS_LOCK_IRQ);
if (want_store) {
idx = sdebug_add_store();
@@ -7613,7 +7934,9 @@ static int sdebug_add_store(void)
map_region(sip, 0, 2);
}

- rwlock_init(&sip->macc_lck);
+ rwlock_init(&sip->macc_data_lck);
+ rwlock_init(&sip->macc_meta_lck);
+ rwlock_init(&sip->macc_sector_lck);
return (int)n_idx;
err:
sdebug_erase_store((int)n_idx, sip);
--
2.31.1


2024-01-24 12:04:37

by John Garry

Subject: [PATCH v3 12/15] scsi: sd: Add WRITE_ATOMIC_16 support

Add function sd_setup_atomic_cmnd() to set up a WRITE_ATOMIC_16
CDB for when the REQ_ATOMIC flag is set for the request.

Also add trace info.
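
For reference, a standalone sketch (illustrative, not the driver code)
of the WRITE ATOMIC(16) CDB layout per sbc4 that the new helper fills
in; note the patch writes nr_blocks into the boundary field when
sdkp->use_atomic_write_boundary is set, making the whole transfer a
single atomic boundary:

#include <stdint.h>
#include <string.h>

/* Big-endian stores, mirroring the kernel's put_unaligned_be*() */
static void put_be16(uint16_t v, uint8_t *p)
{
	p[0] = v >> 8;
	p[1] = v;
}

static void put_be64(uint64_t v, uint8_t *p)
{
	int i;

	for (i = 0; i < 8; i++)
		p[i] = v >> (8 * (7 - i));
}

static void build_write_atomic_16(uint8_t cdb[16], uint64_t lba,
				  uint16_t nr_blocks, uint16_t boundary,
				  uint8_t flags)
{
	memset(cdb, 0, 16);
	cdb[0] = 0x9c;			/* WRITE_ATOMIC_16 opcode */
	cdb[1] = flags;			/* WRPROTECT/DPO/FUA bits */
	put_be64(lba, &cdb[2]);		/* bytes 2-9: LBA */
	put_be16(boundary, &cdb[10]);	/* bytes 10-11: atomic boundary */
	put_be16(nr_blocks, &cdb[12]);	/* bytes 12-13: transfer length */
}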

Signed-off-by: John Garry <[email protected]>
---
drivers/scsi/scsi_trace.c | 22 ++++++++++++++++++++++
drivers/scsi/sd.c | 24 ++++++++++++++++++++++++
include/scsi/scsi_proto.h | 1 +
include/trace/events/scsi.h | 1 +
4 files changed, 48 insertions(+)

diff --git a/drivers/scsi/scsi_trace.c b/drivers/scsi/scsi_trace.c
index 41a950075913..3e47c4472a80 100644
--- a/drivers/scsi/scsi_trace.c
+++ b/drivers/scsi/scsi_trace.c
@@ -325,6 +325,26 @@ scsi_trace_zbc_out(struct trace_seq *p, unsigned char *cdb, int len)
return ret;
}

+static const char *
+scsi_trace_atomic_write16_out(struct trace_seq *p, unsigned char *cdb, int len)
+{
+ const char *ret = trace_seq_buffer_ptr(p);
+ unsigned int boundary_size;
+ unsigned int nr_blocks;
+ sector_t lba;
+
+ lba = get_unaligned_be64(&cdb[2]);
+ boundary_size = get_unaligned_be16(&cdb[10]);
+ nr_blocks = get_unaligned_be16(&cdb[12]);
+
+ trace_seq_printf(p, "lba=%llu txlen=%u boundary_size=%u",
+ lba, nr_blocks, boundary_size);
+
+ trace_seq_putc(p, 0);
+
+ return ret;
+}
+
static const char *
scsi_trace_varlen(struct trace_seq *p, unsigned char *cdb, int len)
{
@@ -385,6 +405,8 @@ scsi_trace_parse_cdb(struct trace_seq *p, unsigned char *cdb, int len)
return scsi_trace_zbc_in(p, cdb, len);
case ZBC_OUT:
return scsi_trace_zbc_out(p, cdb, len);
+ case WRITE_ATOMIC_16:
+ return scsi_trace_atomic_write16_out(p, cdb, len);
default:
return scsi_trace_misc(p, cdb, len);
}
diff --git a/drivers/scsi/sd.c b/drivers/scsi/sd.c
index 32dfb5327f92..7df05d796387 100644
--- a/drivers/scsi/sd.c
+++ b/drivers/scsi/sd.c
@@ -1240,6 +1240,26 @@ static int sd_cdl_dld(struct scsi_disk *sdkp, struct scsi_cmnd *scmd)
return (hint - IOPRIO_HINT_DEV_DURATION_LIMIT_1) + 1;
}

+static blk_status_t sd_setup_atomic_cmnd(struct scsi_cmnd *cmd,
+ sector_t lba, unsigned int nr_blocks,
+ bool boundary, unsigned char flags)
+{
+ cmd->cmd_len = 16;
+ cmd->cmnd[0] = WRITE_ATOMIC_16;
+ cmd->cmnd[1] = flags;
+ put_unaligned_be64(lba, &cmd->cmnd[2]);
+ if (boundary)
+ put_unaligned_be16(nr_blocks, &cmd->cmnd[10]);
+ else
+ put_unaligned_be16(0, &cmd->cmnd[10]);
+ put_unaligned_be16(nr_blocks, &cmd->cmnd[12]);
+ cmd->cmnd[14] = 0;
+ cmd->cmnd[15] = 0;
+
+ return BLK_STS_OK;
+}
+
static blk_status_t sd_setup_read_write_cmnd(struct scsi_cmnd *cmd)
{
struct request *rq = scsi_cmd_to_rq(cmd);
@@ -1311,6 +1331,10 @@ static blk_status_t sd_setup_read_write_cmnd(struct scsi_cmnd *cmd)
if (protect && sdkp->protection_type == T10_PI_TYPE2_PROTECTION) {
ret = sd_setup_rw32_cmnd(cmd, write, lba, nr_blocks,
protect | fua, dld);
+ } else if (rq->cmd_flags & REQ_ATOMIC && write) {
+ ret = sd_setup_atomic_cmnd(cmd, lba, nr_blocks,
+ sdkp->use_atomic_write_boundary,
+ protect | fua);
} else if (sdp->use_16_for_rw || (nr_blocks > 0xffff)) {
ret = sd_setup_rw16_cmnd(cmd, write, lba, nr_blocks,
protect | fua, dld);
diff --git a/include/scsi/scsi_proto.h b/include/scsi/scsi_proto.h
index 07d65c1f59db..833de67305b5 100644
--- a/include/scsi/scsi_proto.h
+++ b/include/scsi/scsi_proto.h
@@ -119,6 +119,7 @@
#define WRITE_SAME_16 0x93
#define ZBC_OUT 0x94
#define ZBC_IN 0x95
+#define WRITE_ATOMIC_16 0x9c
#define SERVICE_ACTION_BIDIRECTIONAL 0x9d
#define SERVICE_ACTION_IN_16 0x9e
#define SERVICE_ACTION_OUT_16 0x9f
diff --git a/include/trace/events/scsi.h b/include/trace/events/scsi.h
index 8e2d9b1b0e77..05f1945ed204 100644
--- a/include/trace/events/scsi.h
+++ b/include/trace/events/scsi.h
@@ -102,6 +102,7 @@
scsi_opcode_name(WRITE_32), \
scsi_opcode_name(WRITE_SAME_32), \
scsi_opcode_name(ATA_16), \
+ scsi_opcode_name(WRITE_ATOMIC_16), \
scsi_opcode_name(ATA_12))

#define scsi_hostbyte_name(result) { result, #result }
--
2.31.1


2024-01-25 00:52:35

by Keith Busch

Subject: Re: [PATCH v3 15/15] nvme: Ensure atomic writes will be executed atomically

On Wed, Jan 24, 2024 at 11:38:41AM +0000, John Garry wrote:
> diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
> index 5045c84f2516..6a34a5d92088 100644
> --- a/drivers/nvme/host/core.c
> +++ b/drivers/nvme/host/core.c
> @@ -911,6 +911,32 @@ static inline blk_status_t nvme_setup_rw(struct nvme_ns *ns,
> if (req->cmd_flags & REQ_RAHEAD)
> dsmgmt |= NVME_RW_DSM_FREQ_PREFETCH;
>
> + /*
> + * Ensure that nothing has been sent which cannot be executed
> + * atomically.
> + */
> + if (req->cmd_flags & REQ_ATOMIC) {
> + struct nvme_ns_head *head = ns->head;
> + u32 boundary_bytes = head->atomic_boundary;
> +
> + if (blk_rq_bytes(req) > ns->head->atomic_max)
> + return BLK_STS_IOERR;
> +
> + if (boundary_bytes) {
> + u32 mask = boundary_bytes - 1, imask = ~mask;
> + u32 start = blk_rq_pos(req) << SECTOR_SHIFT;
> + u32 end = start + blk_rq_bytes(req);
> +
> + if (blk_rq_bytes(req) > boundary_bytes)
> + return BLK_STS_IOERR;
> +
> + if (((start & imask) != (end & imask)) &&
> + (end & mask)) {
> + return BLK_STS_IOERR;
> + }
> + }
> + }

Aren't these new fields, atomic_max and atomic_boundary, duplicates of
the equivalent queue limits? Let's just use the queue limits instead.

And couldn't we generically validate the constraints are not violated in
submit_bio_noacct() instead of doing that in the low level driver? The
driver assumes all other requests are already sanity checked, so I don't
think we should change the responsibility for that just for this flag.

2024-01-25 11:30:25

by John Garry

Subject: Re: [PATCH v3 15/15] nvme: Ensure atomic writes will be executed atomically

On 25/01/2024 00:52, Keith Busch wrote:
> On Wed, Jan 24, 2024 at 11:38:41AM +0000, John Garry wrote:
>> diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
>> index 5045c84f2516..6a34a5d92088 100644
>> --- a/drivers/nvme/host/core.c
>> +++ b/drivers/nvme/host/core.c
>> @@ -911,6 +911,32 @@ static inline blk_status_t nvme_setup_rw(struct nvme_ns *ns,
>> if (req->cmd_flags & REQ_RAHEAD)
>> dsmgmt |= NVME_RW_DSM_FREQ_PREFETCH;
>>
>> + /*
>> + * Ensure that nothing has been sent which cannot be executed
>> + * atomically.
>> + */
>> + if (req->cmd_flags & REQ_ATOMIC) {
>> + struct nvme_ns_head *head = ns->head;
>> + u32 boundary_bytes = head->atomic_boundary;
>> +
>> + if (blk_rq_bytes(req) > ns->head->atomic_max)
>> + return BLK_STS_IOERR;
>> +
>> + if (boundary_bytes) {
>> + u32 mask = boundary_bytes - 1, imask = ~mask;
>> + u32 start = blk_rq_pos(req) << SECTOR_SHIFT;
>> + u32 end = start + blk_rq_bytes(req);
>> +
>> + if (blk_rq_bytes(req) > boundary_bytes)
>> + return BLK_STS_IOERR;
>> +
>> + if (((start & imask) != (end & imask)) &&
>> + (end & mask)) {
>> + return BLK_STS_IOERR;
>> + }
>> + }
>> + }
> Aren't these new fields, atomic_max and atomic_boundary, duplicates of
> the equivalent queue limits? Let's just use the queue limits instead.

I was making a copy of these limits in the driver out of an abundance of
caution. I suppose the atomic_max and atomic_boundary values won't be
modified in the block layer, so I could use them instead.

>
> And couldn't we generically validate the constraints are not violated in
> submit_bio_noacct() instead of doing that in the low level driver? The
> driver assumes all other requests are already sanity checked, so I don't
> think we should change the responsibility for that just for this flag.

As a safety mechanism, we want to ensure the complete stack is not
giving us out-of-limits atomic writes.

We have limit checks in XFS iomap and fops.c, but we would also want to
ensure that the block layer is not doing anything it shouldn't be
doing after submit_bio_noacct(), like merging atomic write BIOs which
straddle a boundary or exceed atomic_max (if there were any merging).
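
To put concrete numbers on the straddle condition, take a 16K boundary
as an example, so mask = 0x3fff and imask = ~0x3fff:

/*
 * 8K write at offset 12K: start = 0x3000, end = 0x5000
 *   start & imask = 0x0000, end & imask = 0x4000 -> differ
 *   end & mask = 0x1000 (non-zero) -> straddles, must be rejected
 *
 * 8K write at offset 8K: start = 0x2000, end = 0x4000
 *   start & imask = 0x0000, end & imask = 0x4000 -> differ
 *   end & mask = 0 (write ends exactly on a boundary) -> allowed
 */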

The SCSI standard already has provision for erroring an atomic write
command which exceeds the target's atomic write capabilities, while NVMe
doesn't.

BTW, Christoph did mention that he would like to see this:
https://lore.kernel.org/linux-nvme/[email protected]/

Thanks,
John


2024-01-26 03:51:17

by Chaitanya Kulkarni

[permalink] [raw]
Subject: Re: [PATCH v3 15/15] nvme: Ensure atomic writes will be executed atomically

On 1/24/2024 4:52 PM, Keith Busch wrote:
> On Wed, Jan 24, 2024 at 11:38:41AM +0000, John Garry wrote:
>> diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
>> index 5045c84f2516..6a34a5d92088 100644
>> --- a/drivers/nvme/host/core.c
>> +++ b/drivers/nvme/host/core.c
>> @@ -911,6 +911,32 @@ static inline blk_status_t nvme_setup_rw(struct nvme_ns *ns,
>> if (req->cmd_flags & REQ_RAHEAD)
>> dsmgmt |= NVME_RW_DSM_FREQ_PREFETCH;
>>
>> + /*
>> + * Ensure that nothing has been sent which cannot be executed
>> + * atomically.
>> + */
>> + if (req->cmd_flags & REQ_ATOMIC) {
>> + struct nvme_ns_head *head = ns->head;
>> + u32 boundary_bytes = head->atomic_boundary;
>> +
>> + if (blk_rq_bytes(req) > ns->head->atomic_max)
>> + return BLK_STS_IOERR;
>> +
>> + if (boundary_bytes) {
>> + u32 mask = boundary_bytes - 1, imask = ~mask;
>> + u32 start = blk_rq_pos(req) << SECTOR_SHIFT;
>> + u32 end = start + blk_rq_bytes(req);
>> +
>> + if (blk_rq_bytes(req) > boundary_bytes)
>> + return BLK_STS_IOERR;
>> +
>> + if (((start & imask) != (end & imask)) &&
>> + (end & mask)) {
>> + return BLK_STS_IOERR;
>> + }
>> + }
>> + }
>
> Aren't these new fields, atomic_max and atomic_boundary, duplicates of
> the equivalent queue limits? Let's just use the queue limits instead.
>
> And couldn't we generically validate the constraints are not violated in
> submit_bio_noacct() instead of doing that in the low level driver? The
> driver assumes all other requests are already sanity checked, so I don't
> think we should change the responsibility for that just for this flag.
>

does it make sense to move the above code to a helper? Perhaps an inline one?

-ck


2024-01-29 06:19:08

by Christoph Hellwig

[permalink] [raw]
Subject: Re: [PATCH v3 00/15] block atomic writes

Do you have a git tree with all patches somewhere?


2024-01-29 06:21:01

by Christoph Hellwig

[permalink] [raw]
Subject: Re: [PATCH v3 15/15] nvme: Ensure atomic writes will be executed atomically

On Thu, Jan 25, 2024 at 11:28:22AM +0000, John Garry wrote:
> We have limits checks in XFS iomap and fops.c, but we would also want to
> ensure that the block layer is not doing anything it shouldn't be doing
> after submit_bio_noacct(), like merging atomic write BIOs which straddle a
> boundary or exceed atomic_max (if there were any merging).
>
> The SCSI standard already has provision for erroring an atomic write
> command which exceeds the target's atomic write capabilities, while NVMe
> doesn't.

Can you get Oracle to propose this for NVMe? It always helps if these
suggestions come from a large buyer of NVMe equipment.

> BTW, Christoph did mention that he would like to see this:
> https://lore.kernel.org/linux-nvme/[email protected]/

I can probably live with a sufficiently low-level block layer check.

2024-01-29 09:20:03

by John Garry

[permalink] [raw]
Subject: Re: [PATCH v3 00/15] block atomic writes

On 29/01/2024 06:18, Christoph Hellwig wrote:
> Do you have a git tree with all patches somewhere?
>

They should apply cleanly on v6.8-rc1, but you can also check
https://github.com/johnpgarry/linux/commits/atomic-writes-v6.8-v3/ for
this series. The XFS series sits on top and can be found at
https://github.com/johnpgarry/linux/tree/atomic-writes-v6.8-v3-fs

Cheers,
John




2024-01-29 09:38:03

by John Garry

[permalink] [raw]
Subject: Re: [PATCH v3 15/15] nvme: Ensure atomic writes will be executed atomically

On 29/01/2024 06:20, Christoph Hellwig wrote:
> On Thu, Jan 25, 2024 at 11:28:22AM +0000, John Garry wrote:
>> We have limits checks in XFS iomap and fops.c, but we would also want to
>> ensure that the block layer is not doing anything it shouldn't be doing
>> after submit_bio_noacct(), like merging atomic write BIOs which straddle a
>> boundary or exceed atomic_max (if there were any merging).
>>
>> The SCSI standard already has provision for erroring an atomic write
>> command which exceeds the target's atomic write capabilities, while NVMe
>> doesn't.
>
> Can you get Oracle to propose this for NVMe? It always helps if these
> suggestions come from a large buyer of NVMe equipment.

I'll let Martin comment on that.

>
>> BTW, Christoph did mention that he would like to see this:
>> https://lore.kernel.org/linux-nvme/[email protected]/
>
> I can probably live with a sufficiently low-level block layer check.

That would probably be in blk_mq_dispatch_rq_list() for block drivers
with .queue_rq set, but I would need to check for a good place for
->queue_rqs. I can't imagine that we just want to inefficiently iterate
over all rqs at the ->queue_rqs call point.
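
e.g. roughly, per request taken off the dispatch list (the helper name
is invented for illustration):

	if ((rq->cmd_flags & REQ_ATOMIC) &&
	    !blk_rq_atomic_write_valid(rq)) {
		blk_mq_end_request(rq, BLK_STS_IOERR);
		continue;
	}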

However considering the nature of this change, it is not a good sign
that we/I need to check... I'd be more inclined to leave as is.

Thanks,
John


2024-01-29 14:39:27

by Christoph Hellwig

[permalink] [raw]
Subject: Re: [PATCH v3 15/15] nvme: Ensure atomic writes will be executed atomically

On Mon, Jan 29, 2024 at 09:36:38AM +0000, John Garry wrote:
> That would probably be in blk_mq_dispatch_rq_list() for block drivers with
> .queue_rq set, but I would need to check for a good place for ->queue_rqs.
> I can't imagine that we just want to inefficiently iterate over all
> rqs at the ->queue_rqs call point.
>
> However considering the nature of this change, it is not a good sign that
> we/I need to check... I'd be more inclined to leave as is.

Heh. At least early on having the checks in one place in nvme makes
me feel easier for sure. If we can easily use the block limits for
the checks we shouldn't have to keep duplicate values in nvme, though.


2024-02-06 18:49:52

by John Garry

[permalink] [raw]
Subject: Re: [PATCH v3 00/15] block atomic writes

On 29/01/2024 06:18, Christoph Hellwig wrote:
> Do you have a git tree with all patches somewhere?
>

Hi Christoph,

Please let me know if you had a chance to look at this series or what
your plans are.

BTW, about testing, it would be good to know your thoughts on power-fail
testing.

I have done much testing to ensure that writes are properly issued to
the HW with no undesired splitting/merging, etc., for normal operation.
I have also tested crashing the kernel to see whether atomic writes get
corrupted. This all looks ok.

About PF testing, I have an NVMe M.2 drive, but it supports just a 4K
nawupf. In addition, the port on my machine unfortunately does not allow
me to power it off, so I need to pull out the power cable to test PF :(

We do also support atomic writes on our SCSI storage servers, but it is
not practically possible to PF them.

For actual PF testing, I have been using fio in crc64 verify mode with a
couple of tweaks to support atomic writes.

What I find from limited testing of XFS and bdev atomic writes on that
NVMe card is that 4K writes are indeed PF-safe, but 16K (an arbitrary
large block size which I chose) is not. But I think all cards will be
4K PF-safe, even if not declared.

Thanks,
John



2024-02-10 12:13:22

by David Laight

[permalink] [raw]
Subject: RE: [PATCH v3 00/15] block atomic writes

Can someone add a : after block?


2024-02-13 06:42:32

by Christoph Hellwig

[permalink] [raw]
Subject: Re: [PATCH v3 14/15] nvme: Support atomic writes

On Wed, Jan 24, 2024 at 11:38:40AM +0000, John Garry wrote:
> From: Alan Adamson <[email protected]>
>
> Support reading atomic write registers to fill in request_queue
> properties.
>
> Use following method to calculate limits:
> atomic_write_max_bytes = flp2(NAWUPF ?: AWUPF)
> atomic_write_unit_min = logical_block_size
> atomic_write_unit_max = flp2(NAWUPF ?: AWUPF)
> atomic_write_boundary = NABSPF

Can you expand this to actually be a real commit log with full
sentences, expanding the NVMe field name acronyms, and referencing
the relevant Sections and Figures in a specific version of the
NVMe specification?

Also some implementation comments:

NVMe has a particularly nasty NABO field in Identify Namespace, which
offsets the boundary. We probably need to reject atomic writes or
severely limit them if this field is set.

Please also read through TP4098(a) and look at the MAM field. As far
as I can tell the patch as-is assumes it always is set to 1.

> +static void nvme_update_atomic_write_disk_info(struct gendisk *disk,
> + struct nvme_ctrl *ctrl, struct nvme_id_ns *id, u32 bs, u32 atomic_bs)

Please avoid the overly long line here.

> + nvme_update_atomic_write_disk_info(disk, ctrl, id, bs, atomic_bs);

. and here.


2024-02-13 06:43:03

by Christoph Hellwig

[permalink] [raw]
Subject: Re: [PATCH v3 15/15] nvme: Ensure atomic writes will be executed atomically

If we don't end up doing the checks in the block layer:

> + /*
> + * Ensure that nothing has been sent which cannot be executed
> + * atomically.
> + */
> + if (req->cmd_flags & REQ_ATOMIC) {
> + struct nvme_ns_head *head = ns->head;
> + u32 boundary_bytes = head->atomic_boundary;

.. please split the checks into a helper. And merge them into the
previous patch.
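
i.e. wrap the quoted checks in something like the below (the name is
just for illustration; widened to u64 so that high LBAs don't
truncate):

static blk_status_t nvme_check_atomic_rw(struct nvme_ns *ns,
		struct request *req)
{
	u64 boundary_bytes = ns->head->atomic_boundary;

	if (blk_rq_bytes(req) > ns->head->atomic_max)
		return BLK_STS_IOERR;

	if (boundary_bytes) {
		u64 mask = boundary_bytes - 1, imask = ~mask;
		u64 start = blk_rq_pos(req) << SECTOR_SHIFT;
		u64 end = start + blk_rq_bytes(req);

		if (blk_rq_bytes(req) > boundary_bytes)
			return BLK_STS_IOERR;

		if (((start & imask) != (end & imask)) && (end & mask))
			return BLK_STS_IOERR;
	}
	return BLK_STS_OK;
}

and have nvme_setup_rw() call it when REQ_ATOMIC is set.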


2024-02-13 14:10:14

by John Garry

[permalink] [raw]
Subject: Re: [PATCH v3 15/15] nvme: Ensure atomic writes will be executed atomically

On 13/02/2024 06:42, Christoph Hellwig wrote:
> If we don't end up doing the checks in the block layer:
>
>> + /*
>> + * Ensure that nothing has been sent which cannot be executed
>> + * atomically.
>> + */
>> + if (req->cmd_flags & REQ_ATOMIC) {
>> + struct nvme_ns_head *head = ns->head;
>> + u32 boundary_bytes = head->atomic_boundary;
>
> ... please split the checks into a helper.

ok

> And merge them into the
> previous patch.

Fine, if you prefer that.

>


2024-02-13 14:22:24

by John Garry

[permalink] [raw]
Subject: Re: [PATCH v3 14/15] nvme: Support atomic writes

On 13/02/2024 06:42, Christoph Hellwig wrote:
> On Wed, Jan 24, 2024 at 11:38:40AM +0000, John Garry wrote:
>> From: Alan Adamson <[email protected]>
>>
>> Support reading atomic write registers to fill in request_queue
>> properties.
>>
>> Use following method to calculate limits:
>> atomic_write_max_bytes = flp2(NAWUPF ?: AWUPF)
>> atomic_write_unit_min = logical_block_size
>> atomic_write_unit_max = flp2(NAWUPF ?: AWUPF)
>> atomic_write_boundary = NABSPF
>
> Can you expand this to actually be a real commit log with full
> sentences, expanding the NVMe field name acronyms, and referencing
> the relevant Sections and Figures in a specific version of the
> NVMe specification?

ok

>
> Also some implementation comments:
>
> NVMe has a particularly nasty NABO field in Identify Namespace, which
> offsets the boundary. We probably need to reject atomic writes or
> severely limit them if this field is set.

ok, and initially we'll just not support atomic writes for NABO != 0
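
e.g. roughly:

	/* sketch: don't advertise atomic writes when NABO is non-zero */
	if (le16_to_cpu(id->nabo))
		return;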

>
> Please also read through TP4098(a) and look at the MAM field.

It's not public, AFAIK.

And I don't think a feature which allows us to straddle boundaries is
too interesting really.

> As far
> as I can tell the patch as-is assumes it always is set to 1.
>
>> +static void nvme_update_atomic_write_disk_info(struct gendisk *disk,
>> + struct nvme_ctrl *ctrl, struct nvme_id_ns *id, u32 bs, u32 atomic_bs)
>
> Please avoid the overly long line here.
>
>> + nvme_update_atomic_write_disk_info(disk, ctrl, id, bs, atomic_bs);
>
> .. and here.

ok, but I think that this will get shorter anyway.

>

Thanks,
John

2024-02-14 08:00:55

by Christoph Hellwig

[permalink] [raw]
Subject: Re: [PATCH v3 14/15] nvme: Support atomic writes

On Tue, Feb 13, 2024 at 02:21:25PM +0000, John Garry wrote:
>> Please also read through TP4098(a) and look at the MAM field.
>
> It's not public, AFAIK.

Oracle is a member, so you can take a look at it easily. If we need
it for Linux I can also work with the NVMe Board to release it.

> And I don't think a feature which allows us to straddle boundaries is too
> interesting really.

Without MAM=1 NVMe can't support atomic writes larger than
AWUPF/NAWUPF, which is typically set to the indirection table
size. You're leaving a lot of potential unused with that.


2024-02-14 09:22:25

by John Garry

[permalink] [raw]
Subject: Re: [PATCH v3 14/15] nvme: Support atomic writes

On 14/02/2024 08:00, Christoph Hellwig wrote:
> On Tue, Feb 13, 2024 at 02:21:25PM +0000, John Garry wrote:
>>> Please also read through TP4098(a) and look at the MAM field.
>>
>> It's not public, AFAIK.
>
> Oracle is a member, so you can take a look at it easily. If we need
> it for Linux I can also work with the NVMe Board to release it.

What I really meant was that I prefer not to quote private TPs in the
public domain. I have the doc.

>
>> And I don't think a feature which allows us to straddle boundaries is too
>> interesting really.
>
> Without MAM=1 NVMe can't support atomic writes larger than
> AWUPF/NAWUPF, which is typically set to the indirection table
> size. You're leaving a lot of potential unused with that.
>

atomic_write_unit_max would always be dictated by the min of the
boundary and AWUPF/NAWUPF. With MAM=1, we could increase
atomic_write_max_bytes - but does it really help us? I mean,
atomic_write_max_bytes only comes into play for merging - do we get
much merging for NVMe transports? I am not sure.
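
Roughly, the derivation I have in mind is the below (a sketch only;
flp2() is rounddown_pow_of_two() in-kernel, and NAWUPF/AWUPF are 0's
based values in logical blocks):

static u32 nvme_atomic_write_unit_max(struct nvme_id_ctrl *ctrl,
		struct nvme_id_ns *id, u32 lbs)
{
	u32 awupf = le16_to_cpu(id->nawupf) ?: le16_to_cpu(ctrl->awupf);
	u32 atomic_bs = rounddown_pow_of_two((awupf + 1) * lbs);
	u32 boundary = le16_to_cpu(id->nabspf) * lbs;

	/* a unit may never straddle a boundary, so clamp to it */
	return boundary ? min(atomic_bs, boundary) : atomic_bs;
}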

Thanks,
John

2024-02-14 12:29:02

by Nilay Shroff

[permalink] [raw]
Subject: Re: [PATCH v3 14/15] nvme: Support atomic writes

> Support reading atomic write registers to fill in request_queue
> properties.
>
> Use following method to calculate limits:
> atomic_write_max_bytes = flp2(NAWUPF ?: AWUPF)
> atomic_write_unit_min = logical_block_size
> atomic_write_unit_max = flp2(NAWUPF ?: AWUPF)
> atomic_write_boundary = NABSPF

In case the device doesn't support a namespace atomic boundary size
(i.e. NABSPF is zero), then while merging atomic block IO we should
allow the merge.

For example, while front/back merging the atomic block IO, we check
whether a boundary is defined or not. In case the boundary is not
defined (i.e. it's zero), then we simply reject the merging attempt
(as implemented in rq_straddles_atomic_write_boundary()).

I am quoting this from the NVMe spec (Command Set Specification,
revision 1.0a, Section 2.1.4.3): "To ensure backwards compatibility,
the values reported for AWUN, AWUPF, and ACWU shall be set such that
they are supported even if a write crosses an atomic boundary. If a
controller does not guarantee atomicity across atomic boundaries, the
controller shall set AWUN, AWUPF, and ACWU to 0h (1 LBA)."

Thanks,
--Nilay




2024-02-14 13:16:08

by John Garry

[permalink] [raw]
Subject: Re: [PATCH v3 14/15] nvme: Support atomic writes

On 14/02/2024 12:27, Nilay Shroff wrote:
>> Support reading atomic write registers to fill in request_queue
>
>> properties.
>
>
>
>> Use following method to calculate limits:
>
>> atomic_write_max_bytes = flp2(NAWUPF ?: AWUPF)
>

You still need to fix that mail client to not add extra blank lines.

>> atomic_write_unit_min = logical_block_size
>
>> atomic_write_unit_max = flp2(NAWUPF ?: AWUPF)
>
>> atomic_write_boundary = NABSPF
>
>
>
> In case the device doesn't support namespace atomic boundary size (i.e. NABSPF
>
> is zero) then while merging atomic block-IO we should allow merge.
>
>
>
> For example, while front/back merging the atomic block IO, we check whether
>
> boundary is defined or not. In case the boundary is not defined (i.e. it's zero)
>
> then we simply reject the merging attempt (as implemented in
>
> rq_straddles_atomic_write_boundary()).

Are you sure about that? In rq_straddles_atomic_write_boundary(), if
boundary == 0, then we return false, i.e. there is no boundary, so we
can never be crossing it.

static bool rq_straddles_atomic_write_boundary(struct request *rq,
unsigned int front,
unsigned int back)
{
unsigned int boundary = queue_atomic_write_boundary_bytes(rq->q);
unsigned int mask, imask;
loff_t start, end;

if (!boundary)
return false;

...
}

And then will not reject a merge for that reason, like:

int ll_back_merge_fn(struct request *req, struct bio *bio, unsigned int
nr_segs)
{
...

if (req->cmd_flags & REQ_ATOMIC) {
if (rq_straddles_atomic_write_boundary(req,
0, bio->bi_iter.bi_size)) {
return 0;
}
}

return ll_new_hw_segment(req, bio, nr_segs);
}


>
>
>
> I am quoting this from NVMe spec (Command Set Specification, revision 1.0a,
>
> Section 2.1.4.3) : "To ensure backwards compatibility, the values reported for
>
> AWUN, AWUPF, and ACWU shall be set such that they are supported even if a
>
> write crosses an atomic boundary. If a controller does not guarantee
>
> atomicity across atomic boundaries, the controller shall set AWUN, AWUPF, and
>
> ACWU to 0h (1 LBA)."
>
>
>

Thanks,
John


2024-02-14 17:15:28

by Nilay Shroff

[permalink] [raw]
Subject: Re: [PATCH v3 14/15] nvme: Support atomic writes



On 2/14/24 18:32, John Garry wrote:
> On 14/02/2024 12:27, Nilay Shroff wrote:
>>
>>
>>
>>> Use following method to calculate limits:
>>
>>> atomic_write_max_bytes = flp2(NAWUPF ?: AWUPF)
>>
>
> You still need to fix that mail client to not add extra blank lines.
Yes, I am working on it. I hope it's solved now.
>
>>> atomic_write_unit_min = logical_block_size
>>
>>> atomic_write_unit_max = flp2(NAWUPF ?: AWUPF)
>>
>>> atomic_write_boundary = NABSPF
>>
>>
>>
>> In case the device doesn't support namespace atomic boundary size (i.e. NABSPF
>>
>> is zero) then while merging atomic block-IO we should allow merge.
>>
>>  
>> For example, while front/back merging the atomic block IO, we check whether
>>
>> boundary is defined or not. In case the boundary is not defined (i.e. it's zero)
>>
>> then we simply reject the merging attempt (as implemented in
>>
>> rq_straddles_atomic_write_boundary()).
>
> Are you sure about that? In rq_straddles_atomic_write_boundary(), if boundary == 0, then we return false, i.e. there is no boundary, so we can never be crossing it.
>
> static bool rq_straddles_atomic_write_boundary(struct request *rq,
> unsigned int front,
> unsigned int back)
> {
>     unsigned int boundary = queue_atomic_write_boundary_bytes(rq->q);
>     unsigned int mask, imask;
>     loff_t start, end;
>
>     if (!boundary)
>         return false;
>
>     ...
> }
>
> And then will not reject a merge for that reason, like:
>
> int ll_back_merge_fn(struct request *req, struct bio *bio, unsigned int nr_segs)
> {
>     ...
>
>     if (req->cmd_flags & REQ_ATOMIC) {
>         if (rq_straddles_atomic_write_boundary(req,
>             0, bio->bi_iter.bi_size)) {
>             return 0;
>         }
>     }
>
>     return ll_new_hw_segment(req, bio, nr_segs);
> }
>
>
Aargh, you are right. I see that if rq_straddles_atomic_write_boundary() returns true then we avoid the merge; otherwise the merge is attempted. My bad...

Thanks,
--Nilay