From: Nicholas Bellinger <[email protected]>
Hi MST, MKP, Paolo & Co,
The following is an initial RFC series for allowing vhost/scsi to
accept T10 protection information (PI) as separate SGLs alongside
existing data payload SGLs from within virtio-scsi guest memory.
In its current form, both ends are using virtio_scsi_cmd_req->prio
for signaling how many protection SGLs vs. data payload SGLs are to
be expected. This is due to the current inability of virtio / vhost
to signal a separate SGL count specific to T10 PI metadata.
AFAICT up until this point the ->prio field has been unused, but
I'm certainly open to better ways of signaling (to vhost) that some
number of metadata iovs are to be expected.. Any thoughts..?
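For reference, here is a condensed sketch of the signaling scheme as it
stands in this RFC (lifted from the patches below, not a complete
implementation):
  /* guest side, virtscsi_queuecommand(): advertise the prot SG count */
  cmd->req.cmd.prio = (u8)scsi_prot_sg_count(sc);
  /* host side, vhost_scsi_handle_vq(): split the descriptors back out */
  data_niov = data_num;
  prot_niov = 0;
  if (data_num && v_req.prio != 0) {
          data_niov = data_num - v_req.prio;  /* data payload iovs */
          prot_niov = v_req.prio;             /* trailing T10 PI iovs */
  }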
Also included is a new VIRTIO_SCSI_F_T10_PI feature bit to signal
DIF/DIX fabric support. Note this is currently hardcoded to 1 for
testing purposes as v3.14-rc2 code is having problems correctly
proposing + acknowledging feature bits so that virtio_has_feature()
works as expected. Still tracking this separate bug down..
That said, here's a quick look at the WIP code in action on
v3.14-rc2 host/guest.
Comments..?
--nab
<virtio-scsi + LIO FILEIO LUN in DIF TYPE1 mode>
[ 3.324348] virtio-pci 0000:00:04.0: irq 40 for MSI/MSI-X
[ 3.324407] virtio-pci 0000:00:04.0: irq 41 for MSI/MSI-X
[ 3.324475] virtio-pci 0000:00:04.0: irq 42 for MSI/MSI-X
[ 3.324529] virtio-pci 0000:00:04.0: irq 43 for MSI/MSI-X
[ 3.325421] scsi0 : Virtio SCSI HBA
[ 3.486951] scsi 0:0:1:0: Direct-Access LIO-ORG FILEIO 4.0 PQ: 0 ANSI: 5
[ 3.515702] sd 0:0:1:0: [sda] Enabling DIF Type 1 protection
[ 3.515713] sd 0:0:1:0: [sda] 4194304 512-byte logical blocks: (2.14 GB/2.00 GiB)
[ 3.608441] sd 0:0:1:0: [sda] Write Protect is off
[ 3.632304] sd 0:0:1:0: [sda] Mode Sense: 43 00 00 08
[ 3.662132] sd 0:0:1:0: [sda] Write cache: disabled, read cache: enabled, doesn't support DPO or FUA
[ 3.938072] sda: unknown partition table
[ 3.970423] sd 0:0:1:0: [sda] Enabling DIX T10-DIF-TYPE1-CRC protection
[ 3.970425] sd 0:0:1:0: [sda] DIF application tag size 2
<initiator side TYPE1 GENERATE for WRITE + READ -> TYPE1 VERIFY>
[ 113.273961] virtio_scsi_add_cmd: CDB: 0x00 EDTL: 0 sg_count: 0 out_num: 1 in_num: 1
[ 113.277687] virtqueue_add_sgs: total_out: 1 total_in: 1
[ 114.438563] Entering sd_dif_type1_generate >>>>>>>>>>>>>>>
[ 114.438571] TYPE1 Generate: sector: 0 sdt->guard_tag: 0x21b2 sdt->app_tag: 0x0000 sdt->ref_tag: 0
[ 114.438578] TYPE1 Generate: sector: 1 sdt->guard_tag: 0xe534 sdt->app_tag: 0x0000 sdt->ref_tag: 1
[ 114.438582] TYPE1 Generate: sector: 2 sdt->guard_tag: 0x4f0b sdt->app_tag: 0x0000 sdt->ref_tag: 2
[ 114.438586] TYPE1 Generate: sector: 3 sdt->guard_tag: 0xe8cc sdt->app_tag: 0x0000 sdt->ref_tag: 3
[ 114.438590] TYPE1 Generate: sector: 4 sdt->guard_tag: 0xab21 sdt->app_tag: 0x0000 sdt->ref_tag: 4
[ 114.438594] TYPE1 Generate: sector: 5 sdt->guard_tag: 0x6c36 sdt->app_tag: 0x0000 sdt->ref_tag: 5
[ 114.438598] TYPE1 Generate: sector: 6 sdt->guard_tag: 0x5f28 sdt->app_tag: 0x0000 sdt->ref_tag: 6
[ 114.438602] TYPE1 Generate: sector: 7 sdt->guard_tag: 0xecd3 sdt->app_tag: 0x0000 sdt->ref_tag: 7
[ 114.438610] Entering sd_dif_type1_generate >>>>>>>>>>>>>>>
[ 114.438614] TYPE1 Generate: sector: 8 sdt->guard_tag: 0x01b4 sdt->app_tag: 0x0000 sdt->ref_tag: 8
[ 114.438618] TYPE1 Generate: sector: 9 sdt->guard_tag: 0x4362 sdt->app_tag: 0x0000 sdt->ref_tag: 9
[ 114.438622] TYPE1 Generate: sector: 10 sdt->guard_tag: 0x21fa sdt->app_tag: 0x0000 sdt->ref_tag: 10
[ 114.438626] TYPE1 Generate: sector: 11 sdt->guard_tag: 0xdb56 sdt->app_tag: 0x0000 sdt->ref_tag: 11
[ 114.438630] TYPE1 Generate: sector: 12 sdt->guard_tag: 0xb680 sdt->app_tag: 0x0000 sdt->ref_tag: 12
[ 114.438634] TYPE1 Generate: sector: 13 sdt->guard_tag: 0xd00f sdt->app_tag: 0x0000 sdt->ref_tag: 13
[ 114.438638] TYPE1 Generate: sector: 14 sdt->guard_tag: 0xfc4d sdt->app_tag: 0x0000 sdt->ref_tag: 14
[ 114.438643] TYPE1 Generate: sector: 15 sdt->guard_tag: 0x0447 sdt->app_tag: 0x0000 sdt->ref_tag: 15
[ 114.438655] Attaching prot_sg list for WRITE, prot_sg_count: 2
[ 114.440029] virtio_scsi_add_cmd: CDB: 0x2a EDTL: 8192 sg_count: 2 out_num: 3 in_num: 1
[ 114.440029] virtqueue_add_sgs: total_out: 5 total_in: 1
[ 132.407792] Attaching prot_sg list for READ, prot_sg_count: 1
[ 132.408039] virtio_scsi_add_cmd: CDB: 0x28 EDTL: 8192 sg_count: 2 out_num: 1 in_num: 3
[ 132.408039] virtqueue_add_sgs: total_out: 1 total_in: 4
[ 132.417454] TYPE1 Verify sector: 0 sdt->guard_tag: 0x21b2 sdt->app_tag: 0x0000 sdt->ref_tag: 0
[ 132.417459] TYPE1 Verify sector: 1 sdt->guard_tag: 0xe534 sdt->app_tag: 0x0000 sdt->ref_tag: 1
[ 132.417463] TYPE1 Verify sector: 2 sdt->guard_tag: 0x4f0b sdt->app_tag: 0x0000 sdt->ref_tag: 2
[ 132.417468] TYPE1 Verify sector: 3 sdt->guard_tag: 0xe8cc sdt->app_tag: 0x0000 sdt->ref_tag: 3
[ 132.417472] TYPE1 Verify sector: 4 sdt->guard_tag: 0xab21 sdt->app_tag: 0x0000 sdt->ref_tag: 4
[ 132.417476] TYPE1 Verify sector: 5 sdt->guard_tag: 0x6c36 sdt->app_tag: 0x0000 sdt->ref_tag: 5
[ 132.417480] TYPE1 Verify sector: 6 sdt->guard_tag: 0x5f28 sdt->app_tag: 0x0000 sdt->ref_tag: 6
[ 132.417484] TYPE1 Verify sector: 7 sdt->guard_tag: 0xecd3 sdt->app_tag: 0x0000 sdt->ref_tag: 7
[ 132.417488] TYPE1 Verify sector: 8 sdt->guard_tag: 0x01b4 sdt->app_tag: 0x0000 sdt->ref_tag: 8
[ 132.417492] TYPE1 Verify sector: 9 sdt->guard_tag: 0x4362 sdt->app_tag: 0x0000 sdt->ref_tag: 9
[ 132.417496] TYPE1 Verify sector: 10 sdt->guard_tag: 0x21fa sdt->app_tag: 0x0000 sdt->ref_tag: 10
[ 132.417500] TYPE1 Verify sector: 11 sdt->guard_tag: 0xdb56 sdt->app_tag: 0x0000 sdt->ref_tag: 11
[ 132.417504] TYPE1 Verify sector: 12 sdt->guard_tag: 0xb680 sdt->app_tag: 0x0000 sdt->ref_tag: 12
[ 132.417508] TYPE1 Verify sector: 13 sdt->guard_tag: 0xd00f sdt->app_tag: 0x0000 sdt->ref_tag: 13
[ 132.417512] TYPE1 Verify sector: 14 sdt->guard_tag: 0xfc4d sdt->app_tag: 0x0000 sdt->ref_tag: 14
[ 132.417516] TYPE1 Verify sector: 15 sdt->guard_tag: 0x0447 sdt->app_tag: 0x0000 sdt->ref_tag: 15
[ 132.417540] Attaching prot_sg list for READ, prot_sg_count: 1
[ 132.420328] virtio_scsi_add_cmd: CDB: 0x28 EDTL: 8192 sg_count: 2 out_num: 1 in_num: 3
[ 132.421044] virtqueue_add_sgs: total_out: 1 total_in: 4
[ 132.429092] virtio_scsi_add_cmd: CDB: 0x00 EDTL: 0 sg_count: 0 out_num: 1 in_num: 1
[ 132.432759] virtqueue_add_sgs: total_out: 1 total_in: 1
<vhost-scsi host side processing of data + protection SGLs>
[18603.378919] vhost_get_vq_desc: head: 2, out: 1 in: 1
[18603.378922] Calling __copy_from_user: vq->iov[0].iov_base: 00007feb89318c10, len: 51
[18603.378925] Allocated tv_cmd: ffff88080349fb60 exp_data_len: 0, data_direction: 3
[18603.378927] vhost_scsi got command opcode: 0x0, lun: 0
[18603.378933] vhost_get_vq_desc: head: 128, out: 1 in: 1
[18603.378953] vhost_scsi_complete_cmd_work tv_cmd ffff88080349fb60 resid 0 status 0x0
[18604.546673] vhost_get_vq_desc: head: 2, out: 5 in: 1
[18604.546678] Calling __copy_from_user: vq->iov[0].iov_base: 00007feb87e35d90, len: 51
[18604.546684] Allocated tv_cmd: ffff88080349fb60 exp_data_len: 8192, data_direction: 1
[18604.546688] vhost_scsi got command opcode: 0x2a, lun: 0
[18604.546691] vhost_scsi_map_iov_to_sgl sg ffff880859460000 sgl_count 2
[18604.546695] Mapping iovec ffff880810408908 for 2 pages
[18604.546699] vhost_scsi_map_iov_to_prot prot_sg ffff880855f54000 prot_sgl_count 2
[18604.546708] vhost_get_vq_desc: head: 128, out: 5 in: 1
[18604.546773] DIF WRITE sector: 0 guard_tag: 0x21b2 app_tag: 0x0000 ref_tag: 0
[18604.546779] DIF WRITE sector: 1 guard_tag: 0xe534 app_tag: 0x0000 ref_tag: 1
[18604.546784] DIF WRITE sector: 2 guard_tag: 0x4f0b app_tag: 0x0000 ref_tag: 2
[18604.546788] DIF WRITE sector: 3 guard_tag: 0xe8cc app_tag: 0x0000 ref_tag: 3
[18604.546793] DIF WRITE sector: 4 guard_tag: 0xab21 app_tag: 0x0000 ref_tag: 4
[18604.546797] DIF WRITE sector: 5 guard_tag: 0x6c36 app_tag: 0x0000 ref_tag: 5
[18604.546802] DIF WRITE sector: 6 guard_tag: 0x5f28 app_tag: 0x0000 ref_tag: 6
[18604.546807] DIF WRITE sector: 7 guard_tag: 0xecd3 app_tag: 0x0000 ref_tag: 7
[18604.546811] DIF WRITE sector: 8 guard_tag: 0x01b4 app_tag: 0x0000 ref_tag: 8
[18604.546816] DIF WRITE sector: 9 guard_tag: 0x4362 app_tag: 0x0000 ref_tag: 9
[18604.546820] DIF WRITE sector: 10 guard_tag: 0x21fa app_tag: 0x0000 ref_tag: 10
[18604.546825] DIF WRITE sector: 11 guard_tag: 0xdb56 app_tag: 0x0000 ref_tag: 11
[18604.546829] DIF WRITE sector: 12 guard_tag: 0xb680 app_tag: 0x0000 ref_tag: 12
[18604.546833] DIF WRITE sector: 13 guard_tag: 0xd00f app_tag: 0x0000 ref_tag: 13
[18604.546838] DIF WRITE sector: 14 guard_tag: 0xfc4d app_tag: 0x0000 ref_tag: 14
[18604.546842] DIF WRITE sector: 15 guard_tag: 0x0447 app_tag: 0x0000 ref_tag: 15
[18604.605089] vhost_scsi_complete_cmd_work tv_cmd ffff88080349fb60 resid 0 status 0x0
[18622.529730] vhost_get_vq_desc: head: 2, out: 1 in: 4
[18622.529734] Calling __copy_from_user: vq->iov[0].iov_base: 00007feb8923d790, len: 51
[18622.529739] Allocated tv_cmd: ffff88080349fb60 exp_data_len: 8192, data_direction: 2
[18622.529741] vhost_scsi got command opcode: 0x28, lun: 0
[18622.529744] vhost_scsi_map_iov_to_sgl sg ffff880859460000 sgl_count 2
[18622.529746] Mapping iovec ffff880810408918 for 2 pages
[18622.529749] vhost_scsi_map_iov_to_prot prot_sg ffff880855f54000 prot_sgl_count 1
[18622.529756] vhost_get_vq_desc: head: 128, out: 1 in: 4
[18622.529918] DIF READ sector: 0 guard_tag: 0x21b2 app_tag: 0x0000 ref_tag: 0
[18622.529924] DIF READ sector: 1 guard_tag: 0xe534 app_tag: 0x0000 ref_tag: 1
[18622.529929] DIF READ sector: 2 guard_tag: 0x4f0b app_tag: 0x0000 ref_tag: 2
[18622.529933] DIF READ sector: 3 guard_tag: 0xe8cc app_tag: 0x0000 ref_tag: 3
[18622.529938] DIF READ sector: 4 guard_tag: 0xab21 app_tag: 0x0000 ref_tag: 4
[18622.529942] DIF READ sector: 5 guard_tag: 0x6c36 app_tag: 0x0000 ref_tag: 5
[18622.529947] DIF READ sector: 6 guard_tag: 0x5f28 app_tag: 0x0000 ref_tag: 6
[18622.529951] DIF READ sector: 7 guard_tag: 0xecd3 app_tag: 0x0000 ref_tag: 7
[18622.529956] DIF READ sector: 8 guard_tag: 0x01b4 app_tag: 0x0000 ref_tag: 8
[18622.529960] DIF READ sector: 9 guard_tag: 0x4362 app_tag: 0x0000 ref_tag: 9
[18622.529965] DIF READ sector: 10 guard_tag: 0x21fa app_tag: 0x0000 ref_tag: 10
[18622.529970] DIF READ sector: 11 guard_tag: 0xdb56 app_tag: 0x0000 ref_tag: 11
[18622.529974] DIF READ sector: 12 guard_tag: 0xb680 app_tag: 0x0000 ref_tag: 12
[18622.529978] DIF READ sector: 13 guard_tag: 0xd00f app_tag: 0x0000 ref_tag: 13
[18622.529983] DIF READ sector: 14 guard_tag: 0xfc4d app_tag: 0x0000 ref_tag: 14
[18622.529987] DIF READ sector: 15 guard_tag: 0x0447 app_tag: 0x0000 ref_tag: 15
[18622.530025] vhost_scsi_complete_cmd_work tv_cmd ffff88080349fb60 resid 0 status 0x0
[18622.541483] vhost_get_vq_desc: head: 2, out: 1 in: 4
[18622.541486] Calling __copy_from_user: vq->iov[0].iov_base: 00007feb8923d310, len: 51
[18622.541490] Allocated tv_cmd: ffff88080349fb60 exp_data_len: 8192, data_direction: 2
[18622.541492] vhost_scsi got command opcode: 0x28, lun: 0
[18622.541494] vhost_scsi_map_iov_to_sgl sg ffff880859460000 sgl_count 2
[18622.541497] Mapping iovec ffff880810408918 for 2 pages
[18622.541500] vhost_scsi_map_iov_to_prot prot_sg ffff880855f54000 prot_sgl_count 1
[18622.541506] vhost_get_vq_desc: head: 128, out: 1 in: 4
[18622.541584] DIF READ sector: 16 guard_tag: 0xffff app_tag: 0xffff ref_tag: 16
[18622.541587] DIF READ sector: 17 guard_tag: 0xffff app_tag: 0xffff ref_tag: 17
[18622.541589] DIF READ sector: 18 guard_tag: 0xffff app_tag: 0xffff ref_tag: 18
[18622.541591] DIF READ sector: 19 guard_tag: 0xffff app_tag: 0xffff ref_tag: 19
[18622.541594] DIF READ sector: 20 guard_tag: 0xffff app_tag: 0xffff ref_tag: 20
[18622.541596] DIF READ sector: 21 guard_tag: 0xffff app_tag: 0xffff ref_tag: 21
[18622.541598] DIF READ sector: 22 guard_tag: 0xffff app_tag: 0xffff ref_tag: 22
[18622.541600] DIF READ sector: 23 guard_tag: 0xffff app_tag: 0xffff ref_tag: 23
[18622.541603] DIF READ sector: 24 guard_tag: 0xffff app_tag: 0xffff ref_tag: 24
[18622.541605] DIF READ sector: 25 guard_tag: 0xffff app_tag: 0xffff ref_tag: 25
[18622.541607] DIF READ sector: 26 guard_tag: 0xffff app_tag: 0xffff ref_tag: 26
[18622.541612] DIF READ sector: 27 guard_tag: 0xffff app_tag: 0xffff ref_tag: 27
[18622.541617] DIF READ sector: 28 guard_tag: 0xffff app_tag: 0xffff ref_tag: 28
[18622.541621] DIF READ sector: 29 guard_tag: 0xffff app_tag: 0xffff ref_tag: 29
[18622.541625] DIF READ sector: 30 guard_tag: 0xffff app_tag: 0xffff ref_tag: 30
[18622.541630] DIF READ sector: 31 guard_tag: 0xffff app_tag: 0xffff ref_tag: 31
[18622.541665] vhost_scsi_complete_cmd_work tv_cmd ffff88080349fb60 resid 0 status 0x0
[18622.548226] vhost_get_vq_desc: head: 2, out: 1 in: 1
[18622.548228] Calling __copy_from_user: vq->iov[0].iov_base: 00007feb8923d0d0, len: 51
[18622.548232] Allocated tv_cmd: ffff88080349fb60 exp_data_len: 0, data_direction: 3
[18622.548234] vhost_scsi got command opcode: 0x0, lun: 0
[18622.548240] vhost_get_vq_desc: head: 128, out: 1 in: 1
[18622.548261] vhost_scsi_complete_cmd_work tv_cmd ffff88080349fb60 resid 0 status 0x0
--
Nicholas Bellinger (6):
vhost/scsi: Move sanity check into vhost_scsi_map_iov_to_sgl
vhost/scsi: Add preallocation of protection SGLs
vhost/scsi: Add T10 PI IOV -> SGL memory mapping logic
vhost/scsi: Enable T10 PI IOV -> SGL memory mapping
vhost/scsi: Add new VIRTIO_SCSI_F_T10_PI feature bit
virtio-scsi: Enable DIF/DIX modes in SCSI host LLD
drivers/scsi/virtio_scsi.c | 33 +++++++--
drivers/vhost/scsi.c | 170 ++++++++++++++++++++++++++++++-------------
include/linux/virtio_scsi.h | 1 +
3 files changed, 147 insertions(+), 57 deletions(-)
--
1.7.10.4
From: Nicholas Bellinger <[email protected]>
This patch adds vhost_scsi_map_iov_to_prot() to perform the mapping of
T10 data integrity memory between virtio iov + struct scatterlist using
get_user_pages_fast() following existing code.
As with vhost_scsi_map_iov_to_sgl(), this does sanity checks against the
total prot_sgl_count vs. pre-allocated SGLs, and loops across protection
iovs using vhost_scsi_map_to_sgl() to perform the actual memory mapping.
Also update tcm_vhost_release_cmd() to release the associated
tvc_prot_sgl[] struct pages.
Cc: Martin K. Petersen <[email protected]>
Cc: Christoph Hellwig <[email protected]>
Cc: Hannes Reinecke <[email protected]>
Cc: Michael S. Tsirkin <[email protected]>
Cc: Paolo Bonzini <[email protected]>
Cc: Sagi Grimberg <[email protected]>
Signed-off-by: Nicholas Bellinger <[email protected]>
---
drivers/vhost/scsi.c | 7 ++++++-
1 file changed, 6 insertions(+), 1 deletion(-)
diff --git a/drivers/vhost/scsi.c b/drivers/vhost/scsi.c
index 314cbd4..0cf5948 100644
--- a/drivers/vhost/scsi.c
+++ b/drivers/vhost/scsi.c
@@ -80,6 +80,7 @@ struct tcm_vhost_cmd {
u64 tvc_tag;
/* The number of scatterlists associated with this cmd */
u32 tvc_sgl_count;
+ u32 tvc_prot_sgl_count;
/* Saved unpacked SCSI LUN for tcm_vhost_submission_work() */
u32 tvc_lun;
/* Pointer to the SGL formatted memory from virtio-scsi */
@@ -458,12 +459,16 @@ static void tcm_vhost_release_cmd(struct se_cmd *se_cmd)
struct tcm_vhost_cmd *tv_cmd = container_of(se_cmd,
struct tcm_vhost_cmd, tvc_se_cmd);
struct se_session *se_sess = se_cmd->se_sess;
+ int i;
if (tv_cmd->tvc_sgl_count) {
- u32 i;
for (i = 0; i < tv_cmd->tvc_sgl_count; i++)
put_page(sg_page(&tv_cmd->tvc_sgl[i]));
}
+ if (tv_cmd->tvc_prot_sgl_count) {
+ for (i = 0; i < tv_cmd->tvc_prot_sgl_count; i++)
+ put_page(sg_page(&tv_cmd->tvc_prot_sgl[i]));
+ }
tcm_vhost_put_inflight(tv_cmd->inflight);
percpu_ida_free(&se_sess->sess_tag_pool, se_cmd->map_tag);
--
1.7.10.4
From: Nicholas Bellinger <[email protected]>
This patch adds a VIRTIO_SCSI_F_T10_PI feature bit for signaling
host support for accepting T10 protection information SGLs from the
virtio-scsi guest.
Cc: Michael S. Tsirkin <[email protected]>
Cc: Paolo Bonzini <[email protected]>
Cc: Martin K. Petersen <[email protected]>
Cc: Christoph Hellwig <[email protected]>
Cc: Hannes Reinecke <[email protected]>
Cc: Sagi Grimberg <[email protected]>
Signed-off-by: Nicholas Bellinger <[email protected]>
---
drivers/vhost/scsi.c | 3 ++-
include/linux/virtio_scsi.h | 1 +
2 files changed, 3 insertions(+), 1 deletion(-)
diff --git a/drivers/vhost/scsi.c b/drivers/vhost/scsi.c
index 3bdb625..734a76c 100644
--- a/drivers/vhost/scsi.c
+++ b/drivers/vhost/scsi.c
@@ -169,7 +169,8 @@ enum {
};
enum {
- VHOST_SCSI_FEATURES = VHOST_FEATURES | (1ULL << VIRTIO_SCSI_F_HOTPLUG)
+ VHOST_SCSI_FEATURES = VHOST_FEATURES | (1ULL << VIRTIO_SCSI_F_HOTPLUG) |
+ (1ULL << VIRTIO_SCSI_F_T10_PI)
};
#define VHOST_SCSI_MAX_TARGET 256
diff --git a/include/linux/virtio_scsi.h b/include/linux/virtio_scsi.h
index 4195b97..dc3764b 100644
--- a/include/linux/virtio_scsi.h
+++ b/include/linux/virtio_scsi.h
@@ -97,6 +97,7 @@ struct virtio_scsi_config {
#define VIRTIO_SCSI_F_INOUT 0
#define VIRTIO_SCSI_F_HOTPLUG 1
#define VIRTIO_SCSI_F_CHANGE 2
+#define VIRTIO_SCSI_F_T10_PI 3
/* Response codes */
#define VIRTIO_SCSI_S_OK 0
--
1.7.10.4
From: Nicholas Bellinger <[email protected]>
This patch updates vhost_scsi_handle_vq() to calculate data_niov +
prot_niov, currently based upon virtio_scsi_cmd_req->prio, for figuring
out how many separate data + protection SGLs to expect within data_num.
Also update tcm_vhost_submission_work() to pass the pre-allocated
cmd->tvc_prot_sgl[] memory into target_submit_cmd_map_sgls().
Cc: Michael S. Tsirkin <[email protected]>
Cc: Paolo Bonzini <[email protected]>
Cc: Martin K. Petersen <[email protected]>
Cc: Christoph Hellwig <[email protected]>
Cc: Hannes Reinecke <[email protected]>
Cc: Sagi Grimberg <[email protected]>
Signed-off-by: Nicholas Bellinger <[email protected]>
---
drivers/vhost/scsi.c | 45 ++++++++++++++++++++++++++++++---------------
1 file changed, 30 insertions(+), 15 deletions(-)
diff --git a/drivers/vhost/scsi.c b/drivers/vhost/scsi.c
index 0cf5948..3bdb625 100644
--- a/drivers/vhost/scsi.c
+++ b/drivers/vhost/scsi.c
@@ -908,18 +908,15 @@ static void tcm_vhost_submission_work(struct work_struct *work)
container_of(work, struct tcm_vhost_cmd, work);
struct tcm_vhost_nexus *tv_nexus;
struct se_cmd *se_cmd = &cmd->tvc_se_cmd;
- struct scatterlist *sg_ptr, *sg_bidi_ptr = NULL;
- int rc, sg_no_bidi = 0;
+ struct scatterlist *sg_ptr, *sg_prot_ptr = NULL;
+ int rc;
+ /* FIXME: BIDI operation */
if (cmd->tvc_sgl_count) {
sg_ptr = cmd->tvc_sgl;
-/* FIXME: Fix BIDI operation in tcm_vhost_submission_work() */
-#if 0
- if (se_cmd->se_cmd_flags & SCF_BIDI) {
- sg_bidi_ptr = NULL;
- sg_no_bidi = 0;
- }
-#endif
+
+ if (cmd->tvc_prot_sgl_count)
+ sg_prot_ptr = cmd->tvc_prot_sgl;
} else {
sg_ptr = NULL;
}
@@ -930,7 +927,7 @@ static void tcm_vhost_submission_work(struct work_struct *work)
cmd->tvc_lun, cmd->tvc_exp_data_len,
cmd->tvc_task_attr, cmd->tvc_data_direction,
TARGET_SCF_ACK_KREF, sg_ptr, cmd->tvc_sgl_count,
- sg_bidi_ptr, sg_no_bidi, NULL, 0);
+ NULL, 0, sg_prot_ptr, cmd->tvc_prot_sgl_count);
if (rc < 0) {
transport_send_check_condition_and_sense(se_cmd,
TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE, 0);
@@ -966,7 +963,7 @@ vhost_scsi_handle_vq(struct vhost_scsi *vs, struct vhost_virtqueue *vq)
struct tcm_vhost_cmd *cmd;
u32 exp_data_len, data_first, data_num, data_direction;
unsigned out, in, i;
- int head, ret;
+ int head, ret, data_niov, prot_niov;
u8 target;
mutex_lock(&vq->mutex);
@@ -998,7 +995,7 @@ vhost_scsi_handle_vq(struct vhost_scsi *vs, struct vhost_virtqueue *vq)
break;
}
-/* FIXME: BIDI operation */
+ /* FIXME: BIDI operation */
if (out == 1 && in == 1) {
data_direction = DMA_NONE;
data_first = 0;
@@ -1052,8 +1049,17 @@ vhost_scsi_handle_vq(struct vhost_scsi *vs, struct vhost_virtqueue *vq)
continue;
}
+ data_niov = data_num;
+ prot_niov = 0;
+
+ if (data_num && v_req.prio != 0) {
+ /* FIXME: Figure out a better way.. */
+ data_niov = data_num - v_req.prio;
+ prot_niov = v_req.prio;
+ }
+
exp_data_len = 0;
- for (i = 0; i < data_num; i++)
+ for (i = 0; i < data_niov; i++)
exp_data_len += vq->iov[data_first + i].iov_len;
cmd = vhost_scsi_get_tag(vq, tpg, &v_req,
@@ -1096,14 +1102,23 @@ vhost_scsi_handle_vq(struct vhost_scsi *vs, struct vhost_virtqueue *vq)
if (data_direction != DMA_NONE) {
ret = vhost_scsi_map_iov_to_sgl(cmd,
- &vq->iov[data_first], data_num,
+ &vq->iov[data_first], data_niov,
data_direction == DMA_FROM_DEVICE);
if (unlikely(ret)) {
vq_err(vq, "Failed to map iov to sgl\n");
goto err_free;
}
}
-
+ if (prot_niov) {
+ ret = vhost_scsi_map_iov_to_prot(cmd,
+ &vq->iov[data_first + data_niov], prot_niov,
+ data_direction == DMA_FROM_DEVICE);
+ if (unlikely(ret)) {
+ vq_err(vq, "Failed to map iov to"
+ " prot_sgl\n");
+ goto err_free;
+ }
+ }
/*
* Save the descriptor from vhost_get_vq_desc() to be used to
* complete the virtio-scsi request in TCM callback context via
--
1.7.10.4
From: Nicholas Bellinger <[email protected]>
This patch updates virtscsi_probe() to set up the Scsi_Host level
protection resources necessary to enable DIF on virtio-scsi
<-> vhost-scsi LUNs. The feature check is currently hardcoded to 1.
It changes virtscsi_add_cmd() so that outgoing / incoming protection
SGLs are attached after each data payload, and currently uses the
unused virtio_scsi_cmd_req->prio bits to signal the protection SGL
count (prot_sgl_count) for vhost/scsi to expect.
Cc: Paolo Bonzini <[email protected]>
Cc: Michael S. Tsirkin <[email protected]>
Cc: Martin K. Petersen <[email protected]>
Cc: Christoph Hellwig <[email protected]>
Cc: Hannes Reinecke <[email protected]>
Cc: Sagi Grimberg <[email protected]>
Signed-off-by: Nicholas Bellinger <[email protected]>
---
drivers/scsi/virtio_scsi.c | 33 ++++++++++++++++++++++++++++-----
1 file changed, 28 insertions(+), 5 deletions(-)
diff --git a/drivers/scsi/virtio_scsi.c b/drivers/scsi/virtio_scsi.c
index 16bfd50..294466d 100644
--- a/drivers/scsi/virtio_scsi.c
+++ b/drivers/scsi/virtio_scsi.c
@@ -440,7 +440,7 @@ static int virtscsi_add_cmd(struct virtqueue *vq,
size_t req_size, size_t resp_size, gfp_t gfp)
{
struct scsi_cmnd *sc = cmd->sc;
- struct scatterlist *sgs[4], req, resp;
+ struct scatterlist *sgs[6], req, resp;
struct sg_table *out, *in;
unsigned out_num = 0, in_num = 0;
@@ -458,16 +458,22 @@ static int virtscsi_add_cmd(struct virtqueue *vq,
sgs[out_num++] = &req;
/* Data-out buffer. */
- if (out)
+ if (out) {
sgs[out_num++] = out->sgl;
+ if (scsi_prot_sg_count(sc))
+ sgs[out_num++] = scsi_prot_sglist(sc);
+ }
/* Response header. */
sg_init_one(&resp, &cmd->resp, resp_size);
sgs[out_num + in_num++] = &resp;
/* Data-in buffer */
- if (in)
+ if (in) {
sgs[out_num + in_num++] = in->sgl;
+ if (scsi_prot_sg_count(sc))
+ sgs[out_num + in_num++] = scsi_prot_sglist(sc);
+ }
return virtqueue_add_sgs(vq, sgs, out_num, in_num, cmd, gfp);
}
@@ -498,6 +504,7 @@ static int virtscsi_queuecommand(struct virtio_scsi *vscsi,
{
struct virtio_scsi_cmd *cmd;
int ret;
+ u8 prio = 0;
struct Scsi_Host *shost = virtio_scsi_host(vscsi->vdev);
BUG_ON(scsi_sg_count(sc) > shost->sg_tablesize);
@@ -515,6 +522,10 @@ static int virtscsi_queuecommand(struct virtio_scsi *vscsi,
memset(cmd, 0, sizeof(*cmd));
cmd->sc = sc;
+
+ if (scsi_prot_sg_count(sc))
+ prio = (u8)scsi_prot_sg_count(sc);
+
cmd->req.cmd = (struct virtio_scsi_cmd_req){
.lun[0] = 1,
.lun[1] = sc->device->id,
@@ -522,7 +533,7 @@ static int virtscsi_queuecommand(struct virtio_scsi *vscsi,
.lun[3] = sc->device->lun & 0xff,
.tag = (unsigned long)sc,
.task_attr = VIRTIO_SCSI_S_SIMPLE,
- .prio = 0,
+ .prio = prio,
.crn = 0,
};
@@ -871,7 +882,7 @@ static int virtscsi_probe(struct virtio_device *vdev)
{
struct Scsi_Host *shost;
struct virtio_scsi *vscsi;
- int err;
+ int err, host_prot;
u32 sg_elems, num_targets;
u32 cmd_per_lun;
u32 num_queues;
@@ -921,6 +932,17 @@ static int virtscsi_probe(struct virtio_device *vdev)
shost->max_id = num_targets;
shost->max_channel = 0;
shost->max_cmd_len = VIRTIO_SCSI_CDB_SIZE;
+
+ /* FIXME: Figure out why this is broken.. */
+ if (1 || virtio_has_feature(vdev, VIRTIO_SCSI_F_T10_PI)) {
+ host_prot = SHOST_DIF_TYPE1_PROTECTION | SHOST_DIF_TYPE2_PROTECTION |
+ SHOST_DIF_TYPE3_PROTECTION | SHOST_DIX_TYPE1_PROTECTION |
+ SHOST_DIX_TYPE2_PROTECTION | SHOST_DIX_TYPE3_PROTECTION;
+
+ scsi_host_set_prot(shost, host_prot);
+ scsi_host_set_guard(shost, SHOST_DIX_GUARD_CRC);
+ }
+
err = scsi_add_host(shost, &vdev->dev);
if (err)
goto scsi_add_host_failed;
@@ -990,6 +1012,7 @@ static struct virtio_device_id id_table[] = {
static unsigned int features[] = {
VIRTIO_SCSI_F_HOTPLUG,
VIRTIO_SCSI_F_CHANGE,
+ VIRTIO_SCSI_F_T10_PI,
};
static struct virtio_driver virtio_scsi_driver = {
--
1.7.10.4
From: Nicholas Bellinger <[email protected]>
This patch updates tcm_vhost_make_nexus() to pre-allocate the per-descriptor
tcm_vhost_cmd->tvc_prot_sgl[] used to expose protection SGLs from within
virtio-scsi guest memory to vhost-scsi.
Cc: Michael S. Tsirkin <[email protected]>
Cc: Paolo Bonzini <[email protected]>
Cc: Martin K. Petersen <[email protected]>
Cc: Christoph Hellwig <[email protected]>
Cc: Hannes Reinecke <[email protected]>
Cc: Sagi Grimberg <[email protected]>
Signed-off-by: Nicholas Bellinger <[email protected]>
---
drivers/vhost/scsi.c | 56 +++++++++++++++++++++++++++++++++++++++++++++++++-
1 file changed, 55 insertions(+), 1 deletion(-)
diff --git a/drivers/vhost/scsi.c b/drivers/vhost/scsi.c
index 31feb47..314cbd4 100644
--- a/drivers/vhost/scsi.c
+++ b/drivers/vhost/scsi.c
@@ -58,6 +58,7 @@
#define TCM_VHOST_DEFAULT_TAGS 256
#define TCM_VHOST_PREALLOC_SGLS 2048
#define TCM_VHOST_PREALLOC_UPAGES 2048
+#define TCM_VHOST_PREALLOC_PROT_SGLS 512
struct vhost_scsi_inflight {
/* Wait for the flush operation to finish */
@@ -83,6 +84,7 @@ struct tcm_vhost_cmd {
u32 tvc_lun;
/* Pointer to the SGL formatted memory from virtio-scsi */
struct scatterlist *tvc_sgl;
+ struct scatterlist *tvc_prot_sgl;
struct page **tvc_upages;
/* Pointer to response */
struct virtio_scsi_cmd_resp __user *tvc_resp;
@@ -717,7 +719,7 @@ vhost_scsi_get_tag(struct vhost_virtqueue *vq,
struct tcm_vhost_cmd *cmd;
struct tcm_vhost_nexus *tv_nexus;
struct se_session *se_sess;
- struct scatterlist *sg;
+ struct scatterlist *sg, *prot_sg;
struct page **pages;
int tag;
@@ -736,10 +738,12 @@ vhost_scsi_get_tag(struct vhost_virtqueue *vq,
cmd = &((struct tcm_vhost_cmd *)se_sess->sess_cmd_map)[tag];
sg = cmd->tvc_sgl;
+ prot_sg = cmd->tvc_prot_sgl;
pages = cmd->tvc_upages;
memset(cmd, 0, sizeof(struct tcm_vhost_cmd));
cmd->tvc_sgl = sg;
+ cmd->tvc_prot_sgl = prot_sg;
cmd->tvc_upages = pages;
cmd->tvc_se_cmd.map_tag = tag;
cmd->tvc_tag = v_req->tag;
@@ -852,6 +856,47 @@ vhost_scsi_map_iov_to_sgl(struct tcm_vhost_cmd *cmd,
return 0;
}
+static int
+vhost_scsi_map_iov_to_prot(struct tcm_vhost_cmd *cmd,
+ struct iovec *iov,
+ int niov,
+ bool write)
+{
+ struct scatterlist *prot_sg = cmd->tvc_prot_sgl;
+ unsigned int prot_sgl_count = 0;
+ int ret, i;
+
+ for (i = 0; i < niov; i++)
+ prot_sgl_count += iov_num_pages(&iov[i]);
+
+ if (prot_sgl_count > TCM_VHOST_PREALLOC_PROT_SGLS) {
+ pr_err("vhost_scsi_map_iov_to_prot() sgl_count: %u greater than"
+ " preallocated TCM_VHOST_PREALLOC_PROT_SGLS: %u\n",
+ prot_sgl_count, TCM_VHOST_PREALLOC_PROT_SGLS);
+ return -ENOBUFS;
+ }
+
+ pr_debug("%s prot_sg %p prot_sgl_count %u\n", __func__,
+ prot_sg, prot_sgl_count);
+ sg_init_table(prot_sg, prot_sgl_count);
+ cmd->tvc_prot_sgl_count = prot_sgl_count;
+
+ for (i = 0; i < niov; i++) {
+ ret = vhost_scsi_map_to_sgl(cmd, prot_sg, prot_sgl_count, &iov[i],
+ cmd->tvc_upages, write);
+ if (ret < 0) {
+ for (i = 0; i < cmd->tvc_prot_sgl_count; i++)
+ put_page(sg_page(&cmd->tvc_prot_sgl[i]));
+
+ cmd->tvc_prot_sgl_count = 0;
+ return ret;
+ }
+ prot_sg += ret;
+ prot_sgl_count -= ret;
+ }
+ return 0;
+}
+
static void tcm_vhost_submission_work(struct work_struct *work)
{
struct tcm_vhost_cmd *cmd =
@@ -1692,6 +1737,7 @@ static void tcm_vhost_free_cmd_map_res(struct tcm_vhost_nexus *nexus,
tv_cmd = &((struct tcm_vhost_cmd *)se_sess->sess_cmd_map)[i];
kfree(tv_cmd->tvc_sgl);
+ kfree(tv_cmd->tvc_prot_sgl);
kfree(tv_cmd->tvc_upages);
}
}
@@ -1750,6 +1796,14 @@ static int tcm_vhost_make_nexus(struct tcm_vhost_tpg *tpg,
pr_err("Unable to allocate tv_cmd->tvc_upages\n");
goto out;
}
+
+ tv_cmd->tvc_prot_sgl = kzalloc(sizeof(struct scatterlist) *
+ TCM_VHOST_PREALLOC_PROT_SGLS, GFP_KERNEL);
+ if (!tv_cmd->tvc_prot_sgl) {
+ mutex_unlock(&tpg->tv_tpg_mutex);
+ pr_err("Unable to allocate tv_cmd->tvc_prot_sgl\n");
+ goto out;
+ }
}
/*
* Since we are running in 'demo mode' this call with generate a
--
1.7.10.4
From: Nicholas Bellinger <[email protected]>
Move the overflow check for sgl_count > TCM_VHOST_PREALLOC_SGLS into
vhost_scsi_map_iov_to_sgl() so that it's based on the total number of
SGLs across all IOVs, instead of a single IOV.
Also, rename TCM_VHOST_PREALLOC_PAGES -> TCM_VHOST_PREALLOC_UPAGES
to better describe pointers to user-space pages.
Cc: Michael S. Tsirkin <[email protected]>
Cc: Paolo Bonzini <[email protected]>
Cc: Martin K. Petersen <[email protected]>
Cc: Christoph Hellwig <[email protected]>
Cc: Hannes Reinecke <[email protected]>
Cc: Sagi Grimberg <[email protected]>
Signed-off-by: Nicholas Bellinger <[email protected]>
---
drivers/vhost/scsi.c | 59 +++++++++++++++++++++-----------------------------
1 file changed, 25 insertions(+), 34 deletions(-)
diff --git a/drivers/vhost/scsi.c b/drivers/vhost/scsi.c
index 0a025b8..31feb47 100644
--- a/drivers/vhost/scsi.c
+++ b/drivers/vhost/scsi.c
@@ -57,7 +57,7 @@
#define TCM_VHOST_MAX_CDB_SIZE 32
#define TCM_VHOST_DEFAULT_TAGS 256
#define TCM_VHOST_PREALLOC_SGLS 2048
-#define TCM_VHOST_PREALLOC_PAGES 2048
+#define TCM_VHOST_PREALLOC_UPAGES 2048
struct vhost_scsi_inflight {
/* Wait for the flush operation to finish */
@@ -762,35 +762,28 @@ vhost_scsi_map_to_sgl(struct tcm_vhost_cmd *tv_cmd,
struct scatterlist *sgl,
unsigned int sgl_count,
struct iovec *iov,
- int write)
+ struct page **pages,
+ bool write)
{
unsigned int npages = 0, pages_nr, offset, nbytes;
struct scatterlist *sg = sgl;
void __user *ptr = iov->iov_base;
size_t len = iov->iov_len;
- struct page **pages;
int ret, i;
- if (sgl_count > TCM_VHOST_PREALLOC_SGLS) {
- pr_err("vhost_scsi_map_to_sgl() psgl_count: %u greater than"
- " preallocated TCM_VHOST_PREALLOC_SGLS: %u\n",
- sgl_count, TCM_VHOST_PREALLOC_SGLS);
- return -ENOBUFS;
- }
-
pages_nr = iov_num_pages(iov);
- if (pages_nr > sgl_count)
+ if (pages_nr > sgl_count) {
+ pr_err("vhost_scsi_map_iov_to_sgl() pages_nr: %u greater than"
+ " sgl_count: %u\n", pages_nr, sgl_count);
return -ENOBUFS;
-
- if (pages_nr > TCM_VHOST_PREALLOC_PAGES) {
+ }
+ if (pages_nr > TCM_VHOST_PREALLOC_UPAGES) {
pr_err("vhost_scsi_map_to_sgl() pages_nr: %u greater than"
- " preallocated TCM_VHOST_PREALLOC_PAGES: %u\n",
- pages_nr, TCM_VHOST_PREALLOC_PAGES);
+ " preallocated TCM_VHOST_PREALLOC_UPAGES: %u\n",
+ pages_nr, TCM_VHOST_PREALLOC_UPAGES);
return -ENOBUFS;
}
- pages = tv_cmd->tvc_upages;
-
ret = get_user_pages_fast((unsigned long)ptr, pages_nr, write, pages);
/* No pages were pinned */
if (ret < 0)
@@ -820,33 +813,32 @@ out:
static int
vhost_scsi_map_iov_to_sgl(struct tcm_vhost_cmd *cmd,
struct iovec *iov,
- unsigned int niov,
- int write)
+ int niov,
+ bool write)
{
- int ret;
- unsigned int i;
- u32 sgl_count;
- struct scatterlist *sg;
+ struct scatterlist *sg = cmd->tvc_sgl;
+ unsigned int sgl_count = 0;
+ int ret, i;
- /*
- * Find out how long sglist needs to be
- */
- sgl_count = 0;
for (i = 0; i < niov; i++)
sgl_count += iov_num_pages(&iov[i]);
- /* TODO overflow checking */
+ if (sgl_count > TCM_VHOST_PREALLOC_SGLS) {
+ pr_err("vhost_scsi_map_iov_to_sgl() sgl_count: %u greater than"
+ " preallocated TCM_VHOST_PREALLOC_SGLS: %u\n",
+ sgl_count, TCM_VHOST_PREALLOC_SGLS);
+ return -ENOBUFS;
+ }
- sg = cmd->tvc_sgl;
pr_debug("%s sg %p sgl_count %u\n", __func__, sg, sgl_count);
sg_init_table(sg, sgl_count);
-
cmd->tvc_sgl_count = sgl_count;
- pr_debug("Mapping %u iovecs for %u pages\n", niov, sgl_count);
+ pr_debug("Mapping iovec %p for %u pages\n", &iov[0], sgl_count);
+
for (i = 0; i < niov; i++) {
ret = vhost_scsi_map_to_sgl(cmd, sg, sgl_count, &iov[i],
- write);
+ cmd->tvc_upages, write);
if (ret < 0) {
for (i = 0; i < cmd->tvc_sgl_count; i++)
put_page(sg_page(&cmd->tvc_sgl[i]));
@@ -854,7 +846,6 @@ vhost_scsi_map_iov_to_sgl(struct tcm_vhost_cmd *cmd,
cmd->tvc_sgl_count = 0;
return ret;
}
-
sg += ret;
sgl_count -= ret;
}
@@ -1753,7 +1744,7 @@ static int tcm_vhost_make_nexus(struct tcm_vhost_tpg *tpg,
}
tv_cmd->tvc_upages = kzalloc(sizeof(struct page *) *
- TCM_VHOST_PREALLOC_PAGES, GFP_KERNEL);
+ TCM_VHOST_PREALLOC_UPAGES, GFP_KERNEL);
if (!tv_cmd->tvc_upages) {
mutex_unlock(&tpg->tv_tpg_mutex);
pr_err("Unable to allocate tv_cmd->tvc_upages\n");
--
1.7.10.4
On Mon, Feb 24, 2014 at 05:32:24AM +0000, Nicholas A. Bellinger wrote:
> Also included is a new VIRTIO_SCSI_F_T10_PI feature bit to signal
> DIF/DIX fabric support.
We'll also need to add the new feature in the
virtio spec.
On 24/02/2014 06:32, Nicholas A. Bellinger wrote:
> AFAICT up until this point the ->prio field has been unused, but
> I'm certainly open to better ways of signaling (to vhost) that some
> number of metadata iovs are to be expected.. Any thoughts..?
Hi nab,
the virtio-scsi side of the patch is nice and readable. As requested,
here are my thoughts on how to add it to the standard.
The ->prio field is there to mimic SAM's command priority field (8.7 in
my copy of the standard). I'd rather leave it alone; I understand this
is the main reason why this patch is RFC.
Since we have a new feature bit, we can add a new element before the
cdb. It could be a scatter/gather list count like you did here, or
it could be a byte count. Even better, we can add _two_ new fields, one
for protection data out and one for protection data in.
Also, do we need an equivalent of the residual field, but for metadata?
Finally, any reason why you put the data sg elements before the metadata
sg elements? I would have thought that processing is a bit simpler if
either the metadata comes first, or you store in the command header the
data count (either sg or byte). Because the virtio buffers form a
linked list, it's a bit backwards to put metadata last, and store
metadata count in the command header; it prevents you from processing
the buffers online because you don't know when the metadata starts.
Even though the Linux virtio layer always gives you a buffer count, this
need not be the case in general.
Paolo
On Mon, 2014-02-24 at 11:23 +0100, Paolo Bonzini wrote:
> On 24/02/2014 06:32, Nicholas A. Bellinger wrote:
> > AFAICT up until this point the ->prio field has been unused, but
> > I'm certainly open to better ways of signaling (to vhost) that some
> > number of metadata iovs are to be expected.. Any thoughts..?
>
> Hi nab,
>
> the virtio-scsi side of the patch is nice and readable. As requested,
> here are my thoughts on how to add it to the standard.
>
> The ->prio field is there to mimic SAM's command priority field (8.7 in
> my copy of the standard). I'd rather leave it alone; I understand this
> is the main reason why this patch is RFC.
Yes. ;)
>
> Since we have a new feature bit, we can add a new element before the
> cdb. It could be a count of scatter/gather list like you did here, or
> it could be a byte count. Even better, we can add _two_ new fields, one
> for protection data out and one for protection data in.
>
Having two 16-bit fields for data out / data in protection count in the
command header should be fine.
So that said, I'll add a new virtio_scsi_cmd_req_pi definition per your
recommendation, and will update the series to use it when the
VIRTIO_SCSI_F_T10_PI feature bit has been negotiated on both ends.
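As a rough sketch of the layout under discussion (the pi_out / pi_in
field names and 16-bit widths here are placeholders, nothing final),
something along these lines:
  struct virtio_scsi_cmd_req_pi {
          u8 lun[8];              /* Logical Unit Number */
          u64 tag;                /* Command identifier */
          u8 task_attr;           /* Task attribute */
          u8 prio;                /* SAM command priority field */
          u8 crn;
          u16 pi_out;             /* DataOUT protection descriptor count */
          u16 pi_in;              /* DataIN protection descriptor count */
          u8 cdb[VIRTIO_SCSI_CDB_SIZE];
  } __packed;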
> Also, do we need an equivalent of the residual field, but for metadata?
>
Mmm, at least for PI I don't think a residual field is necessary.
Any time the metadata is not fully read on outgoing WRITEs, or written
on incoming READs, the next hop performing a VERIFY operation will end
up failing with a GUARD or REFERENCE TAG failure.
MKP..?
> Finally, any reason why you put the data sg elements before the metadata
> sg elements?
Nope, no particular reason for this.
> I would have thought that processing is a bit simpler if
> either the metadata comes first, or you store in the command header the
> data count (either sg or byte). Because the virtio buffers form a
> linked list, it's a bit backwards to put metadata last, and store
> metadata count in the command header; it prevents you from processing
> the buffers online because you don't know when the metadata starts.
> Even though the Linux virtio layer always gives you a buffer count, this
> need not be the case in general.
>
No objection here. Updating the patch series to place protection
information ahead of the actual data payload.
--nab
>>>>> "nab" == Nicholas A Bellinger <[email protected]> writes:
nab> Mmm, at least for PI I don't think a residual field is necessary.
nab> Any time the metadata is not fully read on outgoing WRITEs, or
nab> written on incoming READs the next hop performing a VERIFY
nab> operation will end up failing with a GUARD or REFERENCE TAG
nab> failure.
nab> MKP..?
Yeah, that should be OK.
--
Martin K. Petersen Oracle Linux Engineering