Recent updates to the pHyp firmware and VIOS releases provide new infrastructure
for enabling Subordinate Command Response Queues (Sub-CRQs), where each Sub-CRQ
is a channel backed by an actual hardware queue in the FC stack on the partner
VIOS. Sub-CRQs are registered with the firmware via hypercalls and then
negotiated with the VIOS via new Management Datagrams (MADs) during channel setup.
This initial implementation adds the necessary Sub-CRQ framework and implements
the new MADs for negotiating and assigning a set of Sub-CRQs to the associated
VIOS HW-backed channels. The event pool and locking still leverage the legacy
single-queue implementation, so lock contention becomes problematic as the
number of queues increases. Even so, this initial work demonstrates a 1.2x
increase in IOPS when configured with two HW queues despite the lock contention.
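For orientation, here is a rough sketch of the per-channel state introduced by
the series, reconstructed from the fields used in the patches below (msgs,
msg_token, cookie, hw_irq, hwq_id, size, cur, vhost, and the scsi_scrqs
container); it is not the authoritative definition in ibmvfc.h:
/* Rough sketch, not the in-tree definition: per-channel Sub-CRQ state
 * as suggested by the patch bodies in this series.
 */
struct ibmvfc_sub_queue {
	struct ibmvfc_sub_crq *msgs;	/* page of response entries shared with firmware */
	dma_addr_t msg_token;		/* DMA address of msgs passed to H_REG_SUB_CRQ */
	int size;			/* number of entries in the page */
	int cur;			/* current consumer index into msgs */
	unsigned long cookie;		/* queue handle returned by firmware */
	unsigned long hw_irq;		/* interrupt source assigned to this queue */
	int hwq_id;			/* index of this hardware queue */
	struct ibmvfc_host *vhost;	/* owning host */
};
struct ibmvfc_scsi_channels {
	struct ibmvfc_sub_queue *scrqs;	/* IBMVFC_SCSI_HW_QUEUES entries */
	unsigned int active_queues;	/* channels actually granted by the VIOS */
};
Each entry is registered with firmware individually (see the h_reg_sub_crq
wrapper and ibmvfc_register_scsi_channel below), and active_queues tracks how
many of the requested channels the VIOS actually granted.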
changes in v2:
* Patch 4: changed firmware support logging to dev_warn_once
* Patch 6: adjusted locking
* Patch 15: dropped logging verbosity, moved cancel event tracking into subqueue
* Patch 17: removed write permission for migration module parameters
* drive hard reset after update to number of scsi channels
changes in v2:
* Patch 4: NULL'd scsi_scrq reference after deallocation
* Patch 6: Added switch case to handle XPORT event
* Patch 9: fixed ibmvfc_event leak and double free
* added support for cancel command with MQ
* added parameter toggles for MQ settings
Tyrel Datwyler (18):
ibmvfc: add vhost fields and defaults for MQ enablement
ibmvfc: define hcall wrapper for registering a Sub-CRQ
ibmvfc: add Subordinate CRQ definitions
ibmvfc: add alloc/dealloc routines for SCSI Sub-CRQ Channels
ibmvfc: add Sub-CRQ IRQ enable/disable routine
ibmvfc: add handlers to drain and complete Sub-CRQ responses
ibmvfc: define Sub-CRQ interrupt handler routine
ibmvfc: map/request irq and register Sub-CRQ interrupt handler
ibmvfc: implement channel enquiry and setup commands
ibmvfc: advertise client support for using hardware channels
ibmvfc: set and track hw queue in ibmvfc_event struct
ibmvfc: send commands down HW Sub-CRQ when channelized
ibmvfc: register Sub-CRQ handles with VIOS during channel setup
ibmvfc: add cancel mad initialization helper
ibmvfc: send Cancel MAD down each hw scsi channel
ibmvfc: enable MQ and set reasonable defaults
ibmvfc: provide module parameters for MQ settings
ibmvfc: drop host lock when completing commands in CRQ
drivers/scsi/ibmvscsi/ibmvfc.c | 721 +++++++++++++++++++++++++++++----
drivers/scsi/ibmvscsi/ibmvfc.h | 79 +++-
2 files changed, 711 insertions(+), 89 deletions(-)
--
2.27.0
Simple handler that calls the Sub-CRQ drain routine directly.
Signed-off-by: Tyrel Datwyler <[email protected]>
Reviewed-by: Brian King <[email protected]>
---
drivers/scsi/ibmvscsi/ibmvfc.c | 10 ++++++++++
1 file changed, 10 insertions(+)
diff --git a/drivers/scsi/ibmvscsi/ibmvfc.c b/drivers/scsi/ibmvscsi/ibmvfc.c
index b61ae1df21e5..649268996a5c 100644
--- a/drivers/scsi/ibmvscsi/ibmvfc.c
+++ b/drivers/scsi/ibmvscsi/ibmvfc.c
@@ -3461,6 +3461,16 @@ static void ibmvfc_drain_sub_crq(struct ibmvfc_sub_queue *scrq)
spin_unlock_irqrestore(scrq->vhost->host->host_lock, flags);
}
+static irqreturn_t ibmvfc_interrupt_scsi(int irq, void *scrq_instance)
+{
+ struct ibmvfc_sub_queue *scrq = (struct ibmvfc_sub_queue *)scrq_instance;
+
+ ibmvfc_toggle_scrq_irq(scrq, 0);
+ ibmvfc_drain_sub_crq(scrq);
+
+ return IRQ_HANDLED;
+}
+
/**
* ibmvfc_init_tgt - Set the next init job step for the target
* @tgt: ibmvfc target struct
--
2.27.0
Sub-CRQs are registered with firmware via a hypercall. Abstract that
interface into a simpler helper function.
Signed-off-by: Tyrel Datwyler <[email protected]>
Reviewed-by: Brian King <[email protected]>
---
drivers/scsi/ibmvscsi/ibmvfc.c | 14 ++++++++++++++
1 file changed, 14 insertions(+)
diff --git a/drivers/scsi/ibmvscsi/ibmvfc.c b/drivers/scsi/ibmvscsi/ibmvfc.c
index f1d677a7423d..64674054dbae 100644
--- a/drivers/scsi/ibmvscsi/ibmvfc.c
+++ b/drivers/scsi/ibmvscsi/ibmvfc.c
@@ -138,6 +138,20 @@ static void ibmvfc_tgt_move_login(struct ibmvfc_target *);
static const char *unknown_error = "unknown error";
+static long h_reg_sub_crq(unsigned long unit_address, unsigned long ioba,
+ unsigned long length, unsigned long *cookie,
+ unsigned long *irq)
+{
+ unsigned long retbuf[PLPAR_HCALL_BUFSIZE];
+ long rc;
+
+ rc = plpar_hcall(H_REG_SUB_CRQ, retbuf, unit_address, ioba, length);
+ *cookie = retbuf[0];
+ *irq = retbuf[1];
+
+ return rc;
+}
+
static int ibmvfc_check_caps(struct ibmvfc_host *vhost, unsigned long cap_flags)
{
u64 host_caps = be64_to_cpu(vhost->login_buf->resp.capabilities);
--
2.27.0
Allocate a set of Sub-CRQs in advance. During channel setup the client
and VIOS negotiate the number of queues the VIOS supports and the number
that the client desires to request. It's possible that the number of
channel resources allocated is less than requested, but the client is
still responsible for sending handles for every queue it requested.
Also, provide deallocation cleanup routines.
Signed-off-by: Tyrel Datwyler <[email protected]>
---
drivers/scsi/ibmvscsi/ibmvfc.c | 129 +++++++++++++++++++++++++++++++++
drivers/scsi/ibmvscsi/ibmvfc.h | 1 +
2 files changed, 130 insertions(+)
diff --git a/drivers/scsi/ibmvscsi/ibmvfc.c b/drivers/scsi/ibmvscsi/ibmvfc.c
index 64674054dbae..f879be666c84 100644
--- a/drivers/scsi/ibmvscsi/ibmvfc.c
+++ b/drivers/scsi/ibmvscsi/ibmvfc.c
@@ -793,6 +793,8 @@ static int ibmvfc_reset_crq(struct ibmvfc_host *vhost)
unsigned long flags;
struct vio_dev *vdev = to_vio_dev(vhost->dev);
struct ibmvfc_crq_queue *crq = &vhost->crq;
+ struct ibmvfc_sub_queue *scrq;
+ int i;
/* Close the CRQ */
do {
@@ -809,6 +811,14 @@ static int ibmvfc_reset_crq(struct ibmvfc_host *vhost)
memset(crq->msgs, 0, PAGE_SIZE);
crq->cur = 0;
+ if (vhost->scsi_scrqs.scrqs) {
+ for (i = 0; i < IBMVFC_SCSI_HW_QUEUES; i++) {
+ scrq = &vhost->scsi_scrqs.scrqs[i];
+ memset(scrq->msgs, 0, PAGE_SIZE);
+ scrq->cur = 0;
+ }
+ }
+
/* And re-open it again */
rc = plpar_hcall_norets(H_REG_CRQ, vdev->unit_address,
crq->msg_token, PAGE_SIZE);
@@ -4983,6 +4993,118 @@ static int ibmvfc_init_crq(struct ibmvfc_host *vhost)
return retrc;
}
+static int ibmvfc_register_scsi_channel(struct ibmvfc_host *vhost,
+ int index)
+{
+ struct device *dev = vhost->dev;
+ struct vio_dev *vdev = to_vio_dev(dev);
+ struct ibmvfc_sub_queue *scrq = &vhost->scsi_scrqs.scrqs[index];
+ int rc = -ENOMEM;
+
+ ENTER;
+
+ scrq->msgs = (struct ibmvfc_sub_crq *)get_zeroed_page(GFP_KERNEL);
+ if (!scrq->msgs)
+ return rc;
+
+ scrq->size = PAGE_SIZE / sizeof(*scrq->msgs);
+ scrq->msg_token = dma_map_single(dev, scrq->msgs, PAGE_SIZE,
+ DMA_BIDIRECTIONAL);
+
+ if (dma_mapping_error(dev, scrq->msg_token))
+ goto dma_map_failed;
+
+ rc = h_reg_sub_crq(vdev->unit_address, scrq->msg_token, PAGE_SIZE,
+ &scrq->cookie, &scrq->hw_irq);
+
+ if (rc) {
+ dev_warn(dev, "Error registering sub-crq: %d\n", rc);
+ if (rc == H_PARAMETER)
+ dev_warn_once(dev, "Firmware may not support MQ\n");
+ goto reg_failed;
+ }
+
+ scrq->hwq_id = index;
+ scrq->vhost = vhost;
+
+ LEAVE;
+ return 0;
+
+reg_failed:
+ dma_unmap_single(dev, scrq->msg_token, PAGE_SIZE, DMA_BIDIRECTIONAL);
+dma_map_failed:
+ free_page((unsigned long)scrq->msgs);
+ LEAVE;
+ return rc;
+}
+
+static void ibmvfc_deregister_scsi_channel(struct ibmvfc_host *vhost, int index)
+{
+ struct device *dev = vhost->dev;
+ struct vio_dev *vdev = to_vio_dev(dev);
+ struct ibmvfc_sub_queue *scrq = &vhost->scsi_scrqs.scrqs[index];
+ long rc;
+
+ ENTER;
+
+ do {
+ rc = plpar_hcall_norets(H_FREE_SUB_CRQ, vdev->unit_address,
+ scrq->cookie);
+ } while (rc == H_BUSY || H_IS_LONG_BUSY(rc));
+
+ if (rc)
+ dev_err(dev, "Failed to free sub-crq[%d]: rc=%ld\n", index, rc);
+
+ dma_unmap_single(dev, scrq->msg_token, PAGE_SIZE, DMA_BIDIRECTIONAL);
+ free_page((unsigned long)scrq->msgs);
+ LEAVE;
+}
+
+static int ibmvfc_init_sub_crqs(struct ibmvfc_host *vhost)
+{
+ int i, j;
+
+ ENTER;
+
+ vhost->scsi_scrqs.scrqs = kcalloc(IBMVFC_SCSI_HW_QUEUES,
+ sizeof(*vhost->scsi_scrqs.scrqs),
+ GFP_KERNEL);
+ if (!vhost->scsi_scrqs.scrqs)
+ return -1;
+
+ for (i = 0; i < IBMVFC_SCSI_HW_QUEUES; i++) {
+ if (ibmvfc_register_scsi_channel(vhost, i)) {
+ for (j = i; j > 0; j--)
+ ibmvfc_deregister_scsi_channel(vhost, j - 1);
+ kfree(vhost->scsi_scrqs.scrqs);
+ vhost->scsi_scrqs.scrqs = NULL;
+ vhost->scsi_scrqs.active_queues = 0;
+ LEAVE;
+ return -1;
+ }
+ }
+
+ LEAVE;
+ return 0;
+}
+
+static void ibmvfc_release_sub_crqs(struct ibmvfc_host *vhost)
+{
+ int i;
+
+ ENTER;
+ if (!vhost->scsi_scrqs.scrqs)
+ return;
+
+ for (i = 0; i < IBMVFC_SCSI_HW_QUEUES; i++)
+ ibmvfc_deregister_scsi_channel(vhost, i);
+
+ kfree(vhost->scsi_scrqs.scrqs);
+ vhost->scsi_scrqs.scrqs = NULL;
+ vhost->scsi_scrqs.active_queues = 0;
+ LEAVE;
+}
+
/**
* ibmvfc_free_mem - Free memory for vhost
* @vhost: ibmvfc host struct
@@ -5239,6 +5361,12 @@ static int ibmvfc_probe(struct vio_dev *vdev, const struct vio_device_id *id)
goto remove_shost;
}
+ if (vhost->mq_enabled) {
+ rc = ibmvfc_init_sub_crqs(vhost);
+ if (rc)
+ dev_warn(dev, "Failed to allocate Sub-CRQs. rc=%d\n", rc);
+ }
+
if (shost_to_fc_host(shost)->rqst_q)
blk_queue_max_segments(shost_to_fc_host(shost)->rqst_q, 1);
dev_set_drvdata(dev, vhost);
@@ -5296,6 +5424,7 @@ static int ibmvfc_remove(struct vio_dev *vdev)
ibmvfc_purge_requests(vhost, DID_ERROR);
spin_unlock_irqrestore(vhost->host->host_lock, flags);
ibmvfc_free_event_pool(vhost);
+ ibmvfc_release_sub_crqs(vhost);
ibmvfc_free_mem(vhost);
spin_lock(&ibmvfc_driver_lock);
diff --git a/drivers/scsi/ibmvscsi/ibmvfc.h b/drivers/scsi/ibmvscsi/ibmvfc.h
index b3cd35cbf067..986ce4530382 100644
--- a/drivers/scsi/ibmvscsi/ibmvfc.h
+++ b/drivers/scsi/ibmvscsi/ibmvfc.h
@@ -838,6 +838,7 @@ struct ibmvfc_host {
mempool_t *tgt_pool;
struct ibmvfc_crq_queue crq;
struct ibmvfc_async_crq_queue async_crq;
+ struct ibmvfc_scsi_channels scsi_scrqs;
struct ibmvfc_npiv_login login_info;
union ibmvfc_npiv_login_data *login_buf;
dma_addr_t login_buf_dma;
--
2.27.0
Turn on MQ by default and set sane values for the upper limit on hw
queues for the scsi host and the number of hw scsi channels to request
from the partner VIOS.
Signed-off-by: Tyrel Datwyler <[email protected]>
Reviewed-by: Brian King <[email protected]>
---
drivers/scsi/ibmvscsi/ibmvfc.h | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/drivers/scsi/ibmvscsi/ibmvfc.h b/drivers/scsi/ibmvscsi/ibmvfc.h
index 980eb9afe93a..93c234a5512d 100644
--- a/drivers/scsi/ibmvscsi/ibmvfc.h
+++ b/drivers/scsi/ibmvscsi/ibmvfc.h
@@ -41,9 +41,9 @@
#define IBMVFC_DEFAULT_LOG_LEVEL 2
#define IBMVFC_MAX_CDB_LEN 16
#define IBMVFC_CLS3_ERROR 0
-#define IBMVFC_MQ 0
-#define IBMVFC_SCSI_CHANNELS 0
-#define IBMVFC_SCSI_HW_QUEUES 1
+#define IBMVFC_MQ 1
+#define IBMVFC_SCSI_CHANNELS 8
+#define IBMVFC_SCSI_HW_QUEUES 16
#define IBMVFC_MIG_NO_SUB_TO_CRQ 0
#define IBMVFC_MIG_NO_N_TO_M 0
--
2.27.0
On 12/2/20 8:07 PM, Tyrel Datwyler wrote:
> @@ -4983,6 +4993,118 @@ static int ibmvfc_init_crq(struct ibmvfc_host *vhost)
> return retrc;
> }
>
> +static int ibmvfc_register_scsi_channel(struct ibmvfc_host *vhost,
> + int index)
> +{
> + struct device *dev = vhost->dev;
> + struct vio_dev *vdev = to_vio_dev(dev);
> + struct ibmvfc_sub_queue *scrq = &vhost->scsi_scrqs.scrqs[index];
> + int rc = -ENOMEM;
> +
> + ENTER;
> +
> + scrq->msgs = (struct ibmvfc_sub_crq *)get_zeroed_page(GFP_KERNEL);
> + if (!scrq->msgs)
> + return rc;
> +
> + scrq->size = PAGE_SIZE / sizeof(*scrq->msgs);
> + scrq->msg_token = dma_map_single(dev, scrq->msgs, PAGE_SIZE,
> + DMA_BIDIRECTIONAL);
> +
> + if (dma_mapping_error(dev, scrq->msg_token))
> + goto dma_map_failed;
> +
> + rc = h_reg_sub_crq(vdev->unit_address, scrq->msg_token, PAGE_SIZE,
> + &scrq->cookie, &scrq->hw_irq);
> +
> + if (rc) {
> + dev_warn(dev, "Error registering sub-crq: %d\n", rc);
> + if (rc == H_PARAMETER)
> + dev_warn_once(dev, "Firmware may not support MQ\n");
> + goto reg_failed;
> + }
> +
> + scrq->hwq_id = index;
> + scrq->vhost = vhost;
> +
> + LEAVE;
> + return 0;
> +
> +reg_failed:
> + dma_unmap_single(dev, scrq->msg_token, PAGE_SIZE, DMA_BIDIRECTIONAL);
> +dma_map_failed:
> + free_page((unsigned long)scrq->msgs);
> + LEAVE;
> + return rc;
> +}
> +
> +static void ibmvfc_deregister_scsi_channel(struct ibmvfc_host *vhost, int index)
> +{
> + struct device *dev = vhost->dev;
> + struct vio_dev *vdev = to_vio_dev(dev);
> + struct ibmvfc_sub_queue *scrq = &vhost->scsi_scrqs.scrqs[index];
> + long rc;
> +
> + ENTER;
> +
> + do {
> + rc = plpar_hcall_norets(H_FREE_SUB_CRQ, vdev->unit_address,
> + scrq->cookie);
> + } while (rc == H_BUSY || H_IS_LONG_BUSY(rc));
> +
> + if (rc)
> + dev_err(dev, "Failed to free sub-crq[%d]: rc=%ld\n", index, rc);
> +
> + dma_unmap_single(dev, scrq->msg_token, PAGE_SIZE, DMA_BIDIRECTIONAL);
> + free_page((unsigned long)scrq->msgs);
> + LEAVE;
> +}
> +
> +static int ibmvfc_init_sub_crqs(struct ibmvfc_host *vhost)
> +{
> + int i, j;
> +
> + ENTER;
> +
> + vhost->scsi_scrqs.scrqs = kcalloc(IBMVFC_SCSI_HW_QUEUES,
> + sizeof(*vhost->scsi_scrqs.scrqs),
> + GFP_KERNEL);
> + if (!vhost->scsi_scrqs.scrqs)
> + return -1;
> +
> + for (i = 0; i < IBMVFC_SCSI_HW_QUEUES; i++) {
> + if (ibmvfc_register_scsi_channel(vhost, i)) {
> + for (j = i; j > 0; j--)
> + ibmvfc_deregister_scsi_channel(vhost, j - 1);
> + kfree(vhost->scsi_scrqs.scrqs);
> + vhost->scsi_scrqs.scrqs = NULL;
> + vhost->scsi_scrqs.active_queues = 0;
> + LEAVE;
> + return -1;
> + }
> + }
> +
> + LEAVE;
> + return 0;
> +}
> +
> +static void ibmvfc_release_sub_crqs(struct ibmvfc_host *vhost)
> +{
> + int i;
> +
> + ENTER;
> + if (!vhost->scsi_scrqs.scrqs)
> + return;
> +
> + for (i = 0; i < IBMVFC_SCSI_HW_QUEUES; i++)
> + ibmvfc_deregister_scsi_channel(vhost, i);
> +
> + kfree(vhost->scsi_scrqs.scrqs);
> + vhost->scsi_scrqs.scrqs = NULL;
> + vhost->scsi_scrqs.active_queues = 0;
> + LEAVE;
> +}
> +
> /**
> * ibmvfc_free_mem - Free memory for vhost
> * @vhost: ibmvfc host struct
> @@ -5239,6 +5361,12 @@ static int ibmvfc_probe(struct vio_dev *vdev, const struct vio_device_id *id)
> goto remove_shost;
> }
>
> + if (vhost->mq_enabled) {
> + rc = ibmvfc_init_sub_crqs(vhost);
> + if (rc)
> + dev_warn(dev, "Failed to allocate Sub-CRQs. rc=%d\n", rc);
So, I think if you end up down this path, you will have:
vhost->scsi_scrqs.scrqs == NULL
vhost->scsi_scrqs.active_queues == 0
And you proceed with discovery. You will proceed with enquiry and channel setup.
Then, I think you could end up in queuecommand doing this:
evt->hwq = hwq % vhost->scsi_scrqs.active_queues;
And that is a divide by zero...
I wonder if it would be better in this scenario where registering the sub crqs fails,
if you just did:
vhost->do_enquiry = 0;
vhost->mq_enabled = 0;
vhost->using_channels = 0;
It looks like you only try to allocate the subcrqs in probe, so if that fails, we'd
never end up using mq, so just disabling in this case seems reasonable.
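Concretely, something like the following in the probe path (a rough, untested
sketch reusing only the flags mentioned above; exact placement is up to you):
	if (vhost->mq_enabled) {
		rc = ibmvfc_init_sub_crqs(vhost);
		if (rc) {
			/* Sub-CRQ registration failed: fall back to the single
			 * queue protocol so we never attempt channel enquiry/
			 * setup or divide by active_queues == 0 later on.
			 */
			dev_warn(dev, "Failed to allocate Sub-CRQs, falling back to single queue. rc=%d\n", rc);
			vhost->do_enquiry = 0;
			vhost->mq_enabled = 0;
			vhost->using_channels = 0;
		}
	}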
Thanks,
Brian
--
Brian King
Power Linux I/O
IBM Linux Technology Center
On 12/4/20 6:47 AM, Brian King wrote:
> On 12/2/20 8:07 PM, Tyrel Datwyler wrote:
>> @@ -4983,6 +4993,118 @@ static int ibmvfc_init_crq(struct ibmvfc_host *vhost)
>> return retrc;
>> }
>>
>> +static int ibmvfc_register_scsi_channel(struct ibmvfc_host *vhost,
>> + int index)
>> +{
>> + struct device *dev = vhost->dev;
>> + struct vio_dev *vdev = to_vio_dev(dev);
>> + struct ibmvfc_sub_queue *scrq = &vhost->scsi_scrqs.scrqs[index];
>> + int rc = -ENOMEM;
>> +
>> + ENTER;
>> +
>> + scrq->msgs = (struct ibmvfc_sub_crq *)get_zeroed_page(GFP_KERNEL);
>> + if (!scrq->msgs)
>> + return rc;
>> +
>> + scrq->size = PAGE_SIZE / sizeof(*scrq->msgs);
>> + scrq->msg_token = dma_map_single(dev, scrq->msgs, PAGE_SIZE,
>> + DMA_BIDIRECTIONAL);
>> +
>> + if (dma_mapping_error(dev, scrq->msg_token))
>> + goto dma_map_failed;
>> +
>> + rc = h_reg_sub_crq(vdev->unit_address, scrq->msg_token, PAGE_SIZE,
>> + &scrq->cookie, &scrq->hw_irq);
>> +
>> + if (rc) {
>> + dev_warn(dev, "Error registering sub-crq: %d\n", rc);
>> + if (rc == H_PARAMETER)
>> + dev_warn_once(dev, "Firmware may not support MQ\n");
>> + goto reg_failed;
>> + }
>> +
>> + scrq->hwq_id = index;
>> + scrq->vhost = vhost;
>> +
>> + LEAVE;
>> + return 0;
>> +
>> +reg_failed:
>> + dma_unmap_single(dev, scrq->msg_token, PAGE_SIZE, DMA_BIDIRECTIONAL);
>> +dma_map_failed:
>> + free_page((unsigned long)scrq->msgs);
>> + LEAVE;
>> + return rc;
>> +}
>> +
>> +static void ibmvfc_deregister_scsi_channel(struct ibmvfc_host *vhost, int index)
>> +{
>> + struct device *dev = vhost->dev;
>> + struct vio_dev *vdev = to_vio_dev(dev);
>> + struct ibmvfc_sub_queue *scrq = &vhost->scsi_scrqs.scrqs[index];
>> + long rc;
>> +
>> + ENTER;
>> +
>> + do {
>> + rc = plpar_hcall_norets(H_FREE_SUB_CRQ, vdev->unit_address,
>> + scrq->cookie);
>> + } while (rc == H_BUSY || H_IS_LONG_BUSY(rc));
>> +
>> + if (rc)
>> + dev_err(dev, "Failed to free sub-crq[%d]: rc=%ld\n", index, rc);
>> +
>> + dma_unmap_single(dev, scrq->msg_token, PAGE_SIZE, DMA_BIDIRECTIONAL);
>> + free_page((unsigned long)scrq->msgs);
>> + LEAVE;
>> +}
>> +
>> +static int ibmvfc_init_sub_crqs(struct ibmvfc_host *vhost)
>> +{
>> + int i, j;
>> +
>> + ENTER;
>> +
>> + vhost->scsi_scrqs.scrqs = kcalloc(IBMVFC_SCSI_HW_QUEUES,
>> + sizeof(*vhost->scsi_scrqs.scrqs),
>> + GFP_KERNEL);
>> + if (!vhost->scsi_scrqs.scrqs)
>> + return -1;
>> +
>> + for (i = 0; i < IBMVFC_SCSI_HW_QUEUES; i++) {
>> + if (ibmvfc_register_scsi_channel(vhost, i)) {
>> + for (j = i; j > 0; j--)
>> + ibmvfc_deregister_scsi_channel(vhost, j - 1);
>> + kfree(vhost->scsi_scrqs.scrqs);
>> + vhost->scsi_scrqs.scrqs = NULL;
>> + vhost->scsi_scrqs.active_queues = 0;
>> + LEAVE;
>> + return -1;
>> + }
>> + }
>> +
>> + LEAVE;
>> + return 0;
>> +}
>> +
>> +static void ibmvfc_release_sub_crqs(struct ibmvfc_host *vhost)
>> +{
>> + int i;
>> +
>> + ENTER;
>> + if (!vhost->scsi_scrqs.scrqs)
>> + return;
>> +
>> + for (i = 0; i < IBMVFC_SCSI_HW_QUEUES; i++)
>> + ibmvfc_deregister_scsi_channel(vhost, i);
>> +
>> + kfree(vhost->scsi_scrqs.scrqs);
>> + vhost->scsi_scrqs.scrqs = NULL;
>> + vhost->scsi_scrqs.active_queues = 0;
>> + LEAVE;
>> +}
>> +
>> /**
>> * ibmvfc_free_mem - Free memory for vhost
>> * @vhost: ibmvfc host struct
>> @@ -5239,6 +5361,12 @@ static int ibmvfc_probe(struct vio_dev *vdev, const struct vio_device_id *id)
>> goto remove_shost;
>> }
>>
>> + if (vhost->mq_enabled) {
>> + rc = ibmvfc_init_sub_crqs(vhost);
>> + if (rc)
>> + dev_warn(dev, "Failed to allocate Sub-CRQs. rc=%d\n", rc);
>
> So, I think if you end up down this path, you will have:
>
> vhost->scsi_scrqs.scrqs == NULL
> vhost->scsi_scrqs.active_queues == 0
>
> And you proceed with discovery. You will proceed with enquiry and channel setup.
> Then, I think you could end up in queuecommand doing this
>
> evt->hwq = hwq % vhost->scsi_scrqs.active_queues;
>
> And that is a divide by zero...
Actually, we would bite the dust earlier than that, but it requires the sub-crq
allocation to fail for a reason other than lack of firmware support. In the
no-firmware-support case the VIOS doesn't report channel support, and we skip
the enquiry and setup steps. However, in the case where there is support and
allocation fails, we would dereference a NULL pointer while trying to write the
channel sub-crq handles into the channel_setup MAD.
>
> I wonder if it would be better in this scenario where registering the sub crqs fails,
> if you just did:
>
> vhost->do_enquiry = 0;
> vhost->mq_enabled = 0;
> vhost->using_channels = 0;
>
> It looks like you only try to allocate the subcrqs in probe, so if that fails, we'd
> never end up using mq, so just disabling in this case seems reasonable.
This breaks migration from a legacy host to a target with channel support. It
appears that migration for that case is already broken anyway. I need to
rethink sub-crq setup. Maybe it is best to actually do it during the
negotiation steps instead of in probe.
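For reference, the spot that blows up is roughly where the per-queue handles
are copied into the channel_setup MAD, i.e. something like the sketch below
(setup_buf, channel_handles, and num_channels are placeholders for
illustration, not the code as posted):
	/* Illustrative sketch of the spot that would trip: copying the
	 * per-queue handles into the channel_setup MAD dereferences
	 * scsi_scrqs.scrqs, which is NULL if allocation failed.
	 */
	for (i = 0; i < num_channels; i++)
		setup_buf->channel_handles[i] =
			cpu_to_be64(vhost->scsi_scrqs.scrqs[i].cookie);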
-Tyrel
>
> Thanks,
>
> Brian
>