Subject: [PATCH 00/13] Introduce support for multiqueue (MQ) in fnic driver

Hi Martin,

This cover letter describes the feature: adding support for multiqueue (MQ)
to the fnic driver.

Background: The Virtual Interface Card (VIC) firmware exposes several
queues that can be configured for sending IOs and receiving IO
responses. Unified Computing System Manager (UCSM) and Intersight
Manager (IMM) allow users to configure the number of queues to be
used for IOs.

The number of IO queues to be used is stored in a configuration file
by the VIC firmware. The fnic driver reads the configuration file and sets
the number of queues to be used. Previously, the driver was hard-coded
to use only one queue. With this set of changes, the fnic driver will
configure itself to use multiple queues. This feature takes advantage of
the block multiqueue (blk-mq) layer to parallelize IOs being sent out of
the VIC card.
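
To make the flow concrete, here is a simplified sketch of what patches 05
and 10 below do (condensed from the hunks, not the literal driver code):

    /* read the copy workqueue count that was configured via UCSM/IMM */
    GET_CONFIG(wq_copy_count);

    /* older firmware does not report a count; fall back to one queue */
    if (c->wq_copy_count == 0)
            c->wq_copy_count = 1;
    c->wq_copy_count = min_t(u16, FNIC_WQ_COPY_MAX, c->wq_copy_count);

    /* expose one blk-mq hardware queue per copy workqueue */
    host->nr_hw_queues = fnic->wq_copy_count;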

Here's a brief description of some of the salient patches:

- vnic_scsi.h needs to be in sync with VIC firmware to be able to read
the number of queues from the firmware config file. A patch has been
created for this.
- In an environment with many fnics (like we see in our customer
environments), it is hard to distinguish which fnic is printing logs.
Therefore, an fnic number has been included in the logs.
- read the number of queues from the firmware config file.
- include definitions in fnic.h to support multiqueue.
- modify the interrupt service routines (ISRs) to read from the
correct registers. The numbers that are used here come from discussions
with the VIC firmware team.
- track IO statistics for different queues.
- remove usage of host_lock, and only use fnic_lock in the fnic driver.
- use a hardware queue based spinlock to protect io_req.
- replace the hard-coded zeroth queue with a hardware queue number.
This represents the bulk of the changes.
- modify the definition of fnic_queuecommand to accept multiqueue tags
(a simplified sketch of the tag handling follows this list).
- improve log messages, and indicate fnic number and multiqueue tags for
effective debugging.
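
As a rough sketch of the tag handling referred to above (the actual code is
in patches 12 and 13; this assumes the standard blk-mq tag helpers):

    /* split the blk-mq unique tag into a hardware queue and a per-queue tag */
    u32 mqtag = blk_mq_unique_tag(scsi_cmd_to_rq(sc));
    u16 hwq = blk_mq_unique_tag_to_hwq(mqtag);
    u16 tag = blk_mq_unique_tag_to_tag(mqtag);

    /* per-hardware-queue lock and io_req lookup instead of a single queue 0 */
    spin_lock_irqsave(&fnic->wq_copy_lock[hwq], flags);
    io_req = fnic->sw_copy_wq[hwq].io_req_table[tag];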

Even though the patches have been made into a series, some patches are
heavier than others. However, every effort has been made to keep each
patch single-purpose and to ensure that every patch compiles cleanly.

This patchset has been tested as a whole; therefore, Tested-by tags have
been added only to the last two patches in the set. All individual patches
compile cleanly. However, I've refrained from adding Tested-by to most of
the patches, so as to not mislead the reviewer/reader.

A brief note on the unit tests:

1. Increase number of queues to 64. Load driver. Run IOs via Medusa.
12+ hour run successful.
2. Configure multipathing, and run link flaps on single link.
IOs drop briefly, but pick up as expected.
3. Configure multipathing, and run link flaps on two links, with a
30 second delay in between. IOs drop briefly, but pick up as expected.

The above tests were repeated with a single queue. All tests were successful.

Please consider this patch series for the next merge window.

Thanks and regards,
Karan

Karan Tilak Kumar (13):
scsi: fnic: Modify definitions to sync with VIC firmware
scsi: fnic: Add and use fnic number
scsi: fnic: Add and improve log messages
scsi: fnic: Rename wq_copy to hw_copy_wq
scsi: fnic: Get copy workqueue count and interrupt mode from config
scsi: fnic: Refactor and redefine fnic.h for multiqueue
scsi: fnic: Modify ISRs to support multiqueue (MQ)
scsi: fnic: Define stats to track multiqueue (MQ) IOs
scsi: fnic: Remove usage of host_lock
scsi: fnic: Add support for multiqueue (MQ) in fnic_main.c
scsi: fnic: Use fnic_lock to protect fnic structures in queuecommand
scsi: fnic: Add support for multiqueue (MQ) in fnic driver
scsi: fnic: Improve logs and add support for multiqueue (MQ)

drivers/scsi/fnic/fnic.h | 41 +-
drivers/scsi/fnic/fnic_debugfs.c | 2 +-
drivers/scsi/fnic/fnic_fcs.c | 36 +-
drivers/scsi/fnic/fnic_io.h | 2 +
drivers/scsi/fnic/fnic_isr.c | 166 +++++--
drivers/scsi/fnic/fnic_main.c | 156 ++++--
drivers/scsi/fnic/fnic_res.c | 48 +-
drivers/scsi/fnic/fnic_scsi.c | 797 ++++++++++++++++++-------------
drivers/scsi/fnic/fnic_stats.h | 3 +
drivers/scsi/fnic/fnic_trace.c | 11 +
drivers/scsi/fnic/vnic_dev.c | 4 +
drivers/scsi/fnic/vnic_scsi.h | 13 +-
12 files changed, 835 insertions(+), 444 deletions(-)

--
2.31.1


Subject: [PATCH 08/13] scsi: fnic: Define stats to track multiqueue (MQ) IOs

Define an array to track IOs for the different queues, and
print the per-queue IO stats in fnic_get_stats_data.

Reviewed-by: Sesidhar Baddela <[email protected]>
Reviewed-by: Arulprabhu Ponnusamy <[email protected]>
Signed-off-by: Karan Tilak Kumar <[email protected]>
---
drivers/scsi/fnic/fnic_stats.h | 2 ++
drivers/scsi/fnic/fnic_trace.c | 11 +++++++++++
2 files changed, 13 insertions(+)
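
A note for reviewers: this patch only adds the per-queue counters and the
debugfs output; the increment itself lands in the IO path later in the
series, roughly as sketched here (hwq being the hardware queue the IO was
issued on):

    atomic64_inc(&fnic->fnic_stats.io_stats.ios[hwq]);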

diff --git a/drivers/scsi/fnic/fnic_stats.h b/drivers/scsi/fnic/fnic_stats.h
index 07d1556e3c32..9d7f98c452dd 100644
--- a/drivers/scsi/fnic/fnic_stats.h
+++ b/drivers/scsi/fnic/fnic_stats.h
@@ -2,6 +2,7 @@
/* Copyright 2013 Cisco Systems, Inc. All rights reserved. */
#ifndef _FNIC_STATS_H_
#define _FNIC_STATS_H_
+#define FNIC_MQ_MAX_QUEUES 64

struct stats_timestamps {
struct timespec64 last_reset_time;
@@ -26,6 +27,7 @@ struct io_path_stats {
atomic64_t io_btw_10000_to_30000_msec;
atomic64_t io_greater_than_30000_msec;
atomic64_t current_max_io_time;
+ atomic64_t ios[FNIC_MQ_MAX_QUEUES];
};

struct abort_stats {
diff --git a/drivers/scsi/fnic/fnic_trace.c b/drivers/scsi/fnic/fnic_trace.c
index be0d7c57b242..aaa4ea02fb7c 100644
--- a/drivers/scsi/fnic/fnic_trace.c
+++ b/drivers/scsi/fnic/fnic_trace.c
@@ -204,6 +204,7 @@ int fnic_get_stats_data(struct stats_debug_info *debug,
int len = 0;
int buf_size = debug->buf_size;
struct timespec64 val1, val2;
+ int i = 0;

ktime_get_real_ts64(&val1);
len = scnprintf(debug->debug_buffer + len, buf_size - len,
@@ -266,6 +267,16 @@ int fnic_get_stats_data(struct stats_debug_info *debug,
(u64)atomic64_read(&stats->io_stats.io_btw_10000_to_30000_msec),
(u64)atomic64_read(&stats->io_stats.io_greater_than_30000_msec));

+ len += scnprintf(debug->debug_buffer + len, buf_size - len,
+ "------------------------------------------\n"
+ "\t\tIO Queues and cumulative IOs\n"
+ "------------------------------------------\n");
+
+ for (i = 0; i < FNIC_MQ_MAX_QUEUES; i++) {
+ len += scnprintf(debug->debug_buffer + len, buf_size - len,
+ "Q:%d -> %lld\n", i, (u64)atomic64_read(&stats->io_stats.ios[i]));
+ }
+
len += scnprintf(debug->debug_buffer + len, buf_size - len,
"\nCurrent Max IO time : %lld\n",
(u64)atomic64_read(&stats->io_stats.current_max_io_time));
--
2.31.1

Subject: [PATCH 09/13] scsi: fnic: Remove usage of host_lock

Remove usage of host_lock and replace it with fnic_lock
where necessary.

Reviewed-by: Sesidhar Baddela <[email protected]>
Reviewed-by: Arulprabhu Ponnusamy <[email protected]>
Signed-off-by: Karan Tilak Kumar <[email protected]>
---
drivers/scsi/fnic/fnic_scsi.c | 27 ++++++---------------------
1 file changed, 6 insertions(+), 21 deletions(-)

diff --git a/drivers/scsi/fnic/fnic_scsi.c b/drivers/scsi/fnic/fnic_scsi.c
index f32781f8fdd0..9a1beb3e7269 100644
--- a/drivers/scsi/fnic/fnic_scsi.c
+++ b/drivers/scsi/fnic/fnic_scsi.c
@@ -170,17 +170,14 @@ __fnic_set_state_flags(struct fnic *fnic, unsigned long st_flags,
unsigned long clearbits)
{
unsigned long flags = 0;
- unsigned long host_lock_flags = 0;

spin_lock_irqsave(&fnic->fnic_lock, flags);
- spin_lock_irqsave(fnic->lport->host->host_lock, host_lock_flags);

if (clearbits)
fnic->state_flags &= ~st_flags;
else
fnic->state_flags |= st_flags;

- spin_unlock_irqrestore(fnic->lport->host->host_lock, host_lock_flags);
spin_unlock_irqrestore(&fnic->fnic_lock, flags);

return;
@@ -479,12 +476,6 @@ static int fnic_queuecommand_lck(struct scsi_cmnd *sc)

atomic_inc(&fnic->in_flight);

- /*
- * Release host lock, use driver resource specific locks from here.
- * Don't re-enable interrupts in case they were disabled prior to the
- * caller disabling them.
- */
- spin_unlock(lp->host->host_lock);
fnic_priv(sc)->state = FNIC_IOREQ_NOT_INITED;
fnic_priv(sc)->flags = FNIC_NO_FLAGS;

@@ -569,8 +560,6 @@ static int fnic_queuecommand_lck(struct scsi_cmnd *sc)
mempool_free(io_req, fnic->io_req_pool);
}
atomic_dec(&fnic->in_flight);
- /* acquire host lock before returning to SCSI */
- spin_lock(lp->host->host_lock);
return ret;
} else {
atomic64_inc(&fnic_stats->io_stats.active_ios);
@@ -598,8 +587,6 @@ static int fnic_queuecommand_lck(struct scsi_cmnd *sc)
spin_unlock_irqrestore(io_lock, flags);

atomic_dec(&fnic->in_flight);
- /* acquire host lock before returning to SCSI */
- spin_lock(lp->host->host_lock);
return ret;
}

@@ -1477,18 +1464,17 @@ static inline int fnic_queue_abort_io_req(struct fnic *fnic, int tag,
struct fnic_io_req *io_req)
{
struct vnic_wq_copy *wq = &fnic->hw_copy_wq[0];
- struct Scsi_Host *host = fnic->lport->host;
struct misc_stats *misc_stats = &fnic->fnic_stats.misc_stats;
unsigned long flags;

- spin_lock_irqsave(host->host_lock, flags);
+ spin_lock_irqsave(&fnic->fnic_lock, flags);
if (unlikely(fnic_chk_state_flags_locked(fnic,
FNIC_FLAGS_IO_BLOCKED))) {
- spin_unlock_irqrestore(host->host_lock, flags);
+ spin_unlock_irqrestore(&fnic->fnic_lock, flags);
return 1;
} else
atomic_inc(&fnic->in_flight);
- spin_unlock_irqrestore(host->host_lock, flags);
+ spin_unlock_irqrestore(&fnic->fnic_lock, flags);

spin_lock_irqsave(&fnic->wq_copy_lock[0], flags);

@@ -1923,20 +1909,19 @@ static inline int fnic_queue_dr_io_req(struct fnic *fnic,
struct fnic_io_req *io_req)
{
struct vnic_wq_copy *wq = &fnic->hw_copy_wq[0];
- struct Scsi_Host *host = fnic->lport->host;
struct misc_stats *misc_stats = &fnic->fnic_stats.misc_stats;
struct scsi_lun fc_lun;
int ret = 0;
unsigned long intr_flags;

- spin_lock_irqsave(host->host_lock, intr_flags);
+ spin_lock_irqsave(&fnic->fnic_lock, intr_flags);
if (unlikely(fnic_chk_state_flags_locked(fnic,
FNIC_FLAGS_IO_BLOCKED))) {
- spin_unlock_irqrestore(host->host_lock, intr_flags);
+ spin_unlock_irqrestore(&fnic->fnic_lock, intr_flags);
return FAILED;
} else
atomic_inc(&fnic->in_flight);
- spin_unlock_irqrestore(host->host_lock, intr_flags);
+ spin_unlock_irqrestore(&fnic->fnic_lock, intr_flags);

spin_lock_irqsave(&fnic->wq_copy_lock[0], intr_flags);

--
2.31.1

Subject: [PATCH 05/13] scsi: fnic: Get copy workqueue count and interrupt mode from config

Get the copy workqueue count and interrupt mode from
the configuration. The config can be changed via UCSM.
Add logs to print the interrupt mode, the copy workqueue count,
and the vNIC resources.

Reviewed-by: Sesidhar Baddela <[email protected]>
Reviewed-by: Arulprabhu Ponnusamy <[email protected]>
Signed-off-by: Karan Tilak Kumar <[email protected]>
---
drivers/scsi/fnic/fnic_res.c | 42 ++++++++++++++++++++++++++++++------
1 file changed, 36 insertions(+), 6 deletions(-)

diff --git a/drivers/scsi/fnic/fnic_res.c b/drivers/scsi/fnic/fnic_res.c
index 109316cc4ad9..33dd27f6f24e 100644
--- a/drivers/scsi/fnic/fnic_res.c
+++ b/drivers/scsi/fnic/fnic_res.c
@@ -57,6 +57,8 @@ int fnic_get_vnic_config(struct fnic *fnic)
GET_CONFIG(port_down_timeout);
GET_CONFIG(port_down_io_retries);
GET_CONFIG(luns_per_tgt);
+ GET_CONFIG(intr_mode);
+ GET_CONFIG(wq_copy_count);

c->wq_enet_desc_count =
min_t(u32, VNIC_FNIC_WQ_DESCS_MAX,
@@ -131,6 +133,12 @@ int fnic_get_vnic_config(struct fnic *fnic)
c->intr_timer = min_t(u16, VNIC_INTR_TIMER_MAX, c->intr_timer);
c->intr_timer_type = c->intr_timer_type;

+ /* for older firmware, GET_CONFIG will not return anything */
+ if (c->wq_copy_count == 0)
+ c->wq_copy_count = 1;
+
+ c->wq_copy_count = min_t(u16, FNIC_WQ_COPY_MAX, c->wq_copy_count);
+
shost_printk(KERN_INFO, fnic->lport->host,
"vNIC MAC addr %pM "
"wq/wq_copy/rq %d/%d/%d\n",
@@ -161,6 +169,10 @@ int fnic_get_vnic_config(struct fnic *fnic)
shost_printk(KERN_INFO, fnic->lport->host,
"vNIC port dn io retries %d port dn timeout %d\n",
c->port_down_io_retries, c->port_down_timeout);
+ shost_printk(KERN_INFO, fnic->lport->host,
+ "vNIC wq_copy_count: %d\n", c->wq_copy_count);
+ shost_printk(KERN_INFO, fnic->lport->host,
+ "vNIC intr mode: %d\n", c->intr_mode);

return 0;
}
@@ -187,12 +199,25 @@ int fnic_set_nic_config(struct fnic *fnic, u8 rss_default_cpu,
void fnic_get_res_counts(struct fnic *fnic)
{
fnic->wq_count = vnic_dev_get_res_count(fnic->vdev, RES_TYPE_WQ);
- fnic->raw_wq_count = fnic->wq_count - 1;
- fnic->wq_copy_count = fnic->wq_count - fnic->raw_wq_count;
+ fnic->raw_wq_count = 1;
+ fnic->wq_copy_count = fnic->config.wq_copy_count;
fnic->rq_count = vnic_dev_get_res_count(fnic->vdev, RES_TYPE_RQ);
fnic->cq_count = vnic_dev_get_res_count(fnic->vdev, RES_TYPE_CQ);
fnic->intr_count = vnic_dev_get_res_count(fnic->vdev,
RES_TYPE_INTR_CTRL);
+
+ shost_printk(KERN_INFO, fnic->lport->host,
+ "vNIC fw resources wq_count: %d\n", fnic->wq_count);
+ shost_printk(KERN_INFO, fnic->lport->host,
+ "vNIC fw resources raw_wq_count: %d\n", fnic->raw_wq_count);
+ shost_printk(KERN_INFO, fnic->lport->host,
+ "vNIC fw resources wq_copy_count: %d\n", fnic->wq_copy_count);
+ shost_printk(KERN_INFO, fnic->lport->host,
+ "vNIC fw resources rq_count: %d\n", fnic->rq_count);
+ shost_printk(KERN_INFO, fnic->lport->host,
+ "vNIC fw resources cq_count: %d\n", fnic->cq_count);
+ shost_printk(KERN_INFO, fnic->lport->host,
+ "vNIC fw resources intr_count: %d\n", fnic->intr_count);
}

void fnic_free_vnic_resources(struct fnic *fnic)
@@ -234,10 +259,15 @@ int fnic_alloc_vnic_resources(struct fnic *fnic)
intr_mode == VNIC_DEV_INTR_MODE_MSIX ?
"MSI-X" : "unknown");

- shost_printk(KERN_INFO, fnic->lport->host, "vNIC resources avail: "
- "wq %d cp_wq %d raw_wq %d rq %d cq %d intr %d\n",
- fnic->wq_count, fnic->wq_copy_count, fnic->raw_wq_count,
- fnic->rq_count, fnic->cq_count, fnic->intr_count);
+ shost_printk(KERN_INFO, fnic->lport->host,
+ "vNIC resources avail: wq %d cp_wq %d raw_wq %d rq %d",
+ fnic->wq_count, fnic->wq_copy_count,
+ fnic->raw_wq_count, fnic->rq_count);
+
+ shost_printk(KERN_INFO, fnic->lport->host,
+ "vNIC resources avail: cq %d intr %d cpy-wq desc count %d\n",
+ fnic->cq_count, fnic->intr_count,
+ fnic->config.wq_copy_desc_count);

/* Allocate Raw WQ used for FCS frames */
for (i = 0; i < fnic->raw_wq_count; i++) {
--
2.31.1

Subject: [PATCH 04/13] scsi: fnic: Rename wq_copy to hw_copy_wq

Rename wq_copy to hw_copy_wq to accurately describe
the copy workqueue. This will also help distinguish
this data structure from software data structures
that may be introduced later.

Reviewed-by: Sesidhar Baddela <[email protected]>
Reviewed-by: Arulprabhu Ponnusamy <[email protected]>
Signed-off-by: Karan Tilak Kumar <[email protected]>
---
drivers/scsi/fnic/fnic.h | 2 +-
drivers/scsi/fnic/fnic_isr.c | 2 +-
drivers/scsi/fnic/fnic_main.c | 8 ++++----
drivers/scsi/fnic/fnic_res.c | 6 +++---
drivers/scsi/fnic/fnic_scsi.c | 12 ++++++------
5 files changed, 15 insertions(+), 15 deletions(-)

diff --git a/drivers/scsi/fnic/fnic.h b/drivers/scsi/fnic/fnic.h
index c6c549c633b1..9fd01a867788 100644
--- a/drivers/scsi/fnic/fnic.h
+++ b/drivers/scsi/fnic/fnic.h
@@ -301,7 +301,7 @@ struct fnic {
/*** FIP related data members -- end ***/

/* copy work queue cache line section */
- ____cacheline_aligned struct vnic_wq_copy wq_copy[FNIC_WQ_COPY_MAX];
+ ____cacheline_aligned struct vnic_wq_copy hw_copy_wq[FNIC_WQ_COPY_MAX];
/* completion queue cache line section */
____cacheline_aligned struct vnic_cq cq[FNIC_CQ_MAX];

diff --git a/drivers/scsi/fnic/fnic_isr.c b/drivers/scsi/fnic/fnic_isr.c
index 8896758fed8c..dff9689023e4 100644
--- a/drivers/scsi/fnic/fnic_isr.c
+++ b/drivers/scsi/fnic/fnic_isr.c
@@ -224,7 +224,7 @@ int fnic_set_intr_mode(struct fnic *fnic)
{
unsigned int n = ARRAY_SIZE(fnic->rq);
unsigned int m = ARRAY_SIZE(fnic->wq);
- unsigned int o = ARRAY_SIZE(fnic->wq_copy);
+ unsigned int o = ARRAY_SIZE(fnic->hw_copy_wq);

/*
* Set interrupt mode (INTx, MSI, MSI-X) depending
diff --git a/drivers/scsi/fnic/fnic_main.c b/drivers/scsi/fnic/fnic_main.c
index ea7b1ba27ac7..887e3e168579 100644
--- a/drivers/scsi/fnic/fnic_main.c
+++ b/drivers/scsi/fnic/fnic_main.c
@@ -357,7 +357,7 @@ void fnic_log_q_error(struct fnic *fnic)
}

for (i = 0; i < fnic->wq_copy_count; i++) {
- error_status = ioread32(&fnic->wq_copy[i].ctrl->error_status);
+ error_status = ioread32(&fnic->hw_copy_wq[i].ctrl->error_status);
if (error_status)
shost_printk(KERN_ERR, fnic->lport->host,
"CWQ[%d] error_status"
@@ -494,7 +494,7 @@ static int fnic_cleanup(struct fnic *fnic)
return err;
}
for (i = 0; i < fnic->wq_copy_count; i++) {
- err = vnic_wq_copy_disable(&fnic->wq_copy[i]);
+ err = vnic_wq_copy_disable(&fnic->hw_copy_wq[i]);
if (err)
return err;
}
@@ -510,7 +510,7 @@ static int fnic_cleanup(struct fnic *fnic)
for (i = 0; i < fnic->rq_count; i++)
vnic_rq_clean(&fnic->rq[i], fnic_free_rq_buf);
for (i = 0; i < fnic->wq_copy_count; i++)
- vnic_wq_copy_clean(&fnic->wq_copy[i],
+ vnic_wq_copy_clean(&fnic->hw_copy_wq[i],
fnic_wq_copy_cleanup_handler);

for (i = 0; i < fnic->cq_count; i++)
@@ -901,7 +901,7 @@ static int fnic_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
for (i = 0; i < fnic->raw_wq_count; i++)
vnic_wq_enable(&fnic->wq[i]);
for (i = 0; i < fnic->wq_copy_count; i++)
- vnic_wq_copy_enable(&fnic->wq_copy[i]);
+ vnic_wq_copy_enable(&fnic->hw_copy_wq[i]);

fc_fabric_login(lp);

diff --git a/drivers/scsi/fnic/fnic_res.c b/drivers/scsi/fnic/fnic_res.c
index a1c9cfcace7f..109316cc4ad9 100644
--- a/drivers/scsi/fnic/fnic_res.c
+++ b/drivers/scsi/fnic/fnic_res.c
@@ -203,7 +203,7 @@ void fnic_free_vnic_resources(struct fnic *fnic)
vnic_wq_free(&fnic->wq[i]);

for (i = 0; i < fnic->wq_copy_count; i++)
- vnic_wq_copy_free(&fnic->wq_copy[i]);
+ vnic_wq_copy_free(&fnic->hw_copy_wq[i]);

for (i = 0; i < fnic->rq_count; i++)
vnic_rq_free(&fnic->rq[i]);
@@ -250,7 +250,7 @@ int fnic_alloc_vnic_resources(struct fnic *fnic)

/* Allocate Copy WQs used for SCSI IOs */
for (i = 0; i < fnic->wq_copy_count; i++) {
- err = vnic_wq_copy_alloc(fnic->vdev, &fnic->wq_copy[i],
+ err = vnic_wq_copy_alloc(fnic->vdev, &fnic->hw_copy_wq[i],
(fnic->raw_wq_count + i),
fnic->config.wq_copy_desc_count,
sizeof(struct fcpio_host_req));
@@ -357,7 +357,7 @@ int fnic_alloc_vnic_resources(struct fnic *fnic)
}

for (i = 0; i < fnic->wq_copy_count; i++) {
- vnic_wq_copy_init(&fnic->wq_copy[i],
+ vnic_wq_copy_init(&fnic->hw_copy_wq[i],
0 /* cq_index 0 - always */,
error_interrupt_enable,
error_interrupt_offset);
diff --git a/drivers/scsi/fnic/fnic_scsi.c b/drivers/scsi/fnic/fnic_scsi.c
index 9761b2c9db48..3498a8d670b1 100644
--- a/drivers/scsi/fnic/fnic_scsi.c
+++ b/drivers/scsi/fnic/fnic_scsi.c
@@ -193,7 +193,7 @@ __fnic_set_state_flags(struct fnic *fnic, unsigned long st_flags,
*/
int fnic_fw_reset_handler(struct fnic *fnic)
{
- struct vnic_wq_copy *wq = &fnic->wq_copy[0];
+ struct vnic_wq_copy *wq = &fnic->hw_copy_wq[0];
int ret = 0;
unsigned long flags;

@@ -246,7 +246,7 @@ int fnic_fw_reset_handler(struct fnic *fnic)
*/
int fnic_flogi_reg_handler(struct fnic *fnic, u32 fc_id)
{
- struct vnic_wq_copy *wq = &fnic->wq_copy[0];
+ struct vnic_wq_copy *wq = &fnic->hw_copy_wq[0];
enum fcpio_flogi_reg_format_type format;
struct fc_lport *lp = fnic->lport;
u8 gw_mac[ETH_ALEN];
@@ -551,7 +551,7 @@ static int fnic_queuecommand_lck(struct scsi_cmnd *sc)
fnic_priv(sc)->flags |= FNIC_IO_INITIALIZED;

/* create copy wq desc and enqueue it */
- wq = &fnic->wq_copy[0];
+ wq = &fnic->hw_copy_wq[0];
ret = fnic_queue_wq_copy_desc(fnic, wq, io_req, sc, sg_count);
if (ret) {
/*
@@ -782,7 +782,7 @@ static inline void fnic_fcpio_ack_handler(struct fnic *fnic,
u64 *ox_id_tag = (u64 *)(void *)desc;

/* mark the ack state */
- wq = &fnic->wq_copy[cq_index - fnic->raw_wq_count - fnic->rq_count];
+ wq = &fnic->hw_copy_wq[cq_index - fnic->raw_wq_count - fnic->rq_count];
spin_lock_irqsave(&fnic->wq_copy_lock[0], flags);

fnic->fnic_stats.misc_stats.last_ack_time = jiffies;
@@ -1486,7 +1486,7 @@ static inline int fnic_queue_abort_io_req(struct fnic *fnic, int tag,
u32 task_req, u8 *fc_lun,
struct fnic_io_req *io_req)
{
- struct vnic_wq_copy *wq = &fnic->wq_copy[0];
+ struct vnic_wq_copy *wq = &fnic->hw_copy_wq[0];
struct Scsi_Host *host = fnic->lport->host;
struct misc_stats *misc_stats = &fnic->fnic_stats.misc_stats;
unsigned long flags;
@@ -1932,7 +1932,7 @@ static inline int fnic_queue_dr_io_req(struct fnic *fnic,
struct scsi_cmnd *sc,
struct fnic_io_req *io_req)
{
- struct vnic_wq_copy *wq = &fnic->wq_copy[0];
+ struct vnic_wq_copy *wq = &fnic->hw_copy_wq[0];
struct Scsi_Host *host = fnic->lport->host;
struct misc_stats *misc_stats = &fnic->fnic_stats.misc_stats;
struct scsi_lun fc_lun;
--
2.31.1

Subject: [PATCH 06/13] scsi: fnic: Refactor and redefine fnic.h for multiqueue

Refactor and redefine values in fnic.h to implement
multiqueue (MQ) functionality.

VIC firmware allows fnic to create up to 64 copy
workqueues. Update the copy workqueue max to 64.
Modify the interrupt index to be in sync with the firmware
to support MQ.
Add the IRQ number to the MSI-X entry.
Define a software workqueue table to track the status of
io_reqs. Define a base index for the copy workqueues.

Reviewed-by: Sesidhar Baddela <[email protected]>
Reviewed-by: Arulprabhu Ponnusamy <[email protected]>
Signed-off-by: Karan Tilak Kumar <[email protected]>
---
drivers/scsi/fnic/fnic.h | 30 ++++++++++++++++++++++--------
1 file changed, 22 insertions(+), 8 deletions(-)
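
For context, the new sw_copy_wq table is filled in and used later in the
series, roughly as follows (condensed from patches 10 and 13):

    /* one io_req table per copy WQ, sized by the max tag id */
    fnic->sw_copy_wq[hwq].ioreq_table_size = fnic->fnic_max_tag_id;
    fnic->sw_copy_wq[hwq].io_req_table =
            kzalloc((fnic->sw_copy_wq[hwq].ioreq_table_size + 1) *
                    sizeof(struct fnic_io_req *), GFP_KERNEL);

    /* IO path: look up the io_req by hardware queue and blk-mq tag */
    io_req = fnic->sw_copy_wq[hwq].io_req_table[tag];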

diff --git a/drivers/scsi/fnic/fnic.h b/drivers/scsi/fnic/fnic.h
index 9fd01a867788..f4390fc96323 100644
--- a/drivers/scsi/fnic/fnic.h
+++ b/drivers/scsi/fnic/fnic.h
@@ -163,12 +163,21 @@ do { \
#define FNIC_MAIN_NOTE(kern_level, host, fmt, args...) \
shost_printk(kern_level, host, fmt, ##args)

+#define FNIC_WQ_COPY_MAX 64
+#define FNIC_WQ_MAX 1
+#define FNIC_RQ_MAX 1
+#define FNIC_CQ_MAX (FNIC_WQ_COPY_MAX + FNIC_WQ_MAX + FNIC_RQ_MAX)
+#define FNIC_DFLT_IO_COMPLETIONS 256
+
+#define FNIC_MQ_CQ_INDEX 2
+
extern const char *fnic_state_str[];

enum fnic_intx_intr_index {
FNIC_INTX_WQ_RQ_COPYWQ,
- FNIC_INTX_ERR,
+ FNIC_INTX_DUMMY,
FNIC_INTX_NOTIFY,
+ FNIC_INTX_ERR,
FNIC_INTX_INTR_MAX,
};

@@ -176,7 +185,7 @@ enum fnic_msix_intr_index {
FNIC_MSIX_RQ,
FNIC_MSIX_WQ,
FNIC_MSIX_WQ_COPY,
- FNIC_MSIX_ERR_NOTIFY,
+ FNIC_MSIX_ERR_NOTIFY = FNIC_MSIX_WQ_COPY + FNIC_WQ_COPY_MAX,
FNIC_MSIX_INTR_MAX,
};

@@ -185,6 +194,7 @@ struct fnic_msix_entry {
char devname[IFNAMSIZ + 11];
irqreturn_t (*isr)(int, void *);
void *devid;
+ int irq_num;
};

enum fnic_state {
@@ -194,12 +204,6 @@ enum fnic_state {
FNIC_IN_ETH_TRANS_FC_MODE,
};

-#define FNIC_WQ_COPY_MAX 1
-#define FNIC_WQ_MAX 1
-#define FNIC_RQ_MAX 1
-#define FNIC_CQ_MAX (FNIC_WQ_COPY_MAX + FNIC_WQ_MAX + FNIC_RQ_MAX)
-#define FNIC_DFLT_IO_COMPLETIONS 256
-
struct mempool;

enum fnic_evt {
@@ -214,6 +218,13 @@ struct fnic_event {
enum fnic_evt event;
};

+struct fnic_cpy_wq {
+ unsigned long hw_lock_flags;
+ u16 active_ioreq_count;
+ u16 ioreq_table_size;
+ ____cacheline_aligned struct fnic_io_req **io_req_table;
+};
+
/* Per-instance private data structure */
struct fnic {
int fnic_num;
@@ -283,6 +294,7 @@ struct fnic {
mempool_t *io_sgl_pool[FNIC_SGL_NUM_CACHES];
spinlock_t io_req_lock[FNIC_IO_LOCKS]; /* locks for scsi cmnds */

+ unsigned int cpy_wq_base;
struct work_struct link_work;
struct work_struct frame_work;
struct sk_buff_head frame_queue;
@@ -302,6 +314,8 @@ struct fnic {

/* copy work queue cache line section */
____cacheline_aligned struct vnic_wq_copy hw_copy_wq[FNIC_WQ_COPY_MAX];
+ ____cacheline_aligned struct fnic_cpy_wq sw_copy_wq[FNIC_WQ_COPY_MAX];
+
/* completion queue cache line section */
____cacheline_aligned struct vnic_cq cq[FNIC_CQ_MAX];

--
2.31.1

Subject: [PATCH 07/13] scsi: fnic: Modify ISRs to support multiqueue (MQ)

Modify the interrupt service routines for INTx, MSI, and MSI-X
to support multiqueue.
Modify the parameter list of fnic_wq_copy_cmpl_handler
to take a cq_index.
Modify fnic_cleanup to use the new
fnic_wq_copy_cmpl_handler calling convention.
Refactor the code that sets the
interrupt mode to MSI-X into a new function.
Add a new stat for intx_dummy.

Reviewed-by: Sesidhar Baddela <[email protected]>
Reviewed-by: Arulprabhu Ponnusamy <[email protected]>
Signed-off-by: Karan Tilak Kumar <[email protected]>
---
drivers/scsi/fnic/fnic.h | 2 +-
drivers/scsi/fnic/fnic_isr.c | 164 ++++++++++++++++++++++++---------
drivers/scsi/fnic/fnic_main.c | 4 +-
drivers/scsi/fnic/fnic_scsi.c | 38 +++-----
drivers/scsi/fnic/fnic_stats.h | 1 +
5 files changed, 139 insertions(+), 70 deletions(-)
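
For reviewers, the completion-queue indexing this patch relies on, derived
from the hunks below (with one RQ and one raw WQ, copy WQ 0 completes on
cq[2], hence FNIC_MQ_CQ_INDEX == 2):

    /* cq[0] -> RQ, cq[1] -> raw WQ (FCS frames), cq[2..] -> copy WQs */
    cq_index = fnic->rq_count + fnic->raw_wq_count + hwq;
    wq_work_done = fnic_wq_copy_cmpl_handler(fnic, copy_work_to_do, cq_index);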

diff --git a/drivers/scsi/fnic/fnic.h b/drivers/scsi/fnic/fnic.h
index f4390fc96323..692175cfa557 100644
--- a/drivers/scsi/fnic/fnic.h
+++ b/drivers/scsi/fnic/fnic.h
@@ -369,7 +369,7 @@ void fnic_scsi_cleanup(struct fc_lport *);
void fnic_scsi_abort_io(struct fc_lport *);
void fnic_empty_scsi_cleanup(struct fc_lport *);
void fnic_exch_mgr_reset(struct fc_lport *, u32, u32);
-int fnic_wq_copy_cmpl_handler(struct fnic *fnic, int);
+int fnic_wq_copy_cmpl_handler(struct fnic *fnic, int copy_work_to_do, unsigned int cq_index);
int fnic_wq_cmpl_handler(struct fnic *fnic, int);
int fnic_flogi_reg_handler(struct fnic *fnic, u32);
void fnic_wq_copy_cleanup_handler(struct vnic_wq_copy *wq,
diff --git a/drivers/scsi/fnic/fnic_isr.c b/drivers/scsi/fnic/fnic_isr.c
index dff9689023e4..82e4e3e8ec4b 100644
--- a/drivers/scsi/fnic/fnic_isr.c
+++ b/drivers/scsi/fnic/fnic_isr.c
@@ -38,8 +38,13 @@ static irqreturn_t fnic_isr_legacy(int irq, void *data)
fnic_log_q_error(fnic);
}

+ if (pba & (1 << FNIC_INTX_DUMMY)) {
+ atomic64_inc(&fnic->fnic_stats.misc_stats.intx_dummy);
+ vnic_intr_return_all_credits(&fnic->intr[FNIC_INTX_DUMMY]);
+ }
+
if (pba & (1 << FNIC_INTX_WQ_RQ_COPYWQ)) {
- work_done += fnic_wq_copy_cmpl_handler(fnic, io_completions);
+ work_done += fnic_wq_copy_cmpl_handler(fnic, io_completions, FNIC_MQ_CQ_INDEX);
work_done += fnic_wq_cmpl_handler(fnic, -1);
work_done += fnic_rq_cmpl_handler(fnic, -1);

@@ -60,7 +65,7 @@ static irqreturn_t fnic_isr_msi(int irq, void *data)
fnic->fnic_stats.misc_stats.last_isr_time = jiffies;
atomic64_inc(&fnic->fnic_stats.misc_stats.isr_count);

- work_done += fnic_wq_copy_cmpl_handler(fnic, io_completions);
+ work_done += fnic_wq_copy_cmpl_handler(fnic, io_completions, FNIC_MQ_CQ_INDEX);
work_done += fnic_wq_cmpl_handler(fnic, -1);
work_done += fnic_rq_cmpl_handler(fnic, -1);

@@ -109,12 +114,22 @@ static irqreturn_t fnic_isr_msix_wq_copy(int irq, void *data)
{
struct fnic *fnic = data;
unsigned long wq_copy_work_done = 0;
+ int i;

fnic->fnic_stats.misc_stats.last_isr_time = jiffies;
atomic64_inc(&fnic->fnic_stats.misc_stats.isr_count);

- wq_copy_work_done = fnic_wq_copy_cmpl_handler(fnic, io_completions);
- vnic_intr_return_credits(&fnic->intr[FNIC_MSIX_WQ_COPY],
+ i = irq - fnic->msix[0].irq_num;
+ if (i >= fnic->wq_copy_count + fnic->cpy_wq_base ||
+ i < 0 || fnic->msix[i].irq_num != irq) {
+ for (i = fnic->cpy_wq_base; i < fnic->wq_copy_count + fnic->cpy_wq_base ; i++) {
+ if (fnic->msix[i].irq_num == irq)
+ break;
+ }
+ }
+
+ wq_copy_work_done = fnic_wq_copy_cmpl_handler(fnic, io_completions, i);
+ vnic_intr_return_credits(&fnic->intr[i],
wq_copy_work_done,
1 /* unmask intr */,
1 /* reset intr timer */);
@@ -128,7 +143,7 @@ static irqreturn_t fnic_isr_msix_err_notify(int irq, void *data)
fnic->fnic_stats.misc_stats.last_isr_time = jiffies;
atomic64_inc(&fnic->fnic_stats.misc_stats.isr_count);

- vnic_intr_return_all_credits(&fnic->intr[FNIC_MSIX_ERR_NOTIFY]);
+ vnic_intr_return_all_credits(&fnic->intr[fnic->err_intr_offset]);
fnic_log_q_error(fnic);
fnic_handle_link_event(fnic);

@@ -186,26 +201,30 @@ int fnic_request_intr(struct fnic *fnic)
fnic->msix[FNIC_MSIX_WQ].isr = fnic_isr_msix_wq;
fnic->msix[FNIC_MSIX_WQ].devid = fnic;

- sprintf(fnic->msix[FNIC_MSIX_WQ_COPY].devname,
- "%.11s-scsi-wq", fnic->name);
- fnic->msix[FNIC_MSIX_WQ_COPY].isr = fnic_isr_msix_wq_copy;
- fnic->msix[FNIC_MSIX_WQ_COPY].devid = fnic;
+ for (i = fnic->cpy_wq_base; i < fnic->wq_copy_count + fnic->cpy_wq_base; i++) {
+ sprintf(fnic->msix[i].devname,
+ "%.11s-scsi-wq-%d", fnic->name, i-FNIC_MSIX_WQ_COPY);
+ fnic->msix[i].isr = fnic_isr_msix_wq_copy;
+ fnic->msix[i].devid = fnic;
+ }

- sprintf(fnic->msix[FNIC_MSIX_ERR_NOTIFY].devname,
+ sprintf(fnic->msix[fnic->err_intr_offset].devname,
"%.11s-err-notify", fnic->name);
- fnic->msix[FNIC_MSIX_ERR_NOTIFY].isr =
+ fnic->msix[fnic->err_intr_offset].isr =
fnic_isr_msix_err_notify;
- fnic->msix[FNIC_MSIX_ERR_NOTIFY].devid = fnic;
+ fnic->msix[fnic->err_intr_offset].devid = fnic;
+
+ for (i = 0; i < fnic->intr_count; i++) {
+ fnic->msix[i].irq_num = pci_irq_vector(fnic->pdev, i);

- for (i = 0; i < ARRAY_SIZE(fnic->msix); i++) {
- err = request_irq(pci_irq_vector(fnic->pdev, i),
- fnic->msix[i].isr, 0,
- fnic->msix[i].devname,
- fnic->msix[i].devid);
+ err = request_irq(fnic->msix[i].irq_num,
+ fnic->msix[i].isr, 0,
+ fnic->msix[i].devname,
+ fnic->msix[i].devid);
if (err) {
- shost_printk(KERN_ERR, fnic->lport->host,
- "MSIX: request_irq"
- " failed %d\n", err);
+ FNIC_ISR_DBG(KERN_ERR, fnic->lport->host,
+ "fnic<%d>: %s: %d request_irq failed with error: %d\n",
+ fnic->fnic_num, __func__, __LINE__, err);
fnic_free_intr(fnic);
break;
}
@@ -220,44 +239,101 @@ int fnic_request_intr(struct fnic *fnic)
return err;
}

-int fnic_set_intr_mode(struct fnic *fnic)
+int fnic_set_intr_mode_msix(struct fnic *fnic)
{
unsigned int n = ARRAY_SIZE(fnic->rq);
unsigned int m = ARRAY_SIZE(fnic->wq);
unsigned int o = ARRAY_SIZE(fnic->hw_copy_wq);
+ unsigned int min_irqs = n + m + 1 + 1; /*rq, raw wq, wq, err*/

/*
- * Set interrupt mode (INTx, MSI, MSI-X) depending
- * system capabilities.
- *
- * Try MSI-X first
- *
* We need n RQs, m WQs, o Copy WQs, n+m+o CQs, and n+m+o+1 INTRs
* (last INTR is used for WQ/RQ errors and notification area)
*/
- if (fnic->rq_count >= n &&
- fnic->raw_wq_count >= m &&
- fnic->wq_copy_count >= o &&
- fnic->cq_count >= n + m + o) {
- int vecs = n + m + o + 1;
-
- if (pci_alloc_irq_vectors(fnic->pdev, vecs, vecs,
- PCI_IRQ_MSIX) == vecs) {
+ FNIC_ISR_DBG(KERN_INFO, fnic->lport->host,
+ "fnic<%d>: %s: %d: rq-array size: %d wq-array size: %d copy-wq array size: %d\n",
+ fnic->fnic_num, __func__, __LINE__, n, m, o);
+ FNIC_ISR_DBG(KERN_INFO, fnic->lport->host,
+ "fnic<%d>: %s: %d: rq_count: %d raw_wq_count: %d wq_copy_count: %d cq_count: %d\n",
+ fnic->fnic_num, __func__, __LINE__, fnic->rq_count, fnic->raw_wq_count,
+ fnic->wq_copy_count, fnic->cq_count);
+
+ if (fnic->rq_count <= n && fnic->raw_wq_count <= m &&
+ fnic->wq_copy_count <= o) {
+ int vec_count = 0;
+ int vecs = fnic->rq_count + fnic->raw_wq_count + fnic->wq_copy_count + 1;
+
+ vec_count = pci_alloc_irq_vectors(fnic->pdev, min_irqs, vecs,
+ PCI_IRQ_MSIX | PCI_IRQ_AFFINITY);
+ FNIC_ISR_DBG(KERN_INFO, fnic->lport->host,
+ "fnic<%d>: %s: %d: allocated %d MSI-X vectors\n",
+ fnic->fnic_num, __func__, __LINE__, vec_count);
+
+ if (vec_count > 0) {
+ if (vec_count < vecs) {
+ FNIC_ISR_DBG(KERN_ERR, fnic->lport->host,
+ "fnic<%d>: %s: %d: interrupts number mismatch: vec_count: %d vecs: %d\n",
+ fnic->fnic_num, __func__, __LINE__, vec_count, vecs);
+ if (vec_count < min_irqs) {
+ FNIC_ISR_DBG(KERN_ERR, fnic->lport->host,
+ "fnic<%d>: %s: %d: no interrupts for copy wq\n",
+ fnic->fnic_num, __func__, __LINE__);
+ return 1;
+ }
+ }
+
fnic->rq_count = n;
fnic->raw_wq_count = m;
- fnic->wq_copy_count = o;
- fnic->wq_count = m + o;
- fnic->cq_count = n + m + o;
- fnic->intr_count = vecs;
- fnic->err_intr_offset = FNIC_MSIX_ERR_NOTIFY;
-
- FNIC_ISR_DBG(KERN_DEBUG, fnic->lport->host,
- "Using MSI-X Interrupts\n");
- vnic_dev_set_intr_mode(fnic->vdev,
- VNIC_DEV_INTR_MODE_MSIX);
+ fnic->cpy_wq_base = fnic->rq_count + fnic->raw_wq_count;
+ fnic->wq_copy_count = vec_count - n - m - 1;
+ fnic->wq_count = fnic->raw_wq_count + fnic->wq_copy_count;
+ if (fnic->cq_count != vec_count - 1) {
+ FNIC_ISR_DBG(KERN_ERR, fnic->lport->host,
+ "fnic<%d>: %s: %d: CQ count: %d does not match MSI-X vector count: %d\n",
+ fnic->fnic_num, __func__, __LINE__, fnic->cq_count, vec_count);
+ fnic->cq_count = vec_count - 1;
+ }
+ fnic->intr_count = vec_count;
+ fnic->err_intr_offset = fnic->rq_count + fnic->wq_count;
+
+ FNIC_ISR_DBG(KERN_INFO, fnic->lport->host,
+ "fnic<%d>: %s: %d: rq_count: %d raw_wq_count: %d cpy_wq_base: %d\n",
+ fnic->fnic_num, __func__, __LINE__, fnic->rq_count,
+ fnic->raw_wq_count, fnic->cpy_wq_base);
+
+ FNIC_ISR_DBG(KERN_INFO, fnic->lport->host,
+ "fnic<%d>: %s: %d: wq_copy_count: %d wq_count: %d cq_count: %d\n",
+ fnic->fnic_num, __func__, __LINE__, fnic->wq_copy_count,
+ fnic->wq_count, fnic->cq_count);
+
+ FNIC_ISR_DBG(KERN_INFO, fnic->lport->host,
+ "fnic<%d>: %s: %d: intr_count: %d err_intr_offset: %u",
+ fnic->fnic_num, __func__, __LINE__, fnic->intr_count,
+ fnic->err_intr_offset);
+
+ vnic_dev_set_intr_mode(fnic->vdev, VNIC_DEV_INTR_MODE_MSIX);
+ FNIC_ISR_DBG(KERN_INFO, fnic->lport->host,
+ "fnic<%d>: %s: %d: fnic using MSI-X\n", fnic->fnic_num,
+ __func__, __LINE__);
return 0;
}
}
+ return 1;
+} //fnic_set_intr_mode_msix
+
+int fnic_set_intr_mode(struct fnic *fnic)
+{
+ int ret_status = 0;
+
+ /*
+ * Set interrupt mode (INTx, MSI, MSI-X) depending
+ * system capabilities.
+ *
+ * Try MSI-X first
+ */
+ ret_status = fnic_set_intr_mode_msix(fnic);
+ if (ret_status == 0)
+ return ret_status;

/*
* Next try MSI
diff --git a/drivers/scsi/fnic/fnic_main.c b/drivers/scsi/fnic/fnic_main.c
index 887e3e168579..66e85754c1d6 100644
--- a/drivers/scsi/fnic/fnic_main.c
+++ b/drivers/scsi/fnic/fnic_main.c
@@ -478,6 +478,7 @@ static int fnic_cleanup(struct fnic *fnic)
{
unsigned int i;
int err;
+ int raw_wq_rq_counts;

vnic_dev_disable(fnic->vdev);
for (i = 0; i < fnic->intr_count; i++)
@@ -497,10 +498,11 @@ static int fnic_cleanup(struct fnic *fnic)
err = vnic_wq_copy_disable(&fnic->hw_copy_wq[i]);
if (err)
return err;
+ raw_wq_rq_counts = fnic->raw_wq_count + fnic->rq_count;
+ fnic_wq_copy_cmpl_handler(fnic, -1, i + raw_wq_rq_counts);
}

/* Clean up completed IOs and FCS frames */
- fnic_wq_copy_cmpl_handler(fnic, io_completions);
fnic_wq_cmpl_handler(fnic, -1);
fnic_rq_cmpl_handler(fnic, -1);

diff --git a/drivers/scsi/fnic/fnic_scsi.c b/drivers/scsi/fnic/fnic_scsi.c
index 3498a8d670b1..f32781f8fdd0 100644
--- a/drivers/scsi/fnic/fnic_scsi.c
+++ b/drivers/scsi/fnic/fnic_scsi.c
@@ -1303,10 +1303,8 @@ static int fnic_fcpio_cmpl_handler(struct vnic_dev *vdev,
* fnic_wq_copy_cmpl_handler
* Routine to process wq copy
*/
-int fnic_wq_copy_cmpl_handler(struct fnic *fnic, int copy_work_to_do)
+int fnic_wq_copy_cmpl_handler(struct fnic *fnic, int copy_work_to_do, unsigned int cq_index)
{
- unsigned int wq_work_done = 0;
- unsigned int i, cq_index;
unsigned int cur_work_done;
struct misc_stats *misc_stats = &fnic->fnic_stats.misc_stats;
u64 start_jiffies = 0;
@@ -1314,28 +1312,20 @@ int fnic_wq_copy_cmpl_handler(struct fnic *fnic, int copy_work_to_do)
u64 delta_jiffies = 0;
u64 delta_ms = 0;

- for (i = 0; i < fnic->wq_copy_count; i++) {
- cq_index = i + fnic->raw_wq_count + fnic->rq_count;
-
- start_jiffies = jiffies;
- cur_work_done = vnic_cq_copy_service(&fnic->cq[cq_index],
- fnic_fcpio_cmpl_handler,
- copy_work_to_do);
- end_jiffies = jiffies;
-
- wq_work_done += cur_work_done;
- delta_jiffies = end_jiffies - start_jiffies;
- if (delta_jiffies >
- (u64) atomic64_read(&misc_stats->max_isr_jiffies)) {
- atomic64_set(&misc_stats->max_isr_jiffies,
- delta_jiffies);
- delta_ms = jiffies_to_msecs(delta_jiffies);
- atomic64_set(&misc_stats->max_isr_time_ms, delta_ms);
- atomic64_set(&misc_stats->corr_work_done,
- cur_work_done);
- }
+ start_jiffies = jiffies;
+ cur_work_done = vnic_cq_copy_service(&fnic->cq[cq_index],
+ fnic_fcpio_cmpl_handler,
+ copy_work_to_do);
+ end_jiffies = jiffies;
+ delta_jiffies = end_jiffies - start_jiffies;
+ if (delta_jiffies > (u64) atomic64_read(&misc_stats->max_isr_jiffies)) {
+ atomic64_set(&misc_stats->max_isr_jiffies, delta_jiffies);
+ delta_ms = jiffies_to_msecs(delta_jiffies);
+ atomic64_set(&misc_stats->max_isr_time_ms, delta_ms);
+ atomic64_set(&misc_stats->corr_work_done, cur_work_done);
}
- return wq_work_done;
+
+ return cur_work_done;
}

static bool fnic_cleanup_io_iter(struct scsi_cmnd *sc, void *data)
diff --git a/drivers/scsi/fnic/fnic_stats.h b/drivers/scsi/fnic/fnic_stats.h
index bdf639eef8cf..07d1556e3c32 100644
--- a/drivers/scsi/fnic/fnic_stats.h
+++ b/drivers/scsi/fnic/fnic_stats.h
@@ -103,6 +103,7 @@ struct misc_stats {
atomic64_t rport_not_ready;
atomic64_t frame_errors;
atomic64_t current_port_speed;
+ atomic64_t intx_dummy;
};

struct fnic_stats {
--
2.31.1

Subject: [PATCH 10/13] scsi: fnic: Add support for multiqueue (MQ) in fnic_main.c

Set map_queues in the fnic_host_template to fnic_mq_map_queues_cpus.
Define fnic_mq_map_queues_cpus to map CPUs to
fnic queues.
Refactor code in fnic_probe to enable vnic queues before scsi_add_host.
Modify the notify set call to use the correct interrupt index.

Reviewed-by: Sesidhar Baddela <[email protected]>
Reviewed-by: Arulprabhu Ponnusamy <[email protected]>
Signed-off-by: Karan Tilak Kumar <[email protected]>
---
drivers/scsi/fnic/fnic.h | 2 +-
drivers/scsi/fnic/fnic_main.c | 137 ++++++++++++++++++++++++++--------
2 files changed, 105 insertions(+), 34 deletions(-)
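
The core of the new map_queues callback, condensed from the hunk below
(the full version adds error handling and logging):

    /* for each blk-mq hardware queue, mirror the MSI-X vector's CPU affinity */
    for (queue = 0; queue < set->nr_hw_queues; queue++) {
            irq_num = pci_irq_vector(l_pdev, queue + 2); /* +2 skips the RQ and raw WQ vectors */
            mask = irq_get_effective_affinity_mask(irq_num);
            for_each_cpu(cpu, mask)
                    set->map[HCTX_TYPE_DEFAULT].mq_map[cpu] = queue;
    }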

diff --git a/drivers/scsi/fnic/fnic.h b/drivers/scsi/fnic/fnic.h
index 692175cfa557..182194b6d51d 100644
--- a/drivers/scsi/fnic/fnic.h
+++ b/drivers/scsi/fnic/fnic.h
@@ -377,7 +377,7 @@ void fnic_wq_copy_cleanup_handler(struct vnic_wq_copy *wq,
int fnic_fw_reset_handler(struct fnic *fnic);
void fnic_terminate_rport_io(struct fc_rport *);
const char *fnic_state_to_str(unsigned int state);
-
+void fnic_mq_map_queues_cpus(struct Scsi_Host *host);
void fnic_log_q_error(struct fnic *fnic);
void fnic_handle_link_event(struct fnic *fnic);

diff --git a/drivers/scsi/fnic/fnic_main.c b/drivers/scsi/fnic/fnic_main.c
index 66e85754c1d6..5b60396e7349 100644
--- a/drivers/scsi/fnic/fnic_main.c
+++ b/drivers/scsi/fnic/fnic_main.c
@@ -12,6 +12,7 @@
#include <linux/pci.h>
#include <linux/skbuff.h>
#include <linux/interrupt.h>
+#include <linux/irq.h>
#include <linux/spinlock.h>
#include <linux/workqueue.h>
#include <linux/if_ether.h>
@@ -116,6 +117,7 @@ static const struct scsi_host_template fnic_host_template = {
.shost_groups = fnic_host_groups,
.track_queue_depth = 1,
.cmd_size = sizeof(struct fnic_cmd_priv),
+ .map_queues = fnic_mq_map_queues_cpus,
};

static void
@@ -392,7 +394,7 @@ static int fnic_notify_set(struct fnic *fnic)
err = vnic_dev_notify_set(fnic->vdev, -1);
break;
case VNIC_DEV_INTR_MODE_MSIX:
- err = vnic_dev_notify_set(fnic->vdev, FNIC_MSIX_ERR_NOTIFY);
+ err = vnic_dev_notify_set(fnic->vdev, fnic->wq_copy_count + fnic->cpy_wq_base);
break;
default:
shost_printk(KERN_ERR, fnic->lport->host,
@@ -565,11 +567,6 @@ static int fnic_scsi_drv_init(struct fnic *fnic)
host->max_cmd_len = FCOE_MAX_CMD_LEN;

host->nr_hw_queues = fnic->wq_copy_count;
- if (host->nr_hw_queues > 1)
- shost_printk(KERN_ERR, host,
- "fnic: blk-mq is not supported");
-
- host->nr_hw_queues = fnic->wq_copy_count = 1;

shost_printk(KERN_INFO, host,
"fnic: can_queue: %d max_lun: %llu",
@@ -582,15 +579,71 @@ static int fnic_scsi_drv_init(struct fnic *fnic)
return 0;
}

+void fnic_mq_map_queues_cpus(struct Scsi_Host *host)
+{
+ const struct cpumask *mask;
+ unsigned int queue, cpu;
+ int irq_num;
+ struct fc_lport *lp = shost_priv(host);
+ struct fnic *fnic = lport_priv(lp);
+ struct pci_dev *l_pdev = fnic->pdev;
+ struct blk_mq_tag_set *set = &host->tag_set;
+ int intr_mode = fnic->config.intr_mode;
+
+ if (intr_mode == VNIC_DEV_INTR_MODE_MSI || intr_mode == VNIC_DEV_INTR_MODE_INTX) {
+ shost_printk(KERN_ERR, fnic->lport->host,
+ "fnic<%d>: %s: %d: intr_mode is not msix\n",
+ fnic->fnic_num, __func__, __LINE__);
+ return;
+ }
+
+ shost_printk(KERN_INFO, fnic->lport->host,
+ "fnic<%d>: %s: %d: set->nr_hw_queues: %d\n",
+ fnic->fnic_num, __func__, __LINE__, set->nr_hw_queues);
+
+ for (queue = 0; queue < set->nr_hw_queues; queue++) {
+ if (l_pdev == NULL) {
+ shost_printk(KERN_ERR, fnic->lport->host,
+ "fnic<%d>: %s: %d: l_pdev is null\n",
+ fnic->fnic_num, __func__, __LINE__);
+ return;
+ }
+
+ irq_num = pci_irq_vector(l_pdev, queue+2);
+ if (irq_num < 0) {
+ shost_printk(KERN_ERR, fnic->lport->host,
+ "fnic<%d>: %s: %d: irq_num less than zero: %d\n",
+ fnic->fnic_num, __func__, __LINE__, irq_num);
+ continue;
+ }
+
+ mask = irq_get_effective_affinity_mask(irq_num);
+ if (!mask) {
+ shost_printk(KERN_ERR, fnic->lport->host,
+ "fnic<%d>: %s: %d: failed to get irq_affinity map for queue: %d\n",
+ fnic->fnic_num, __func__, __LINE__, irq_num);
+ continue;
+ }
+
+ for_each_cpu(cpu, mask) {
+ set->map[HCTX_TYPE_DEFAULT].mq_map[cpu] = queue;
+ shost_printk(KERN_INFO, fnic->lport->host,
+ "fnic<%d>: %s: %d: cpu: %d <=> queue: %d\n",
+ fnic->fnic_num, __func__, __LINE__, cpu, irq_num);
+ }
+ }
+}
+
static int fnic_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
{
struct Scsi_Host *host;
struct fc_lport *lp;
struct fnic *fnic;
mempool_t *pool;
- int err;
+ int err = 0;
int i;
unsigned long flags;
+ int hwq;

atomic_inc(&fnic_num);
/*
@@ -607,8 +660,8 @@ static int fnic_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
fnic = lport_priv(lp);
fnic->lport = lp;
fnic->ctlr.lp = lp;
-
fnic->link_events = 0;
+ fnic->pdev = pdev;

snprintf(fnic->name, sizeof(fnic->name) - 1, "%s%d", DRV_NAME,
host->host_no);
@@ -617,11 +670,6 @@ static int fnic_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
fnic->fnic_num = atomic_read(&fnic_num);
fnic_stats_debugfs_init(fnic);

- /* Setup PCI resources */
- pci_set_drvdata(pdev, fnic);
-
- fnic->pdev = pdev;
-
err = pci_enable_device(pdev);
if (err) {
shost_printk(KERN_ERR, fnic->lport->host,
@@ -723,7 +771,8 @@ static int fnic_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
goto err_out_dev_close;
}

- fnic_scsi_drv_init(fnic);
+ /* Setup PCI resources */
+ pci_set_drvdata(pdev, fnic);

fnic_get_res_counts(fnic);

@@ -743,6 +792,16 @@ static int fnic_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
goto err_out_clear_intr;
}

+ fnic_scsi_drv_init(fnic);
+
+ for (hwq = 0; hwq < fnic->wq_copy_count; hwq++) {
+ fnic->sw_copy_wq[hwq].ioreq_table_size = fnic->fnic_max_tag_id;
+ fnic->sw_copy_wq[hwq].io_req_table =
+ kzalloc((fnic->sw_copy_wq[hwq].ioreq_table_size + 1) *
+ sizeof(struct fnic_io_req *), GFP_KERNEL);
+ }
+ shost_printk(KERN_INFO, fnic->lport->host, "fnic copy wqs: %d, Q0 ioreq table size: %d\n",
+ fnic->wq_copy_count, fnic->sw_copy_wq[0].ioreq_table_size);

/* initialize all fnic locks */
spin_lock_init(&fnic->fnic_lock);
@@ -827,16 +886,32 @@ static int fnic_probe(struct pci_dev *pdev, const struct pci_device_id *ent)

/* allocate RQ buffers and post them to RQ*/
for (i = 0; i < fnic->rq_count; i++) {
- vnic_rq_enable(&fnic->rq[i]);
err = vnic_rq_fill(&fnic->rq[i], fnic_alloc_rq_frame);
if (err) {
shost_printk(KERN_ERR, fnic->lport->host,
"fnic_alloc_rq_frame can't alloc "
"frame\n");
- goto err_out_free_rq_buf;
+ goto err_out_rq_buf;
}
}

+ /* Enable all queues */
+ for (i = 0; i < fnic->raw_wq_count; i++)
+ vnic_wq_enable(&fnic->wq[i]);
+ for (i = 0; i < fnic->rq_count; i++) {
+ if (!ioread32(&fnic->rq[i].ctrl->enable))
+ vnic_rq_enable(&fnic->rq[i]);
+ }
+ for (i = 0; i < fnic->wq_copy_count; i++)
+ vnic_wq_copy_enable(&fnic->hw_copy_wq[i]);
+
+ err = fnic_request_intr(fnic);
+ if (err) {
+ shost_printk(KERN_ERR, fnic->lport->host,
+ "Unable to request irq.\n");
+ goto err_out_request_intr;
+ }
+
/*
* Initialization done with PCI system, hardware, firmware.
* Add host to SCSI
@@ -845,9 +920,10 @@ static int fnic_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
if (err) {
shost_printk(KERN_ERR, fnic->lport->host,
"fnic: scsi_add_host failed...exiting\n");
- goto err_out_free_rq_buf;
+ goto err_out_scsi_add_host;
}

+
/* Start local port initiatialization */

lp->link_up = 0;
@@ -871,7 +947,7 @@ static int fnic_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
if (!fc_exch_mgr_alloc(lp, FC_CLASS_3, FCPIO_HOST_EXCH_RANGE_START,
FCPIO_HOST_EXCH_RANGE_END, NULL)) {
err = -ENOMEM;
- goto err_out_remove_scsi_host;
+ goto err_out_fc_exch_mgr_alloc;
}

fc_lport_init_stats(lp);
@@ -899,21 +975,8 @@ static int fnic_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
skb_queue_head_init(&fnic->frame_queue);
skb_queue_head_init(&fnic->tx_queue);

- /* Enable all queues */
- for (i = 0; i < fnic->raw_wq_count; i++)
- vnic_wq_enable(&fnic->wq[i]);
- for (i = 0; i < fnic->wq_copy_count; i++)
- vnic_wq_copy_enable(&fnic->hw_copy_wq[i]);
-
fc_fabric_login(lp);

- err = fnic_request_intr(fnic);
- if (err) {
- shost_printk(KERN_ERR, fnic->lport->host,
- "Unable to request irq.\n");
- goto err_out_free_exch_mgr;
- }
-
vnic_dev_enable(fnic->vdev);

for (i = 0; i < fnic->intr_count; i++)
@@ -925,12 +988,15 @@ static int fnic_probe(struct pci_dev *pdev, const struct pci_device_id *ent)

err_out_free_exch_mgr:
fc_exch_mgr_free(lp);
-err_out_remove_scsi_host:
+err_out_fc_exch_mgr_alloc:
fc_remove_host(lp->host);
scsi_remove_host(lp->host);
-err_out_free_rq_buf:
+err_out_scsi_add_host:
+ fnic_free_intr(fnic);
+err_out_request_intr:
for (i = 0; i < fnic->rq_count; i++)
vnic_rq_clean(&fnic->rq[i], fnic_free_rq_buf);
+err_out_rq_buf:
vnic_dev_notify_unset(fnic->vdev);
err_out_free_max_pool:
mempool_destroy(fnic->io_sgl_pool[FNIC_SGL_CACHE_MAX]);
@@ -939,6 +1005,8 @@ static int fnic_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
err_out_free_ioreq_pool:
mempool_destroy(fnic->io_req_pool);
err_out_free_resources:
+ for (hwq = 0; hwq < fnic->wq_copy_count; hwq++)
+ kfree(fnic->sw_copy_wq[hwq].io_req_table);
fnic_free_vnic_resources(fnic);
err_out_clear_intr:
fnic_clear_intr_mode(fnic);
@@ -965,6 +1033,7 @@ static void fnic_remove(struct pci_dev *pdev)
struct fnic *fnic = pci_get_drvdata(pdev);
struct fc_lport *lp = fnic->lport;
unsigned long flags;
+ int hwq;

/*
* Mark state so that the workqueue thread stops forwarding
@@ -1025,6 +1094,8 @@ static void fnic_remove(struct pci_dev *pdev)

fc_remove_host(fnic->lport->host);
scsi_remove_host(fnic->lport->host);
+ for (hwq = 0; hwq < fnic->wq_copy_count; hwq++)
+ kfree(fnic->sw_copy_wq[hwq].io_req_table);
fc_exch_mgr_free(fnic->lport);
vnic_dev_notify_unset(fnic->vdev);
fnic_free_intr(fnic);
--
2.31.1

Subject: [PATCH 11/13] scsi: fnic: Use fnic_lock to protect fnic structures in queuecommand

fnic does not use host_lock; it uses fnic_lock.
Use fnic_lock to protect fnic members in fnic_queuecommand.
Add log messages in error cases.

Reviewed-by: Sesidhar Baddela <[email protected]>
Reviewed-by: Arulprabhu Ponnusamy <[email protected]>
Signed-off-by: Karan Tilak Kumar <[email protected]>
---
drivers/scsi/fnic/fnic_scsi.c | 28 +++++++++++++++++++++++++---
1 file changed, 25 insertions(+), 3 deletions(-)

diff --git a/drivers/scsi/fnic/fnic_scsi.c b/drivers/scsi/fnic/fnic_scsi.c
index 9a1beb3e7269..ffdbdbfcdf57 100644
--- a/drivers/scsi/fnic/fnic_scsi.c
+++ b/drivers/scsi/fnic/fnic_scsi.c
@@ -424,14 +424,27 @@ static int fnic_queuecommand_lck(struct scsi_cmnd *sc)
int io_lock_acquired = 0;
struct fc_rport_libfc_priv *rp;

- if (unlikely(fnic_chk_state_flags_locked(fnic, FNIC_FLAGS_IO_BLOCKED)))
+ spin_lock_irqsave(&fnic->fnic_lock, flags);
+
+ if (unlikely(fnic_chk_state_flags_locked(fnic, FNIC_FLAGS_IO_BLOCKED))) {
+ spin_unlock_irqrestore(&fnic->fnic_lock, flags);
+ FNIC_SCSI_DBG(KERN_ERR, fnic->lport->host,
+ "fnic<%d>: %s: %d: fnic IO blocked flags: 0x%lx. Returning SCSI_MLQUEUE_HOST_BUSY\n",
+ fnic->fnic_num, __func__, __LINE__, fnic->state_flags);
return SCSI_MLQUEUE_HOST_BUSY;
+ }

- if (unlikely(fnic_chk_state_flags_locked(fnic, FNIC_FLAGS_FWRESET)))
+ if (unlikely(fnic_chk_state_flags_locked(fnic, FNIC_FLAGS_FWRESET))) {
+ spin_unlock_irqrestore(&fnic->fnic_lock, flags);
+ FNIC_SCSI_DBG(KERN_ERR, fnic->lport->host,
+ "fnic<%d>: %s: %d: fnic flags: 0x%lx. Returning SCSI_MLQUEUE_HOST_BUSY\n",
+ fnic->fnic_num, __func__, __LINE__, fnic->state_flags);
return SCSI_MLQUEUE_HOST_BUSY;
+ }

rport = starget_to_rport(scsi_target(sc->device));
if (!rport) {
+ spin_unlock_irqrestore(&fnic->fnic_lock, flags);
FNIC_SCSI_DBG(KERN_DEBUG, fnic->lport->host,
"returning DID_NO_CONNECT for IO as rport is NULL\n");
sc->result = DID_NO_CONNECT << 16;
@@ -441,6 +454,7 @@ static int fnic_queuecommand_lck(struct scsi_cmnd *sc)

ret = fc_remote_port_chkready(rport);
if (ret) {
+ spin_unlock_irqrestore(&fnic->fnic_lock, flags);
FNIC_SCSI_DBG(KERN_DEBUG, fnic->lport->host,
"rport is not ready\n");
atomic64_inc(&fnic_stats->misc_stats.rport_not_ready);
@@ -451,6 +465,7 @@ static int fnic_queuecommand_lck(struct scsi_cmnd *sc)

rp = rport->dd_data;
if (!rp || rp->rp_state == RPORT_ST_DELETE) {
+ spin_unlock_irqrestore(&fnic->fnic_lock, flags);
FNIC_SCSI_DBG(KERN_DEBUG, fnic->lport->host,
"rport 0x%x removed, returning DID_NO_CONNECT\n",
rport->port_id);
@@ -462,6 +477,7 @@ static int fnic_queuecommand_lck(struct scsi_cmnd *sc)
}

if (rp->rp_state != RPORT_ST_READY) {
+ spin_unlock_irqrestore(&fnic->fnic_lock, flags);
FNIC_SCSI_DBG(KERN_DEBUG, fnic->lport->host,
"rport 0x%x in state 0x%x, returning DID_IMM_RETRY\n",
rport->port_id, rp->rp_state);
@@ -471,11 +487,17 @@ static int fnic_queuecommand_lck(struct scsi_cmnd *sc)
return 0;
}

- if (lp->state != LPORT_ST_READY || !(lp->link_up))
+ if (lp->state != LPORT_ST_READY || !(lp->link_up)) {
+ spin_unlock_irqrestore(&fnic->fnic_lock, flags);
+ FNIC_SCSI_DBG(KERN_ERR, fnic->lport->host,
+ "fnic<%d>: %s: %d: state not ready: %d/link not up: %d Returning HOST_BUSY\n",
+ fnic->fnic_num, __func__, __LINE__, lp->state, lp->link_up);
return SCSI_MLQUEUE_HOST_BUSY;
+ }

atomic_inc(&fnic->in_flight);

+ spin_unlock_irqrestore(&fnic->fnic_lock, flags);
fnic_priv(sc)->state = FNIC_IOREQ_NOT_INITED;
fnic_priv(sc)->flags = FNIC_NO_FLAGS;

--
2.31.1

Subject: [PATCH 13/13] scsi: fnic: Improve logs and add support for multiqueue (MQ)

Improve existing logs by adding the fnic number, hardware queue,
tag, and mqtag to the prints.
Add logs with the above elements for effective debugging.

Reviewed-by: Sesidhar Baddela <[email protected]>
Reviewed-by: Arulprabhu Ponnusamy <[email protected]>
Tested-by: Karan Tilak Kumar <[email protected]>
Signed-off-by: Karan Tilak Kumar <[email protected]>
---
drivers/scsi/fnic/fnic.h | 2 +-
drivers/scsi/fnic/fnic_scsi.c | 127 ++++++++++++++++++++--------------
2 files changed, 77 insertions(+), 52 deletions(-)

diff --git a/drivers/scsi/fnic/fnic.h b/drivers/scsi/fnic/fnic.h
index bfc333366d1f..35c570d595e9 100644
--- a/drivers/scsi/fnic/fnic.h
+++ b/drivers/scsi/fnic/fnic.h
@@ -27,7 +27,7 @@

#define DRV_NAME "fnic"
#define DRV_DESCRIPTION "Cisco FCoE HBA Driver"
-#define DRV_VERSION "1.6.0.56"
+#define DRV_VERSION "1.6.0.58"
#define PFX DRV_NAME ": "
#define DFX DRV_NAME "%d: "

diff --git a/drivers/scsi/fnic/fnic_scsi.c b/drivers/scsi/fnic/fnic_scsi.c
index 7ca34a76f84e..f5b7a7de83d7 100644
--- a/drivers/scsi/fnic/fnic_scsi.c
+++ b/drivers/scsi/fnic/fnic_scsi.c
@@ -211,12 +211,14 @@ int fnic_fw_reset_handler(struct fnic *fnic)

if (!ret) {
atomic64_inc(&fnic->fnic_stats.reset_stats.fw_resets);
- FNIC_SCSI_DBG(KERN_DEBUG, fnic->lport->host,
- "Issued fw reset\n");
+ FNIC_SCSI_DBG(KERN_INFO, fnic->lport->host,
+ "fnic<%d>: %s: %d: Issued fw reset\n",
+ fnic->fnic_num, __func__, __LINE__);
} else {
fnic_clear_state_flags(fnic, FNIC_FLAGS_FWRESET);
- FNIC_SCSI_DBG(KERN_DEBUG, fnic->lport->host,
- "Failed to issue fw reset\n");
+ FNIC_SCSI_DBG(KERN_ERR, fnic->lport->host,
+ "fnic<%d>: %s: %d: Failed to issue fw reset\n",
+ fnic->fnic_num, __func__, __LINE__);
}

return ret;
@@ -265,9 +267,9 @@ int fnic_flogi_reg_handler(struct fnic *fnic, u32 fc_id)
} else {
fnic_queue_wq_copy_desc_flogi_reg(wq, SCSI_NO_TAG,
format, fc_id, gw_mac);
- FNIC_SCSI_DBG(KERN_DEBUG, fnic->lport->host,
- "FLOGI reg issued fcid %x map %d dest %pM\n",
- fc_id, fnic->ctlr.map_dest, gw_mac);
+ FNIC_SCSI_DBG(KERN_INFO, fnic->lport->host,
+ "fnic<%d>: %s: %d: FLOGI reg issued fcid 0x%x map %d dest 0x%p\n",
+ fnic->fnic_num, __func__, __LINE__, fc_id, fnic->ctlr.map_dest, gw_mac);
}

atomic64_inc(&fnic->fnic_stats.fw_stats.active_fw_reqs);
@@ -662,15 +664,16 @@ static int fnic_fcpio_fw_reset_cmpl_handler(struct fnic *fnic,
if (fnic->state == FNIC_IN_FC_TRANS_ETH_MODE) {
/* Check status of reset completion */
if (!hdr_status) {
- FNIC_SCSI_DBG(KERN_DEBUG, fnic->lport->host,
- "reset cmpl success\n");
+ FNIC_SCSI_DBG(KERN_INFO, fnic->lport->host,
+ "fnic<%d>: %s: %d: reset cmpl success\n",
+ fnic->fnic_num, __func__, __LINE__);
/* Ready to send flogi out */
fnic->state = FNIC_IN_ETH_MODE;
} else {
- FNIC_SCSI_DBG(KERN_DEBUG,
- fnic->lport->host,
- "fnic fw_reset : failed %s\n",
- fnic_fcpio_status_to_str(hdr_status));
+ FNIC_SCSI_DBG(KERN_ERR, fnic->lport->host,
+ "fnic<%d>: %s: %d: reset failed with header status: %s\n",
+ fnic->fnic_num, __func__, __LINE__,
+ fnic_fcpio_status_to_str(hdr_status));

/*
* Unable to change to eth mode, cannot send out flogi
@@ -683,10 +686,9 @@ static int fnic_fcpio_fw_reset_cmpl_handler(struct fnic *fnic,
ret = -1;
}
} else {
- FNIC_SCSI_DBG(KERN_DEBUG,
- fnic->lport->host,
- "Unexpected state %s while processing"
- " reset cmpl\n", fnic_state_to_str(fnic->state));
+ FNIC_SCSI_DBG(KERN_ERR, fnic->lport->host,
+ "fnic<%d>: %s: %d: Unexpected state while processing reset completion: %s\n",
+ fnic->fnic_num, __func__, __LINE__, fnic_state_to_str(fnic->state));
atomic64_inc(&reset_stats->fw_reset_failures);
ret = -1;
}
@@ -1195,9 +1197,10 @@ static void fnic_fcpio_itmf_cmpl_handler(struct fnic *fnic, unsigned int cq_inde
if ((id & FNIC_TAG_ABORT) && (id & FNIC_TAG_DEV_RST)) {
/* Abort and terminate completion of device reset req */
/* REVISIT : Add asserts about various flags */
- FNIC_SCSI_DBG(KERN_DEBUG, fnic->lport->host,
- "dev reset abts cmpl recd. id %x status %s\n",
- id, fnic_fcpio_status_to_str(hdr_status));
+ FNIC_SCSI_DBG(KERN_INFO, fnic->lport->host,
+ "fnic<%d>: %s: %d: hwq: %d mqtag: 0x%x tag: 0x%x hst: %s Abt/term completion received\n",
+ fnic->fnic_num, __func__, __LINE__, hwq, mqtag, tag,
+ fnic_fcpio_status_to_str(hdr_status));
fnic_priv(sc)->state = FNIC_IOREQ_ABTS_COMPLETE;
fnic_priv(sc)->abts_status = hdr_status;
fnic_priv(sc)->flags |= FNIC_DEV_RST_DONE;
@@ -1206,6 +1209,10 @@ static void fnic_fcpio_itmf_cmpl_handler(struct fnic *fnic, unsigned int cq_inde
spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags);
} else if (id & FNIC_TAG_ABORT) {
/* Completion of abort cmd */
+ shost_printk(KERN_DEBUG, fnic->lport->host,
+ "fnic<%d>: %s: %d: hwq: %d mqtag: 0x%x tag: 0x%x Abort header status: %s\n",
+ fnic->fnic_num, __func__, __LINE__, hwq, mqtag, tag,
+ fnic_fcpio_status_to_str(hdr_status));
switch (hdr_status) {
case FCPIO_SUCCESS:
break;
@@ -1265,9 +1272,14 @@ static void fnic_fcpio_itmf_cmpl_handler(struct fnic *fnic, unsigned int cq_inde
if (io_req->abts_done) {
complete(io_req->abts_done);
spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags);
+ shost_printk(KERN_INFO, fnic->lport->host,
+ "fnic<%d>: %s: %d: hwq: %d mqtag: 0x%x tag: 0x%x Waking up abort thread\n",
+ fnic->fnic_num, __func__, __LINE__, hwq, mqtag, tag);
} else {
FNIC_SCSI_DBG(KERN_DEBUG, fnic->lport->host,
- "abts cmpl, completing IO\n");
+ "fnic<%d>: %s: %d: hwq: %d mqtag: 0x%x tag: 0x%x hst: %s Completing IO\n",
+ fnic->fnic_num, __func__, __LINE__, hwq, mqtag,
+ tag, fnic_fcpio_status_to_str(hdr_status));
fnic_priv(sc)->io_req = NULL;
sc->result = (DID_ERROR << 16);
fnic->sw_copy_wq[hwq].io_req_table[tag] = NULL;
@@ -1295,6 +1307,10 @@ static void fnic_fcpio_itmf_cmpl_handler(struct fnic *fnic, unsigned int cq_inde
}
} else if (id & FNIC_TAG_DEV_RST) {
/* Completion of device reset */
+ shost_printk(KERN_INFO, fnic->lport->host,
+ "fnic<%d>: %s: %d: hwq: %d mqtag: 0x%x tag: 0x%x DR hst: %s\n",
+ fnic->fnic_num, __func__, __LINE__, hwq, mqtag,
+ tag, fnic_fcpio_status_to_str(hdr_status));
fnic_priv(sc)->lr_status = hdr_status;
if (fnic_priv(sc)->state == FNIC_IOREQ_ABTS_PENDING) {
spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags);
@@ -1304,10 +1320,9 @@ static void fnic_fcpio_itmf_cmpl_handler(struct fnic *fnic, unsigned int cq_inde
jiffies_to_msecs(jiffies - start_time),
desc, 0, fnic_flags_and_state(sc));
FNIC_SCSI_DBG(KERN_DEBUG, fnic->lport->host,
- "Terminate pending "
- "dev reset cmpl recd. id %d status %s\n",
- (int)(id & FNIC_TAG_MASK),
- fnic_fcpio_status_to_str(hdr_status));
+ "fnic<%d>: %s: %d: hwq: %d mqtag: 0x%x tag: 0x%x hst: %s Terminate pending\n",
+ fnic->fnic_num, __func__, __LINE__, hwq, mqtag,
+ tag, fnic_fcpio_status_to_str(hdr_status));
return;
}
if (fnic_priv(sc)->flags & FNIC_DEV_RST_TIMED_OUT) {
@@ -1326,18 +1341,18 @@ static void fnic_fcpio_itmf_cmpl_handler(struct fnic *fnic, unsigned int cq_inde
}
fnic_priv(sc)->state = FNIC_IOREQ_CMD_COMPLETE;
fnic_priv(sc)->flags |= FNIC_DEV_RST_DONE;
- FNIC_SCSI_DBG(KERN_DEBUG, fnic->lport->host,
- "dev reset cmpl recd. id %d status %s\n",
- (int)(id & FNIC_TAG_MASK),
- fnic_fcpio_status_to_str(hdr_status));
+ FNIC_SCSI_DBG(KERN_INFO, fnic->lport->host,
+ "fnic<%d>: %s: %d: hwq: %d mqtag: 0x%x tag: 0x%x hst: %s DR completion received\n",
+ fnic->fnic_num, __func__, __LINE__, hwq, mqtag,
+ tag, fnic_fcpio_status_to_str(hdr_status));
if (io_req->dr_done)
complete(io_req->dr_done);
spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags);

} else {
shost_printk(KERN_ERR, fnic->lport->host,
- "Unexpected itmf io state %s tag %x\n",
- fnic_ioreq_state_to_str(fnic_priv(sc)->state), id);
+ "%s: Unexpected itmf io state: hwq: %d tag 0x%x %s\n",
+ __func__, hwq, id, fnic_ioreq_state_to_str(fnic_priv(sc)->state));
spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags);
}

@@ -1488,9 +1503,9 @@ static bool fnic_cleanup_io_iter(struct scsi_cmnd *sc, void *data)
mempool_free(io_req, fnic->io_req_pool);

sc->result = DID_TRANSPORT_DISRUPTED << 16;
- FNIC_SCSI_DBG(KERN_DEBUG, fnic->lport->host,
- "fnic_cleanup_io: tag:0x%x : sc:0x%p duration = %lu DID_TRANSPORT_DISRUPTED\n",
- tag, sc, jiffies - start_time);
+ FNIC_SCSI_DBG(KERN_ERR, fnic->lport->host,
+ "fnic<%d>: %s: %d: mqtag:0x%x tag: 0x%x sc:0x%p duration = %lu DID_TRANSPORT_DISRUPTED\n",
+ fnic->fnic_num, __func__, __LINE__, mqtag, tag, sc, (jiffies - start_time));

if (atomic64_read(&fnic->io_cmpl_skip))
atomic64_dec(&fnic->io_cmpl_skip);
@@ -1659,9 +1674,9 @@ static bool fnic_rport_abort_io_iter(struct scsi_cmnd *sc, void *data)

if ((fnic_priv(sc)->flags & FNIC_DEVICE_RESET) &&
!(fnic_priv(sc)->flags & FNIC_DEV_RST_ISSUED)) {
- FNIC_SCSI_DBG(KERN_DEBUG, fnic->lport->host,
- "fnic_rport_exch_reset dev rst not pending sc 0x%p\n",
- sc);
+ FNIC_SCSI_DBG(KERN_ERR, fnic->lport->host,
+ "fnic<%d>: %s: %d: hwq: %d abt_tag: 0x%x flags: 0x%x Device reset is not pending\n",
+ fnic->fnic_num, __func__, __LINE__, hwq, abt_tag, fnic_priv(sc)->flags);
spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags);
return true;
}
@@ -1717,6 +1732,9 @@ static bool fnic_rport_abort_io_iter(struct scsi_cmnd *sc, void *data)
* lun reset
*/
spin_lock_irqsave(&fnic->wq_copy_lock[hwq], flags);
+ FNIC_SCSI_DBG(KERN_ERR, fnic->lport->host,
+ "fnic<%d>: %s: %d: hwq: %d abt_tag: 0x%x flags: 0x%x Queuing abort failed\n",
+ fnic->fnic_num, __func__, __LINE__, hwq, abt_tag, fnic_priv(sc)->flags);
if (fnic_priv(sc)->state == FNIC_IOREQ_ABTS_PENDING)
fnic_priv(sc)->state = old_ioreq_state;
spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags);
@@ -1887,8 +1905,9 @@ int fnic_abort_cmd(struct scsi_cmnd *sc)
else
atomic64_inc(&abts_stats->abort_issued_greater_than_60_sec);

- FNIC_SCSI_DBG(KERN_INFO, fnic->lport->host,
- "CBD Opcode: %02x Abort issued time: %lu msec\n", sc->cmnd[0], abt_issued_time);
+ FNIC_SCSI_DBG(KERN_DEBUG, fnic->lport->host,
+ "fnic<%d>: %s: CDB Opcode: 0x%02x Abort issued time: %lu msec\n",
+ fnic->fnic_num, __func__, sc->cmnd[0], abt_issued_time);
/*
* Command is still pending, need to abort it
* If the firmware completes the command after this point,
@@ -1977,8 +1996,9 @@ int fnic_abort_cmd(struct scsi_cmnd *sc)

if (!(fnic_priv(sc)->flags & (FNIC_IO_ABORTED | FNIC_IO_DONE))) {
spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags);
- FNIC_SCSI_DBG(KERN_DEBUG, fnic->lport->host,
- "Issuing Host reset due to out of order IO\n");
+ FNIC_SCSI_DBG(KERN_ERR, fnic->lport->host,
+ "fnic<%d>: %s: Issuing host reset due to out of order IO\n",
+ fnic->fnic_num, __func__);

ret = FAILED;
goto fnic_abort_cmd_end;
@@ -2185,6 +2205,9 @@ static bool fnic_pending_aborts_iter(struct scsi_cmnd *sc, void *data)
fnic_priv(sc)->state = old_ioreq_state;
spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags);
iter_data->ret = FAILED;
+ FNIC_SCSI_DBG(KERN_ERR, fnic->lport->host,
+ "fnic<%d>: %s: %d: hwq: %d abt_tag: 0x%lx Abort could not be queued\n",
+ fnic->fnic_num, __func__, __LINE__, hwq, abt_tag);
return false;
} else {
spin_lock_irqsave(&fnic->wq_copy_lock[hwq], flags);
@@ -2318,8 +2341,9 @@ int fnic_device_reset(struct scsi_cmnd *sc)

rport = starget_to_rport(scsi_target(sc->device));
FNIC_SCSI_DBG(KERN_DEBUG, fnic->lport->host,
- "Device reset called FCID 0x%x, LUN 0x%llx sc 0x%p\n",
- rport->port_id, sc->device->lun, sc);
+ "fnic<%d>: %s: %d: fcid: 0x%x lun: 0x%llx hwq: %d mqtag: 0x%x flags: 0x%x Device reset\n",
+ fnic->fnic_num, __func__, __LINE__, rport->port_id, sc->device->lun, hwq, mqtag,
+ fnic_priv(sc)->flags);

if (lp->state != LPORT_ST_READY || !(lp->link_up))
goto fnic_device_reset_end;
@@ -2554,8 +2578,9 @@ int fnic_reset(struct Scsi_Host *shost)
fnic = lport_priv(lp);
reset_stats = &fnic->fnic_stats.reset_stats;

- FNIC_SCSI_DBG(KERN_DEBUG, fnic->lport->host,
- "fnic_reset called\n");
+ FNIC_SCSI_DBG(KERN_INFO, fnic->lport->host,
+ "fnic<%d>: %s: %d: Issuing fnic reset\n",
+ fnic->fnic_num, __func__, __LINE__);

atomic64_inc(&reset_stats->fnic_resets);

@@ -2565,10 +2590,9 @@ int fnic_reset(struct Scsi_Host *shost)
*/
ret = fc_lport_reset(lp);

- FNIC_SCSI_DBG(KERN_DEBUG, fnic->lport->host,
- "Returning from fnic reset %s\n",
- (ret == 0) ?
- "SUCCESS" : "FAILED");
+ FNIC_SCSI_DBG(KERN_INFO, fnic->lport->host,
+ "fnic<%d>: %s: %d: Returning from fnic reset with: %s\n",
+ fnic->fnic_num, __func__, __LINE__, (ret == 0) ? "SUCCESS" : "FAILED");

if (ret == 0)
atomic64_inc(&reset_stats->fnic_reset_completions);
@@ -2784,8 +2808,9 @@ static bool fnic_abts_pending_iter(struct scsi_cmnd *sc, void *data)
* belongs to the LUN that we are resetting
*/
FNIC_SCSI_DBG(KERN_INFO, fnic->lport->host,
- "Found IO in %s on lun\n",
- fnic_ioreq_state_to_str(fnic_priv(sc)->state));
+ "fnic<%d>: %s: %d: hwq: %d tag: 0x%x Found IO in state: %s on lun\n",
+ fnic->fnic_num, __func__, __LINE__, hwq, tag,
+ fnic_ioreq_state_to_str(fnic_priv(sc)->state));
cmd_state = fnic_priv(sc)->state;
spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags);
if (cmd_state == FNIC_IOREQ_ABTS_PENDING)
--
2.31.1

Subject: [PATCH 12/13] scsi: fnic: Add support for multiqueue (MQ) in fnic driver

Implement support for MQ in the fnic driver:

The block multiqueue layer issues IO to the fnic driver
with an MQ tag (mqtag). Derive the tag and the hardware
queue from the mqtag, and use them in all IO paths.
Modify queuecommand to handle the mqtag.
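
For reference, a minimal sketch of the tag decoding this patch relies
on. The blk_mq_unique_tag*() helpers are the real block layer API; the
wrapper function below is illustrative only and not part of the series:

#include <linux/blk-mq.h>
#include <linux/printk.h>
#include <scsi/scsi_cmnd.h>

/* Illustrative only: decode the block layer's unique tag into the
 * hardware queue index and the per-queue tag, as queuecommand and
 * the completion handlers do in this series.
 */
static void example_decode_mqtag(struct scsi_cmnd *sc)
{
	struct request *rq = scsi_cmd_to_rq(sc);
	u32 mqtag = blk_mq_unique_tag(rq);          /* hwq in upper bits, tag in lower bits */
	u16 hwq = blk_mq_unique_tag_to_hwq(mqtag);  /* index into per-queue resources */
	u16 tag = blk_mq_unique_tag_to_tag(mqtag);  /* index into the per-queue io_req table */

	pr_debug("hwq: %u tag: 0x%x mqtag: 0x%x\n", hwq, tag, mqtag);
}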

Replace the wq and cq indices to support MQ, and replace the
hard-coded zeroth queue with the derived hardware queue.
Implement spinlocks on a per-hardware-queue basis and replace
io_lock with the per-hardware-queue spinlock.
Implement out-of-range tag checks.

Allocate an io_req_table per hardware queue to track the state of io_reqs.
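
Below is a simplified, hypothetical sketch of the per-hardware-queue
bookkeeping described above. The lock and table names follow this patch
(wq_copy_lock, sw_copy_wq[].io_req_table and the driver's internal
structs from fnic.h); the helper itself does not exist in the driver:

/* Hypothetical helper: register an io_req under the per-hwq lock,
 * mirroring what fnic_queuecommand_lck does in this patch.
 */
static int example_track_io_req(struct fnic *fnic, u16 hwq, u32 tag,
				struct fnic_io_req *io_req)
{
	unsigned long flags;

	spin_lock_irqsave(&fnic->wq_copy_lock[hwq], flags);
	if (fnic->sw_copy_wq[hwq].io_req_table[tag]) {
		/* tag already in use: drop the lock and let the midlayer retry */
		spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags);
		return -EBUSY;
	}
	fnic->sw_copy_wq[hwq].io_req_table[tag] = io_req;
	spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags);
	return 0;
}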

Test the driver by building it, loading it,
and configuring 64 queues in UCSM. Issue IOs using
Medusa on multiple fnics. Enable/disable links to exercise
the abort and cleanup paths.

Reviewed-by: Sesidhar Baddela <[email protected]>
Reviewed-by: Arulprabhu Ponnusamy <[email protected]>
Tested-by: Karan Tilak Kumar <[email protected]>
Signed-off-by: Karan Tilak Kumar <[email protected]>
---
drivers/scsi/fnic/fnic.h | 2 -
drivers/scsi/fnic/fnic_io.h | 2 +
drivers/scsi/fnic/fnic_main.c | 3 -
drivers/scsi/fnic/fnic_scsi.c | 579 +++++++++++++++++++++-------------
4 files changed, 359 insertions(+), 227 deletions(-)

diff --git a/drivers/scsi/fnic/fnic.h b/drivers/scsi/fnic/fnic.h
index 182194b6d51d..bfc333366d1f 100644
--- a/drivers/scsi/fnic/fnic.h
+++ b/drivers/scsi/fnic/fnic.h
@@ -36,7 +36,6 @@
#define FNIC_MIN_IO_REQ 256 /* Min IO throttle count */
#define FNIC_MAX_IO_REQ 1024 /* scsi_cmnd tag map entries */
#define FNIC_DFLT_IO_REQ 256 /* Default scsi_cmnd tag map entries */
-#define FNIC_IO_LOCKS 64 /* IO locks: power of 2 */
#define FNIC_DFLT_QUEUE_DEPTH 256
#define FNIC_STATS_RATE_LIMIT 4 /* limit rate at which stats are pulled up */

@@ -292,7 +291,6 @@ struct fnic {
struct fnic_host_tag *tags;
mempool_t *io_req_pool;
mempool_t *io_sgl_pool[FNIC_SGL_NUM_CACHES];
- spinlock_t io_req_lock[FNIC_IO_LOCKS]; /* locks for scsi cmnds */

unsigned int cpy_wq_base;
struct work_struct link_work;
diff --git a/drivers/scsi/fnic/fnic_io.h b/drivers/scsi/fnic/fnic_io.h
index f4c8769df312..5895ead20e14 100644
--- a/drivers/scsi/fnic/fnic_io.h
+++ b/drivers/scsi/fnic/fnic_io.h
@@ -52,6 +52,8 @@ struct fnic_io_req {
unsigned long start_time; /* in jiffies */
struct completion *abts_done; /* completion for abts */
struct completion *dr_done; /* completion for device reset */
+ unsigned int tag;
+ struct scsi_cmnd *sc; /* midlayer's cmd pointer */
};

enum fnic_port_speeds {
diff --git a/drivers/scsi/fnic/fnic_main.c b/drivers/scsi/fnic/fnic_main.c
index 5b60396e7349..9ab115a5fc74 100644
--- a/drivers/scsi/fnic/fnic_main.c
+++ b/drivers/scsi/fnic/fnic_main.c
@@ -816,9 +816,6 @@ static int fnic_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
fnic->fw_ack_index[i] = -1;
}

- for (i = 0; i < FNIC_IO_LOCKS; i++)
- spin_lock_init(&fnic->io_req_lock[i]);
-
err = -ENOMEM;
fnic->io_req_pool = mempool_create_slab_pool(2, fnic_io_req_cache);
if (!fnic->io_req_pool)
diff --git a/drivers/scsi/fnic/fnic_scsi.c b/drivers/scsi/fnic/fnic_scsi.c
index ffdbdbfcdf57..7ca34a76f84e 100644
--- a/drivers/scsi/fnic/fnic_scsi.c
+++ b/drivers/scsi/fnic/fnic_scsi.c
@@ -92,20 +92,6 @@ static const char *fnic_fcpio_status_to_str(unsigned int status)

static void fnic_cleanup_io(struct fnic *fnic);

-static inline spinlock_t *fnic_io_lock_hash(struct fnic *fnic,
- struct scsi_cmnd *sc)
-{
- u32 hash = scsi_cmd_to_rq(sc)->tag & (FNIC_IO_LOCKS - 1);
-
- return &fnic->io_req_lock[hash];
-}
-
-static inline spinlock_t *fnic_io_lock_tag(struct fnic *fnic,
- int tag)
-{
- return &fnic->io_req_lock[tag & (FNIC_IO_LOCKS - 1)];
-}
-
/*
* Unmap the data buffer and sense buffer for an io_req,
* also unmap and free the device-private scatter/gather list.
@@ -129,23 +115,23 @@ static void fnic_release_ioreq_buf(struct fnic *fnic,
}

/* Free up Copy Wq descriptors. Called with copy_wq lock held */
-static int free_wq_copy_descs(struct fnic *fnic, struct vnic_wq_copy *wq)
+static int free_wq_copy_descs(struct fnic *fnic, struct vnic_wq_copy *wq, unsigned int hwq)
{
/* if no Ack received from firmware, then nothing to clean */
- if (!fnic->fw_ack_recd[0])
+ if (!fnic->fw_ack_recd[hwq])
return 1;

/*
* Update desc_available count based on number of freed descriptors
* Account for wraparound
*/
- if (wq->to_clean_index <= fnic->fw_ack_index[0])
- wq->ring.desc_avail += (fnic->fw_ack_index[0]
+ if (wq->to_clean_index <= fnic->fw_ack_index[hwq])
+ wq->ring.desc_avail += (fnic->fw_ack_index[hwq]
- wq->to_clean_index + 1);
else
wq->ring.desc_avail += (wq->ring.desc_count
- wq->to_clean_index
- + fnic->fw_ack_index[0] + 1);
+ + fnic->fw_ack_index[hwq] + 1);

/*
* just bump clean index to ack_index+1 accounting for wraparound
@@ -153,10 +139,10 @@ static int free_wq_copy_descs(struct fnic *fnic, struct vnic_wq_copy *wq)
* to_clean_index and fw_ack_index, both inclusive
*/
wq->to_clean_index =
- (fnic->fw_ack_index[0] + 1) % wq->ring.desc_count;
+ (fnic->fw_ack_index[hwq] + 1) % wq->ring.desc_count;

/* we have processed the acks received so far */
- fnic->fw_ack_recd[0] = 0;
+ fnic->fw_ack_recd[hwq] = 0;
return 0;
}

@@ -207,7 +193,7 @@ int fnic_fw_reset_handler(struct fnic *fnic)
spin_lock_irqsave(&fnic->wq_copy_lock[0], flags);

if (vnic_wq_copy_desc_avail(wq) <= fnic->wq_copy_desc_low[0])
- free_wq_copy_descs(fnic, wq);
+ free_wq_copy_descs(fnic, wq, 0);

if (!vnic_wq_copy_desc_avail(wq))
ret = -EAGAIN;
@@ -253,7 +239,7 @@ int fnic_flogi_reg_handler(struct fnic *fnic, u32 fc_id)
spin_lock_irqsave(&fnic->wq_copy_lock[0], flags);

if (vnic_wq_copy_desc_avail(wq) <= fnic->wq_copy_desc_low[0])
- free_wq_copy_descs(fnic, wq);
+ free_wq_copy_descs(fnic, wq, 0);

if (!vnic_wq_copy_desc_avail(wq)) {
ret = -EAGAIN;
@@ -303,7 +289,9 @@ static inline int fnic_queue_wq_copy_desc(struct fnic *fnic,
struct vnic_wq_copy *wq,
struct fnic_io_req *io_req,
struct scsi_cmnd *sc,
- int sg_count)
+ int sg_count,
+ uint32_t mqtag,
+ uint16_t hwq)
{
struct scatterlist *sg;
struct fc_rport *rport = starget_to_rport(scsi_target(sc->device));
@@ -311,7 +299,6 @@ static inline int fnic_queue_wq_copy_desc(struct fnic *fnic,
struct host_sg_desc *desc;
struct misc_stats *misc_stats = &fnic->fnic_stats.misc_stats;
unsigned int i;
- unsigned long intr_flags;
int flags;
u8 exch_flags;
struct scsi_lun fc_lun;
@@ -351,13 +338,10 @@ static inline int fnic_queue_wq_copy_desc(struct fnic *fnic,
int_to_scsilun(sc->device->lun, &fc_lun);

/* Enqueue the descriptor in the Copy WQ */
- spin_lock_irqsave(&fnic->wq_copy_lock[0], intr_flags);
-
- if (vnic_wq_copy_desc_avail(wq) <= fnic->wq_copy_desc_low[0])
- free_wq_copy_descs(fnic, wq);
+ if (vnic_wq_copy_desc_avail(wq) <= fnic->wq_copy_desc_low[hwq])
+ free_wq_copy_descs(fnic, wq, hwq);

if (unlikely(!vnic_wq_copy_desc_avail(wq))) {
- spin_unlock_irqrestore(&fnic->wq_copy_lock[0], intr_flags);
FNIC_SCSI_DBG(KERN_INFO, fnic->lport->host,
"fnic_queue_wq_copy_desc failure - no descriptors\n");
atomic64_inc(&misc_stats->io_cpwq_alloc_failures);
@@ -375,7 +359,7 @@ static inline int fnic_queue_wq_copy_desc(struct fnic *fnic,
(rp->flags & FC_RP_FLAGS_RETRY))
exch_flags |= FCPIO_ICMND_SRFLAG_RETRY;

- fnic_queue_wq_copy_desc_icmnd_16(wq, scsi_cmd_to_rq(sc)->tag,
+ fnic_queue_wq_copy_desc_icmnd_16(wq, mqtag,
0, exch_flags, io_req->sgl_cnt,
SCSI_SENSE_BUFFERSIZE,
io_req->sgl_list_pa,
@@ -396,7 +380,6 @@ static inline int fnic_queue_wq_copy_desc(struct fnic *fnic,
atomic64_set(&fnic->fnic_stats.fw_stats.max_fw_reqs,
atomic64_read(&fnic->fnic_stats.fw_stats.active_fw_reqs));

- spin_unlock_irqrestore(&fnic->wq_copy_lock[0], intr_flags);
return 0;
}

@@ -405,22 +388,20 @@ static inline int fnic_queue_wq_copy_desc(struct fnic *fnic,
* Routine to send a scsi cdb
* Called with host_lock held and interrupts disabled.
*/
-static int fnic_queuecommand_lck(struct scsi_cmnd *sc)
+static int fnic_queuecommand_lck(struct scsi_cmnd *sc, uint32_t mqtag, uint16_t hwq)
{
void (*done)(struct scsi_cmnd *) = scsi_done;
- const int tag = scsi_cmd_to_rq(sc)->tag;
struct fc_lport *lp = shost_priv(sc->device->host);
struct fc_rport *rport;
struct fnic_io_req *io_req = NULL;
struct fnic *fnic = lport_priv(lp);
struct fnic_stats *fnic_stats = &fnic->fnic_stats;
struct vnic_wq_copy *wq;
- int ret;
+ int ret = 1;
u64 cmd_trace;
int sg_count = 0;
unsigned long flags = 0;
unsigned long ptr;
- spinlock_t *io_lock = NULL;
int io_lock_acquired = 0;
struct fc_rport_libfc_priv *rp;

@@ -514,7 +495,7 @@ static int fnic_queuecommand_lck(struct scsi_cmnd *sc)
sg_count = scsi_dma_map(sc);
if (sg_count < 0) {
FNIC_TRACE(fnic_queuecommand, sc->device->host->host_no,
- tag, sc, 0, sc->cmnd[0], sg_count, fnic_priv(sc)->state);
+ mqtag, sc, 0, sc->cmnd[0], sg_count, fnic_priv(sc)->state);
mempool_free(io_req, fnic->io_req_pool);
goto out;
}
@@ -549,11 +530,9 @@ static int fnic_queuecommand_lck(struct scsi_cmnd *sc)
}

/*
- * Will acquire lock defore setting to IO initialized.
+ * Will acquire lock before setting to IO initialized.
*/
-
- io_lock = fnic_io_lock_hash(fnic, sc);
- spin_lock_irqsave(io_lock, flags);
+ spin_lock_irqsave(&fnic->wq_copy_lock[hwq], flags);

/* initialize rest of io_req */
io_lock_acquired = 1;
@@ -562,21 +541,34 @@ static int fnic_queuecommand_lck(struct scsi_cmnd *sc)
fnic_priv(sc)->state = FNIC_IOREQ_CMD_PENDING;
fnic_priv(sc)->io_req = io_req;
fnic_priv(sc)->flags |= FNIC_IO_INITIALIZED;
+ io_req->sc = sc;
+
+ if (fnic->sw_copy_wq[hwq].io_req_table[blk_mq_unique_tag_to_tag(mqtag)] != NULL) {
+ WARN(1, "fnic<%d>: %s: hwq: %d tag 0x%x already exists\n",
+ fnic->fnic_num, __func__, hwq, blk_mq_unique_tag_to_tag(mqtag));
+ return SCSI_MLQUEUE_HOST_BUSY;
+ }
+
+ fnic->sw_copy_wq[hwq].io_req_table[blk_mq_unique_tag_to_tag(mqtag)] = io_req;
+ io_req->tag = mqtag;

/* create copy wq desc and enqueue it */
- wq = &fnic->hw_copy_wq[0];
- ret = fnic_queue_wq_copy_desc(fnic, wq, io_req, sc, sg_count);
+ wq = &fnic->hw_copy_wq[hwq];
+ atomic64_inc(&fnic_stats->io_stats.ios[hwq]);
+ ret = fnic_queue_wq_copy_desc(fnic, wq, io_req, sc, sg_count, mqtag, hwq);
if (ret) {
/*
* In case another thread cancelled the request,
* refetch the pointer under the lock.
*/
FNIC_TRACE(fnic_queuecommand, sc->device->host->host_no,
- tag, sc, 0, 0, 0, fnic_flags_and_state(sc));
+ mqtag, sc, 0, 0, 0, fnic_flags_and_state(sc));
io_req = fnic_priv(sc)->io_req;
fnic_priv(sc)->io_req = NULL;
+ if (io_req)
+ fnic->sw_copy_wq[hwq].io_req_table[blk_mq_unique_tag_to_tag(mqtag)] = NULL;
fnic_priv(sc)->state = FNIC_IOREQ_CMD_COMPLETE;
- spin_unlock_irqrestore(io_lock, flags);
+ spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags);
if (io_req) {
fnic_release_ioreq_buf(fnic, io_req, sc);
mempool_free(io_req, fnic->io_req_pool);
@@ -601,18 +593,43 @@ static int fnic_queuecommand_lck(struct scsi_cmnd *sc)
sc->cmnd[5]);

FNIC_TRACE(fnic_queuecommand, sc->device->host->host_no,
- tag, sc, io_req, sg_count, cmd_trace,
+ mqtag, sc, io_req, sg_count, cmd_trace,
fnic_flags_and_state(sc));

/* if only we issued IO, will we have the io lock */
if (io_lock_acquired)
- spin_unlock_irqrestore(io_lock, flags);
+ spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags);

atomic_dec(&fnic->in_flight);
return ret;
}

-DEF_SCSI_QCMD(fnic_queuecommand)
+int fnic_queuecommand(struct Scsi_Host *shost, struct scsi_cmnd *sc)
+{
+ struct request *const rq = scsi_cmd_to_rq(sc);
+ int rc;
+ struct fc_lport *lp = shost_priv(sc->device->host);
+ struct fnic *fnic = lport_priv(lp);
+ uint32_t mqtag = 0;
+ int tag = 0;
+ uint16_t hwq = 0;
+
+ mqtag = blk_mq_unique_tag(rq);
+ hwq = blk_mq_unique_tag_to_hwq(mqtag);
+ tag = blk_mq_unique_tag_to_tag(mqtag);
+
+ if (tag >= fnic->fnic_max_tag_id) {
+ FNIC_SCSI_DBG(KERN_ERR, fnic->lport->host,
+ "fnic<%d>: %s: Out of range tag: 0x%x\n",
+ fnic->fnic_num, __func__, tag);
+ sc->result = DID_ERROR << 16;
+ scsi_done(sc);
+ return 0;
+ }
+
+ rc = fnic_queuecommand_lck(sc, mqtag, hwq);
+ return rc;
+}

/*
* fnic_fcpio_fw_reset_cmpl_handler
@@ -789,20 +806,21 @@ static inline void fnic_fcpio_ack_handler(struct fnic *fnic,
u16 request_out = desc->u.ack.request_out;
unsigned long flags;
u64 *ox_id_tag = (u64 *)(void *)desc;
+ unsigned int wq_index = cq_index;

/* mark the ack state */
- wq = &fnic->hw_copy_wq[cq_index - fnic->raw_wq_count - fnic->rq_count];
- spin_lock_irqsave(&fnic->wq_copy_lock[0], flags);
+ wq = &fnic->hw_copy_wq[cq_index];
+ spin_lock_irqsave(&fnic->wq_copy_lock[wq_index], flags);

fnic->fnic_stats.misc_stats.last_ack_time = jiffies;
if (is_ack_index_in_range(wq, request_out)) {
- fnic->fw_ack_index[0] = request_out;
- fnic->fw_ack_recd[0] = 1;
+ fnic->fw_ack_index[wq_index] = request_out;
+ fnic->fw_ack_recd[wq_index] = 1;
} else
atomic64_inc(
&fnic->fnic_stats.misc_stats.ack_index_out_of_range);

- spin_unlock_irqrestore(&fnic->wq_copy_lock[0], flags);
+ spin_unlock_irqrestore(&fnic->wq_copy_lock[wq_index], flags);
FNIC_TRACE(fnic_fcpio_ack_handler,
fnic->lport->host->host_no, 0, 0, ox_id_tag[2], ox_id_tag[3],
ox_id_tag[4], ox_id_tag[5]);
@@ -812,12 +830,12 @@ static inline void fnic_fcpio_ack_handler(struct fnic *fnic,
* fnic_fcpio_icmnd_cmpl_handler
* Routine to handle icmnd completions
*/
-static void fnic_fcpio_icmnd_cmpl_handler(struct fnic *fnic,
+static void fnic_fcpio_icmnd_cmpl_handler(struct fnic *fnic, unsigned int cq_index,
struct fcpio_fw_req *desc)
{
u8 type;
u8 hdr_status;
- struct fcpio_tag tag;
+ struct fcpio_tag ftag;
u32 id;
u64 xfer_len = 0;
struct fcpio_icmnd_cmpl *icmnd_cmpl;
@@ -825,27 +843,49 @@ static void fnic_fcpio_icmnd_cmpl_handler(struct fnic *fnic,
struct scsi_cmnd *sc;
struct fnic_stats *fnic_stats = &fnic->fnic_stats;
unsigned long flags;
- spinlock_t *io_lock;
u64 cmd_trace;
unsigned long start_time;
unsigned long io_duration_time;
+ unsigned int hwq = 0;
+ unsigned int mqtag = 0;
+ unsigned int tag = 0;

/* Decode the cmpl description to get the io_req id */
- fcpio_header_dec(&desc->hdr, &type, &hdr_status, &tag);
- fcpio_tag_id_dec(&tag, &id);
+ fcpio_header_dec(&desc->hdr, &type, &hdr_status, &ftag);
+ fcpio_tag_id_dec(&ftag, &id);
icmnd_cmpl = &desc->u.icmnd_cmpl;

- if (id >= fnic->fnic_max_tag_id) {
+ mqtag = id;
+ tag = blk_mq_unique_tag_to_tag(mqtag);
+ hwq = blk_mq_unique_tag_to_hwq(mqtag);
+
+ if (hwq != cq_index) {
+ shost_printk(KERN_ERR, fnic->lport->host,
+ "fnic<%d>: %s: %d: hwq: %d mqtag: 0x%x tag: 0x%x cq index: %d ",
+ fnic->fnic_num, __func__, __LINE__, hwq, mqtag, tag, cq_index);
+ shost_printk(KERN_ERR, fnic->lport->host,
+ "fnic<%d>: %s: %d: hdr status: %s icmnd completion on the wrong queue\n",
+ fnic->fnic_num, __func__, __LINE__,
+ fnic_fcpio_status_to_str(hdr_status));
+ }
+
+ if (tag >= fnic->fnic_max_tag_id) {
shost_printk(KERN_ERR, fnic->lport->host,
- "Tag out of range tag %x hdr status = %s\n",
- id, fnic_fcpio_status_to_str(hdr_status));
+ "fnic<%d>: %s: %d: hwq: %d mqtag: 0x%x tag: 0x%x cq index: %d ",
+ fnic->fnic_num, __func__, __LINE__, hwq, mqtag, tag, cq_index);
+ shost_printk(KERN_ERR, fnic->lport->host,
+ "fnic<%d>: %s: %d: hdr status: %s Out of range tag\n",
+ fnic->fnic_num, __func__, __LINE__,
+ fnic_fcpio_status_to_str(hdr_status));
return;
}
+ spin_lock_irqsave(&fnic->wq_copy_lock[hwq], flags);

sc = scsi_host_find_tag(fnic->lport->host, id);
WARN_ON_ONCE(!sc);
if (!sc) {
atomic64_inc(&fnic_stats->io_stats.sc_null);
+ spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags);
shost_printk(KERN_ERR, fnic->lport->host,
"icmnd_cmpl sc is null - "
"hdr status = %s tag = 0x%x desc = 0x%p\n",
@@ -861,14 +901,19 @@ static void fnic_fcpio_icmnd_cmpl_handler(struct fnic *fnic,
return;
}

- io_lock = fnic_io_lock_hash(fnic, sc);
- spin_lock_irqsave(io_lock, flags);
io_req = fnic_priv(sc)->io_req;
+ if (fnic->sw_copy_wq[hwq].io_req_table[tag] != io_req) {
+ WARN(1, "%s: %d: hwq: %d mqtag: 0x%x tag: 0x%x io_req tag mismatch\n",
+ __func__, __LINE__, hwq, mqtag, tag);
+ spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags);
+ return;
+ }
+
WARN_ON_ONCE(!io_req);
if (!io_req) {
atomic64_inc(&fnic_stats->io_stats.ioreq_null);
fnic_priv(sc)->flags |= FNIC_IO_REQ_NULL;
- spin_unlock_irqrestore(io_lock, flags);
+ spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags);
shost_printk(KERN_ERR, fnic->lport->host,
"icmnd_cmpl io_req is null - "
"hdr status = %s tag = 0x%x sc 0x%p\n",
@@ -892,7 +937,7 @@ static void fnic_fcpio_icmnd_cmpl_handler(struct fnic *fnic,
*/
fnic_priv(sc)->flags |= FNIC_IO_DONE;
fnic_priv(sc)->flags |= FNIC_IO_ABTS_PENDING;
- spin_unlock_irqrestore(io_lock, flags);
+ spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags);
if(FCPIO_ABORTED == hdr_status)
fnic_priv(sc)->flags |= FNIC_IO_ABORTED;

@@ -980,7 +1025,11 @@ static void fnic_fcpio_icmnd_cmpl_handler(struct fnic *fnic,

/* Break link with the SCSI command */
fnic_priv(sc)->io_req = NULL;
+ io_req->sc = NULL;
fnic_priv(sc)->flags |= FNIC_IO_DONE;
+ fnic->sw_copy_wq[hwq].io_req_table[tag] = NULL;
+
+ spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags);

if (hdr_status != FCPIO_SUCCESS) {
atomic64_inc(&fnic_stats->io_stats.io_failures);
@@ -1014,7 +1063,6 @@ static void fnic_fcpio_icmnd_cmpl_handler(struct fnic *fnic,

/* Call SCSI completion function to complete the IO */
scsi_done(sc);
- spin_unlock_irqrestore(io_lock, flags);

mempool_free(io_req, fnic->io_req_pool);

@@ -1051,12 +1099,12 @@ static void fnic_fcpio_icmnd_cmpl_handler(struct fnic *fnic,
/* fnic_fcpio_itmf_cmpl_handler
* Routine to handle itmf completions
*/
-static void fnic_fcpio_itmf_cmpl_handler(struct fnic *fnic,
+static void fnic_fcpio_itmf_cmpl_handler(struct fnic *fnic, unsigned int cq_index,
struct fcpio_fw_req *desc)
{
u8 type;
u8 hdr_status;
- struct fcpio_tag tag;
+ struct fcpio_tag ftag;
u32 id;
struct scsi_cmnd *sc;
struct fnic_io_req *io_req;
@@ -1065,35 +1113,76 @@ static void fnic_fcpio_itmf_cmpl_handler(struct fnic *fnic,
struct terminate_stats *term_stats = &fnic->fnic_stats.term_stats;
struct misc_stats *misc_stats = &fnic->fnic_stats.misc_stats;
unsigned long flags;
- spinlock_t *io_lock;
unsigned long start_time;
+ unsigned int hwq = cq_index;
+ unsigned int mqtag;
+ unsigned int tag;

- fcpio_header_dec(&desc->hdr, &type, &hdr_status, &tag);
- fcpio_tag_id_dec(&tag, &id);
+ fcpio_header_dec(&desc->hdr, &type, &hdr_status, &ftag);
+ fcpio_tag_id_dec(&ftag, &id);
+
+ mqtag = id & FNIC_TAG_MASK;
+ tag = blk_mq_unique_tag_to_tag(id & FNIC_TAG_MASK);
+ hwq = blk_mq_unique_tag_to_hwq(id & FNIC_TAG_MASK);
+
+ if (hwq != cq_index) {
+ shost_printk(KERN_ERR, fnic->lport->host,
+ "%s: %d: hwq: %d mqtag: 0x%x tag: 0x%x cq index: %d ",
+ __func__, __LINE__, hwq, mqtag, tag, cq_index);
+ shost_printk(KERN_ERR, fnic->lport->host,
+ "%s: %d: hdr status: %s ITMF completion on the wrong queue\n",
+ __func__, __LINE__,
+ fnic_fcpio_status_to_str(hdr_status));
+ }

- if ((id & FNIC_TAG_MASK) >= fnic->fnic_max_tag_id) {
+ if (tag > fnic->fnic_max_tag_id) {
shost_printk(KERN_ERR, fnic->lport->host,
- "Tag out of range tag %x hdr status = %s\n",
- id, fnic_fcpio_status_to_str(hdr_status));
+ "%s: %d: hwq: %d mqtag: 0x%x tag: 0x%x cq index: %d ",
+ __func__, __LINE__, hwq, mqtag, tag, cq_index);
+ shost_printk(KERN_ERR, fnic->lport->host,
+ "%s: %d: hdr status: %s Tag out of range\n",
+ __func__, __LINE__,
+ fnic_fcpio_status_to_str(hdr_status));
+ return;
+ } else if ((tag == fnic->fnic_max_tag_id) && !(id & FNIC_TAG_DEV_RST)) {
+ shost_printk(KERN_ERR, fnic->lport->host,
+ "%s: %d: hwq: %d mqtag: 0x%x tag: 0x%x cq index: %d ",
+ __func__, __LINE__, hwq, mqtag, tag, cq_index);
+ shost_printk(KERN_ERR, fnic->lport->host,
+ "%s: %d: hdr status: %s Tag out of range\n",
+ __func__, __LINE__,
+ fnic_fcpio_status_to_str(hdr_status));
return;
}

- sc = scsi_host_find_tag(fnic->lport->host, id & FNIC_TAG_MASK);
+ spin_lock_irqsave(&fnic->wq_copy_lock[hwq], flags);
+
+ /* If it is sg3utils allocated SC then tag_id
+ * is max_tag_id and SC is retrieved from io_req
+ */
+ if ((mqtag == fnic->fnic_max_tag_id) && (id & FNIC_TAG_DEV_RST)) {
+ io_req = fnic->sw_copy_wq[hwq].io_req_table[tag];
+ if (io_req)
+ sc = io_req->sc;
+ } else {
+ sc = scsi_host_find_tag(fnic->lport->host, id & FNIC_TAG_MASK);
+ }
+
WARN_ON_ONCE(!sc);
if (!sc) {
atomic64_inc(&fnic_stats->io_stats.sc_null);
+ spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags);
shost_printk(KERN_ERR, fnic->lport->host,
"itmf_cmpl sc is null - hdr status = %s tag = 0x%x\n",
fnic_fcpio_status_to_str(hdr_status), id);
return;
}
- io_lock = fnic_io_lock_hash(fnic, sc);
- spin_lock_irqsave(io_lock, flags);
+
io_req = fnic_priv(sc)->io_req;
WARN_ON_ONCE(!io_req);
if (!io_req) {
atomic64_inc(&fnic_stats->io_stats.ioreq_null);
- spin_unlock_irqrestore(io_lock, flags);
+ spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags);
fnic_priv(sc)->flags |= FNIC_IO_ABT_TERM_REQ_NULL;
shost_printk(KERN_ERR, fnic->lport->host,
"itmf_cmpl io_req is null - "
@@ -1114,7 +1203,7 @@ static void fnic_fcpio_itmf_cmpl_handler(struct fnic *fnic,
fnic_priv(sc)->flags |= FNIC_DEV_RST_DONE;
if (io_req->abts_done)
complete(io_req->abts_done);
- spin_unlock_irqrestore(io_lock, flags);
+ spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags);
} else if (id & FNIC_TAG_ABORT) {
/* Completion of abort cmd */
switch (hdr_status) {
@@ -1149,7 +1238,7 @@ static void fnic_fcpio_itmf_cmpl_handler(struct fnic *fnic,
}
if (fnic_priv(sc)->state != FNIC_IOREQ_ABTS_PENDING) {
/* This is a late completion. Ignore it */
- spin_unlock_irqrestore(io_lock, flags);
+ spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags);
return;
}

@@ -1175,14 +1264,14 @@ static void fnic_fcpio_itmf_cmpl_handler(struct fnic *fnic,
*/
if (io_req->abts_done) {
complete(io_req->abts_done);
- spin_unlock_irqrestore(io_lock, flags);
+ spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags);
} else {
FNIC_SCSI_DBG(KERN_DEBUG, fnic->lport->host,
"abts cmpl, completing IO\n");
fnic_priv(sc)->io_req = NULL;
sc->result = (DID_ERROR << 16);
-
- spin_unlock_irqrestore(io_lock, flags);
+ fnic->sw_copy_wq[hwq].io_req_table[tag] = NULL;
+ spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags);

fnic_release_ioreq_buf(fnic, io_req, sc);
mempool_free(io_req, fnic->io_req_pool);
@@ -1208,7 +1297,7 @@ static void fnic_fcpio_itmf_cmpl_handler(struct fnic *fnic,
/* Completion of device reset */
fnic_priv(sc)->lr_status = hdr_status;
if (fnic_priv(sc)->state == FNIC_IOREQ_ABTS_PENDING) {
- spin_unlock_irqrestore(io_lock, flags);
+ spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags);
fnic_priv(sc)->flags |= FNIC_DEV_RST_ABTS_PENDING;
FNIC_TRACE(fnic_fcpio_itmf_cmpl_handler,
sc->device->host->host_no, id, sc,
@@ -1223,7 +1312,7 @@ static void fnic_fcpio_itmf_cmpl_handler(struct fnic *fnic,
}
if (fnic_priv(sc)->flags & FNIC_DEV_RST_TIMED_OUT) {
/* Need to wait for terminate completion */
- spin_unlock_irqrestore(io_lock, flags);
+ spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags);
FNIC_TRACE(fnic_fcpio_itmf_cmpl_handler,
sc->device->host->host_no, id, sc,
jiffies_to_msecs(jiffies - start_time),
@@ -1243,13 +1332,13 @@ static void fnic_fcpio_itmf_cmpl_handler(struct fnic *fnic,
fnic_fcpio_status_to_str(hdr_status));
if (io_req->dr_done)
complete(io_req->dr_done);
- spin_unlock_irqrestore(io_lock, flags);
+ spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags);

} else {
shost_printk(KERN_ERR, fnic->lport->host,
"Unexpected itmf io state %s tag %x\n",
fnic_ioreq_state_to_str(fnic_priv(sc)->state), id);
- spin_unlock_irqrestore(io_lock, flags);
+ spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags);
}

}
@@ -1276,17 +1365,19 @@ static int fnic_fcpio_cmpl_handler(struct vnic_dev *vdev,
break;
}

+ cq_index -= fnic->cpy_wq_base;
+
switch (desc->hdr.type) {
case FCPIO_ACK: /* fw copied copy wq desc to its queue */
fnic_fcpio_ack_handler(fnic, cq_index, desc);
break;

case FCPIO_ICMND_CMPL: /* fw completed a command */
- fnic_fcpio_icmnd_cmpl_handler(fnic, desc);
+ fnic_fcpio_icmnd_cmpl_handler(fnic, cq_index, desc);
break;

case FCPIO_ITMF_CMPL: /* fw completed itmf (abort cmd, lun reset)*/
- fnic_fcpio_itmf_cmpl_handler(fnic, desc);
+ fnic_fcpio_itmf_cmpl_handler(fnic, cq_index, desc);
break;

case FCPIO_FLOGI_REG_CMPL: /* fw completed flogi_reg */
@@ -1339,18 +1430,33 @@ int fnic_wq_copy_cmpl_handler(struct fnic *fnic, int copy_work_to_do, unsigned i

static bool fnic_cleanup_io_iter(struct scsi_cmnd *sc, void *data)
{
- const int tag = scsi_cmd_to_rq(sc)->tag;
+ struct request *const rq = scsi_cmd_to_rq(sc);
struct fnic *fnic = data;
struct fnic_io_req *io_req;
unsigned long flags = 0;
- spinlock_t *io_lock;
unsigned long start_time = 0;
struct fnic_stats *fnic_stats = &fnic->fnic_stats;
+ uint16_t hwq = 0;
+ int tag;
+ int mqtag;
+
+ mqtag = blk_mq_unique_tag(rq);
+ hwq = blk_mq_unique_tag_to_hwq(mqtag);
+ tag = blk_mq_unique_tag_to_tag(mqtag);

- io_lock = fnic_io_lock_tag(fnic, tag);
- spin_lock_irqsave(io_lock, flags);
+ spin_lock_irqsave(&fnic->wq_copy_lock[hwq], flags);
+
+ fnic->sw_copy_wq[hwq].io_req_table[tag] = NULL;

io_req = fnic_priv(sc)->io_req;
+ if (!io_req) {
+ spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags);
+ FNIC_SCSI_DBG(KERN_ERR, fnic->lport->host,
+ "fnic<%d>: %s: %d: hwq: %d mqtag: 0x%x tag: 0x%x flags: 0x%x No ioreq. Returning\n",
+ fnic->fnic_num, __func__, __LINE__, hwq, mqtag, tag, fnic_priv(sc)->flags);
+ return true;
+ }
+
if ((fnic_priv(sc)->flags & FNIC_DEVICE_RESET) &&
!(fnic_priv(sc)->flags & FNIC_DEV_RST_DONE)) {
/*
@@ -1362,20 +1468,16 @@ static bool fnic_cleanup_io_iter(struct scsi_cmnd *sc, void *data)
complete(io_req->dr_done);
else if (io_req && io_req->abts_done)
complete(io_req->abts_done);
- spin_unlock_irqrestore(io_lock, flags);
+ spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags);
return true;
} else if (fnic_priv(sc)->flags & FNIC_DEVICE_RESET) {
- spin_unlock_irqrestore(io_lock, flags);
+ spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags);
return true;
}
- if (!io_req) {
- spin_unlock_irqrestore(io_lock, flags);
- goto cleanup_scsi_cmd;
- }

fnic_priv(sc)->io_req = NULL;
-
- spin_unlock_irqrestore(io_lock, flags);
+ io_req->sc = NULL;
+ spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags);

/*
* If there is a scsi_cmnd associated with this io_req, then
@@ -1385,7 +1487,6 @@ static bool fnic_cleanup_io_iter(struct scsi_cmnd *sc, void *data)
fnic_release_ioreq_buf(fnic, io_req, sc);
mempool_free(io_req, fnic->io_req_pool);

-cleanup_scsi_cmd:
sc->result = DID_TRANSPORT_DISRUPTED << 16;
FNIC_SCSI_DBG(KERN_DEBUG, fnic->lport->host,
"fnic_cleanup_io: tag:0x%x : sc:0x%p duration = %lu DID_TRANSPORT_DISRUPTED\n",
@@ -1396,12 +1497,6 @@ static bool fnic_cleanup_io_iter(struct scsi_cmnd *sc, void *data)
else
atomic64_inc(&fnic_stats->io_stats.io_completions);

- /* Complete the command to SCSI */
- if (!(fnic_priv(sc)->flags & FNIC_IO_ISSUED))
- shost_printk(KERN_ERR, fnic->lport->host,
- "Calling done for IO not issued to fw: tag:0x%x sc:0x%p\n",
- tag, sc);
-
FNIC_TRACE(fnic_cleanup_io,
sc->device->host->host_no, tag, sc,
jiffies_to_msecs(jiffies - start_time),
@@ -1430,8 +1525,8 @@ void fnic_wq_copy_cleanup_handler(struct vnic_wq_copy *wq,
struct fnic_io_req *io_req;
struct scsi_cmnd *sc;
unsigned long flags;
- spinlock_t *io_lock;
unsigned long start_time = 0;
+ uint16_t hwq;

/* get the tag reference */
fcpio_tag_id_dec(&desc->hdr.tag, &id);
@@ -1444,8 +1539,8 @@ void fnic_wq_copy_cleanup_handler(struct vnic_wq_copy *wq,
if (!sc)
return;

- io_lock = fnic_io_lock_hash(fnic, sc);
- spin_lock_irqsave(io_lock, flags);
+ hwq = blk_mq_unique_tag_to_hwq(id);
+ spin_lock_irqsave(&fnic->wq_copy_lock[hwq], flags);

/* Get the IO context which this desc refers to */
io_req = fnic_priv(sc)->io_req;
@@ -1453,13 +1548,15 @@ void fnic_wq_copy_cleanup_handler(struct vnic_wq_copy *wq,
/* fnic interrupts are turned off by now */

if (!io_req) {
- spin_unlock_irqrestore(io_lock, flags);
+ spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags);
goto wq_copy_cleanup_scsi_cmd;
}

fnic_priv(sc)->io_req = NULL;
+ io_req->sc = NULL;
+ fnic->sw_copy_wq[hwq].io_req_table[blk_mq_unique_tag_to_tag(id)] = NULL;

- spin_unlock_irqrestore(io_lock, flags);
+ spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags);

start_time = io_req->start_time;
fnic_release_ioreq_buf(fnic, io_req, sc);
@@ -1483,9 +1580,10 @@ void fnic_wq_copy_cleanup_handler(struct vnic_wq_copy *wq,

static inline int fnic_queue_abort_io_req(struct fnic *fnic, int tag,
u32 task_req, u8 *fc_lun,
- struct fnic_io_req *io_req)
+ struct fnic_io_req *io_req,
+ unsigned int hwq)
{
- struct vnic_wq_copy *wq = &fnic->hw_copy_wq[0];
+ struct vnic_wq_copy *wq = &fnic->hw_copy_wq[hwq];
struct misc_stats *misc_stats = &fnic->fnic_stats.misc_stats;
unsigned long flags;

@@ -1498,13 +1596,13 @@ static inline int fnic_queue_abort_io_req(struct fnic *fnic, int tag,
atomic_inc(&fnic->in_flight);
spin_unlock_irqrestore(&fnic->fnic_lock, flags);

- spin_lock_irqsave(&fnic->wq_copy_lock[0], flags);
+ spin_lock_irqsave(&fnic->wq_copy_lock[hwq], flags);

- if (vnic_wq_copy_desc_avail(wq) <= fnic->wq_copy_desc_low[0])
- free_wq_copy_descs(fnic, wq);
+ if (vnic_wq_copy_desc_avail(wq) <= fnic->wq_copy_desc_low[hwq])
+ free_wq_copy_descs(fnic, wq, hwq);

if (!vnic_wq_copy_desc_avail(wq)) {
- spin_unlock_irqrestore(&fnic->wq_copy_lock[0], flags);
+ spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags);
atomic_dec(&fnic->in_flight);
FNIC_SCSI_DBG(KERN_DEBUG, fnic->lport->host,
"fnic_queue_abort_io_req: failure: no descriptors\n");
@@ -1521,7 +1619,7 @@ static inline int fnic_queue_abort_io_req(struct fnic *fnic, int tag,
atomic64_set(&fnic->fnic_stats.fw_stats.max_fw_reqs,
atomic64_read(&fnic->fnic_stats.fw_stats.active_fw_reqs));

- spin_unlock_irqrestore(&fnic->wq_copy_lock[0], flags);
+ spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags);
atomic_dec(&fnic->in_flight);

return 0;
@@ -1535,24 +1633,27 @@ struct fnic_rport_abort_io_iter_data {

static bool fnic_rport_abort_io_iter(struct scsi_cmnd *sc, void *data)
{
+ struct request *const rq = scsi_cmd_to_rq(sc);
struct fnic_rport_abort_io_iter_data *iter_data = data;
struct fnic *fnic = iter_data->fnic;
- int abt_tag = scsi_cmd_to_rq(sc)->tag;
+ int abt_tag = 0;
struct fnic_io_req *io_req;
- spinlock_t *io_lock;
unsigned long flags;
struct reset_stats *reset_stats = &fnic->fnic_stats.reset_stats;
struct terminate_stats *term_stats = &fnic->fnic_stats.term_stats;
struct scsi_lun fc_lun;
enum fnic_ioreq_state old_ioreq_state;
+ uint16_t hwq = 0;
+
+ abt_tag = blk_mq_unique_tag(rq);
+ hwq = blk_mq_unique_tag_to_hwq(abt_tag);

- io_lock = fnic_io_lock_tag(fnic, abt_tag);
- spin_lock_irqsave(io_lock, flags);
+ spin_lock_irqsave(&fnic->wq_copy_lock[hwq], flags);

io_req = fnic_priv(sc)->io_req;

if (!io_req || io_req->port_id != iter_data->port_id) {
- spin_unlock_irqrestore(io_lock, flags);
+ spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags);
return true;
}

@@ -1561,7 +1662,7 @@ static bool fnic_rport_abort_io_iter(struct scsi_cmnd *sc, void *data)
FNIC_SCSI_DBG(KERN_DEBUG, fnic->lport->host,
"fnic_rport_exch_reset dev rst not pending sc 0x%p\n",
sc);
- spin_unlock_irqrestore(io_lock, flags);
+ spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags);
return true;
}

@@ -1570,7 +1671,7 @@ static bool fnic_rport_abort_io_iter(struct scsi_cmnd *sc, void *data)
* belongs to rport that went away
*/
if (fnic_priv(sc)->state == FNIC_IOREQ_ABTS_PENDING) {
- spin_unlock_irqrestore(io_lock, flags);
+ spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags);
return true;
}
if (io_req->abts_done) {
@@ -1601,31 +1702,31 @@ static bool fnic_rport_abort_io_iter(struct scsi_cmnd *sc, void *data)
FNIC_SCSI_DBG(KERN_DEBUG, fnic->lport->host,
"fnic_rport_reset_exch: Issuing abts\n");

- spin_unlock_irqrestore(io_lock, flags);
+ spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags);

/* Now queue the abort command to firmware */
int_to_scsilun(sc->device->lun, &fc_lun);

if (fnic_queue_abort_io_req(fnic, abt_tag,
FCPIO_ITMF_ABT_TASK_TERM,
- fc_lun.scsi_lun, io_req)) {
+ fc_lun.scsi_lun, io_req, hwq)) {
/*
* Revert the cmd state back to old state, if
* it hasn't changed in between. This cmd will get
* aborted later by scsi_eh, or cleaned up during
* lun reset
*/
- spin_lock_irqsave(io_lock, flags);
+ spin_lock_irqsave(&fnic->wq_copy_lock[hwq], flags);
if (fnic_priv(sc)->state == FNIC_IOREQ_ABTS_PENDING)
fnic_priv(sc)->state = old_ioreq_state;
- spin_unlock_irqrestore(io_lock, flags);
+ spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags);
} else {
- spin_lock_irqsave(io_lock, flags);
+ spin_lock_irqsave(&fnic->wq_copy_lock[hwq], flags);
if (fnic_priv(sc)->flags & FNIC_DEVICE_RESET)
fnic_priv(sc)->flags |= FNIC_DEV_RST_TERM_ISSUED;
else
fnic_priv(sc)->flags |= FNIC_IO_INTERNAL_TERM_ISSUED;
- spin_unlock_irqrestore(io_lock, flags);
+ spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags);
atomic64_inc(&term_stats->terminates);
iter_data->term_cnt++;
}
@@ -1703,7 +1804,6 @@ int fnic_abort_cmd(struct scsi_cmnd *sc)
struct fnic *fnic;
struct fnic_io_req *io_req = NULL;
struct fc_rport *rport;
- spinlock_t *io_lock;
unsigned long flags;
unsigned long start_time = 0;
int ret = SUCCESS;
@@ -1713,8 +1813,10 @@ int fnic_abort_cmd(struct scsi_cmnd *sc)
struct abort_stats *abts_stats;
struct terminate_stats *term_stats;
enum fnic_ioreq_state old_ioreq_state;
- const int tag = rq->tag;
+ int mqtag;
unsigned long abt_issued_time;
+ uint16_t hwq = 0;
+
DECLARE_COMPLETION_ONSTACK(tm_done);

/* Wait for rport to unblock */
@@ -1724,23 +1826,25 @@ int fnic_abort_cmd(struct scsi_cmnd *sc)
lp = shost_priv(sc->device->host);

fnic = lport_priv(lp);
+
+ spin_lock_irqsave(&fnic->fnic_lock, flags);
fnic_stats = &fnic->fnic_stats;
abts_stats = &fnic->fnic_stats.abts_stats;
term_stats = &fnic->fnic_stats.term_stats;

rport = starget_to_rport(scsi_target(sc->device));
- FNIC_SCSI_DBG(KERN_DEBUG,
- fnic->lport->host,
- "Abort Cmd called FCID 0x%x, LUN 0x%llx TAG %x flags %x\n",
- rport->port_id, sc->device->lun, tag, fnic_priv(sc)->flags);
+ mqtag = blk_mq_unique_tag(rq);
+ hwq = blk_mq_unique_tag_to_hwq(mqtag);

fnic_priv(sc)->flags = FNIC_NO_FLAGS;

if (lp->state != LPORT_ST_READY || !(lp->link_up)) {
ret = FAILED;
+ spin_unlock_irqrestore(&fnic->fnic_lock, flags);
goto fnic_abort_cmd_end;
}

+ spin_unlock_irqrestore(&fnic->fnic_lock, flags);
/*
* Avoid a race between SCSI issuing the abort and the device
* completing the command.
@@ -1753,18 +1857,17 @@ int fnic_abort_cmd(struct scsi_cmnd *sc)
*
* .io_req will not be cleared except while holding io_req_lock.
*/
- io_lock = fnic_io_lock_hash(fnic, sc);
- spin_lock_irqsave(io_lock, flags);
+ spin_lock_irqsave(&fnic->wq_copy_lock[hwq], flags);
io_req = fnic_priv(sc)->io_req;
if (!io_req) {
- spin_unlock_irqrestore(io_lock, flags);
+ spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags);
goto fnic_abort_cmd_end;
}

io_req->abts_done = &tm_done;

if (fnic_priv(sc)->state == FNIC_IOREQ_ABTS_PENDING) {
- spin_unlock_irqrestore(io_lock, flags);
+ spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags);
goto wait_pending;
}

@@ -1796,7 +1899,7 @@ int fnic_abort_cmd(struct scsi_cmnd *sc)
fnic_priv(sc)->state = FNIC_IOREQ_ABTS_PENDING;
fnic_priv(sc)->abts_status = FCPIO_INVALID_CODE;

- spin_unlock_irqrestore(io_lock, flags);
+ spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags);

/*
* Check readiness of the remote port. If the path to remote
@@ -1813,15 +1916,15 @@ int fnic_abort_cmd(struct scsi_cmnd *sc)
/* Now queue the abort command to firmware */
int_to_scsilun(sc->device->lun, &fc_lun);

- if (fnic_queue_abort_io_req(fnic, tag, task_req, fc_lun.scsi_lun,
- io_req)) {
- spin_lock_irqsave(io_lock, flags);
+ if (fnic_queue_abort_io_req(fnic, mqtag, task_req, fc_lun.scsi_lun,
+ io_req, hwq)) {
+ spin_lock_irqsave(&fnic->wq_copy_lock[hwq], flags);
if (fnic_priv(sc)->state == FNIC_IOREQ_ABTS_PENDING)
fnic_priv(sc)->state = old_ioreq_state;
io_req = fnic_priv(sc)->io_req;
if (io_req)
io_req->abts_done = NULL;
- spin_unlock_irqrestore(io_lock, flags);
+ spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags);
ret = FAILED;
goto fnic_abort_cmd_end;
}
@@ -1845,12 +1948,12 @@ int fnic_abort_cmd(struct scsi_cmnd *sc)
fnic->config.ed_tov));

/* Check the abort status */
- spin_lock_irqsave(io_lock, flags);
+ spin_lock_irqsave(&fnic->wq_copy_lock[hwq], flags);

io_req = fnic_priv(sc)->io_req;
if (!io_req) {
atomic64_inc(&fnic_stats->io_stats.ioreq_null);
- spin_unlock_irqrestore(io_lock, flags);
+ spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags);
fnic_priv(sc)->flags |= FNIC_IO_ABT_TERM_REQ_NULL;
ret = FAILED;
goto fnic_abort_cmd_end;
@@ -1859,7 +1962,7 @@ int fnic_abort_cmd(struct scsi_cmnd *sc)

/* fw did not complete abort, timed out */
if (fnic_priv(sc)->abts_status == FCPIO_INVALID_CODE) {
- spin_unlock_irqrestore(io_lock, flags);
+ spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags);
if (task_req == FCPIO_ITMF_ABT_TASK) {
atomic64_inc(&abts_stats->abort_drv_timeouts);
} else {
@@ -1873,7 +1976,7 @@ int fnic_abort_cmd(struct scsi_cmnd *sc)
/* IO out of order */

if (!(fnic_priv(sc)->flags & (FNIC_IO_ABORTED | FNIC_IO_DONE))) {
- spin_unlock_irqrestore(io_lock, flags);
+ spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags);
FNIC_SCSI_DBG(KERN_DEBUG, fnic->lport->host,
"Issuing Host reset due to out of order IO\n");

@@ -1889,15 +1992,18 @@ int fnic_abort_cmd(struct scsi_cmnd *sc)
* free the io_req if successful. If abort fails,
* Device reset will clean the I/O.
*/
- if (fnic_priv(sc)->abts_status == FCPIO_SUCCESS) {
+ if (fnic_priv(sc)->abts_status == FCPIO_SUCCESS ||
+ (fnic_priv(sc)->abts_status == FCPIO_ABORTED)) {
fnic_priv(sc)->io_req = NULL;
+ io_req->sc = NULL;
} else {
ret = FAILED;
- spin_unlock_irqrestore(io_lock, flags);
+ spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags);
goto fnic_abort_cmd_end;
}

- spin_unlock_irqrestore(io_lock, flags);
+ fnic->sw_copy_wq[hwq].io_req_table[blk_mq_unique_tag_to_tag(mqtag)] = NULL;
+ spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags);

fnic_release_ioreq_buf(fnic, io_req, sc);
mempool_free(io_req, fnic->io_req_pool);
@@ -1912,7 +2018,7 @@ int fnic_abort_cmd(struct scsi_cmnd *sc)
atomic64_inc(&fnic_stats->io_stats.io_completions);

fnic_abort_cmd_end:
- FNIC_TRACE(fnic_abort_cmd, sc->device->host->host_no, tag, sc,
+ FNIC_TRACE(fnic_abort_cmd, sc->device->host->host_no, mqtag, sc,
jiffies_to_msecs(jiffies - start_time),
0, ((u64)sc->cmnd[0] << 32 |
(u64)sc->cmnd[2] << 24 | (u64)sc->cmnd[3] << 16 |
@@ -1930,25 +2036,31 @@ static inline int fnic_queue_dr_io_req(struct fnic *fnic,
struct scsi_cmnd *sc,
struct fnic_io_req *io_req)
{
- struct vnic_wq_copy *wq = &fnic->hw_copy_wq[0];
+ struct vnic_wq_copy *wq;
struct misc_stats *misc_stats = &fnic->fnic_stats.misc_stats;
struct scsi_lun fc_lun;
int ret = 0;
- unsigned long intr_flags;
+ unsigned long flags;
+ uint16_t hwq = 0;
+ uint32_t tag = 0;

- spin_lock_irqsave(&fnic->fnic_lock, intr_flags);
+ tag = io_req->tag;
+ hwq = blk_mq_unique_tag_to_hwq(tag);
+ wq = &fnic->hw_copy_wq[hwq];
+
+ spin_lock_irqsave(&fnic->fnic_lock, flags);
if (unlikely(fnic_chk_state_flags_locked(fnic,
FNIC_FLAGS_IO_BLOCKED))) {
- spin_unlock_irqrestore(&fnic->fnic_lock, intr_flags);
+ spin_unlock_irqrestore(&fnic->fnic_lock, flags);
return FAILED;
} else
atomic_inc(&fnic->in_flight);
- spin_unlock_irqrestore(&fnic->fnic_lock, intr_flags);
+ spin_unlock_irqrestore(&fnic->fnic_lock, flags);

- spin_lock_irqsave(&fnic->wq_copy_lock[0], intr_flags);
+ spin_lock_irqsave(&fnic->wq_copy_lock[hwq], flags);

- if (vnic_wq_copy_desc_avail(wq) <= fnic->wq_copy_desc_low[0])
- free_wq_copy_descs(fnic, wq);
+ if (vnic_wq_copy_desc_avail(wq) <= fnic->wq_copy_desc_low[hwq])
+ free_wq_copy_descs(fnic, wq, hwq);

if (!vnic_wq_copy_desc_avail(wq)) {
FNIC_SCSI_DBG(KERN_DEBUG, fnic->lport->host,
@@ -1973,7 +2085,7 @@ static inline int fnic_queue_dr_io_req(struct fnic *fnic,
atomic64_read(&fnic->fnic_stats.fw_stats.active_fw_reqs));

lr_io_req_end:
- spin_unlock_irqrestore(&fnic->wq_copy_lock[0], intr_flags);
+ spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags);
atomic_dec(&fnic->in_flight);

return ret;
@@ -1988,12 +2100,13 @@ struct fnic_pending_aborts_iter_data {

static bool fnic_pending_aborts_iter(struct scsi_cmnd *sc, void *data)
{
+ struct request *const rq = scsi_cmd_to_rq(sc);
struct fnic_pending_aborts_iter_data *iter_data = data;
struct fnic *fnic = iter_data->fnic;
struct scsi_device *lun_dev = iter_data->lun_dev;
- int abt_tag = scsi_cmd_to_rq(sc)->tag;
+ unsigned long abt_tag = 0;
+ uint16_t hwq = 0;
struct fnic_io_req *io_req;
- spinlock_t *io_lock;
unsigned long flags;
struct scsi_lun fc_lun;
DECLARE_COMPLETION_ONSTACK(tm_done);
@@ -2002,11 +2115,13 @@ static bool fnic_pending_aborts_iter(struct scsi_cmnd *sc, void *data)
if (sc == iter_data->lr_sc || sc->device != lun_dev)
return true;

- io_lock = fnic_io_lock_tag(fnic, abt_tag);
- spin_lock_irqsave(io_lock, flags);
+ abt_tag = blk_mq_unique_tag(rq);
+ hwq = blk_mq_unique_tag_to_hwq(abt_tag);
+
+ spin_lock_irqsave(&fnic->wq_copy_lock[hwq], flags);
io_req = fnic_priv(sc)->io_req;
if (!io_req) {
- spin_unlock_irqrestore(io_lock, flags);
+ spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags);
return true;
}

@@ -2019,7 +2134,7 @@ static bool fnic_pending_aborts_iter(struct scsi_cmnd *sc, void *data)
fnic_ioreq_state_to_str(fnic_priv(sc)->state));

if (fnic_priv(sc)->state == FNIC_IOREQ_ABTS_PENDING) {
- spin_unlock_irqrestore(io_lock, flags);
+ spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags);
return true;
}
if ((fnic_priv(sc)->flags & FNIC_DEVICE_RESET) &&
@@ -2027,7 +2142,7 @@ static bool fnic_pending_aborts_iter(struct scsi_cmnd *sc, void *data)
FNIC_SCSI_DBG(KERN_INFO, fnic->lport->host,
"%s dev rst not pending sc 0x%p\n", __func__,
sc);
- spin_unlock_irqrestore(io_lock, flags);
+ spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags);
return true;
}

@@ -2048,35 +2163,34 @@ static bool fnic_pending_aborts_iter(struct scsi_cmnd *sc, void *data)
BUG_ON(io_req->abts_done);

if (fnic_priv(sc)->flags & FNIC_DEVICE_RESET) {
- abt_tag |= FNIC_TAG_DEV_RST;
FNIC_SCSI_DBG(KERN_INFO, fnic->lport->host,
"%s: dev rst sc 0x%p\n", __func__, sc);
}

fnic_priv(sc)->abts_status = FCPIO_INVALID_CODE;
io_req->abts_done = &tm_done;
- spin_unlock_irqrestore(io_lock, flags);
+ spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags);

/* Now queue the abort command to firmware */
int_to_scsilun(sc->device->lun, &fc_lun);

if (fnic_queue_abort_io_req(fnic, abt_tag,
FCPIO_ITMF_ABT_TASK_TERM,
- fc_lun.scsi_lun, io_req)) {
- spin_lock_irqsave(io_lock, flags);
+ fc_lun.scsi_lun, io_req, hwq)) {
+ spin_lock_irqsave(&fnic->wq_copy_lock[hwq], flags);
io_req = fnic_priv(sc)->io_req;
if (io_req)
io_req->abts_done = NULL;
if (fnic_priv(sc)->state == FNIC_IOREQ_ABTS_PENDING)
fnic_priv(sc)->state = old_ioreq_state;
- spin_unlock_irqrestore(io_lock, flags);
+ spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags);
iter_data->ret = FAILED;
return false;
} else {
- spin_lock_irqsave(io_lock, flags);
+ spin_lock_irqsave(&fnic->wq_copy_lock[hwq], flags);
if (fnic_priv(sc)->flags & FNIC_DEVICE_RESET)
fnic_priv(sc)->flags |= FNIC_DEV_RST_TERM_ISSUED;
- spin_unlock_irqrestore(io_lock, flags);
+ spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags);
}
fnic_priv(sc)->flags |= FNIC_IO_INTERNAL_TERM_ISSUED;

@@ -2084,10 +2198,10 @@ static bool fnic_pending_aborts_iter(struct scsi_cmnd *sc, void *data)
(fnic->config.ed_tov));

/* Recheck cmd state to check if it is now aborted */
- spin_lock_irqsave(io_lock, flags);
+ spin_lock_irqsave(&fnic->wq_copy_lock[hwq], flags);
io_req = fnic_priv(sc)->io_req;
if (!io_req) {
- spin_unlock_irqrestore(io_lock, flags);
+ spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags);
fnic_priv(sc)->flags |= FNIC_IO_ABT_TERM_REQ_NULL;
return true;
}
@@ -2096,7 +2210,7 @@ static bool fnic_pending_aborts_iter(struct scsi_cmnd *sc, void *data)

/* if abort is still pending with fw, fail */
if (fnic_priv(sc)->abts_status == FCPIO_INVALID_CODE) {
- spin_unlock_irqrestore(io_lock, flags);
+ spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags);
fnic_priv(sc)->flags |= FNIC_IO_ABT_TERM_DONE;
iter_data->ret = FAILED;
return false;
@@ -2104,9 +2218,11 @@ static bool fnic_pending_aborts_iter(struct scsi_cmnd *sc, void *data)
fnic_priv(sc)->state = FNIC_IOREQ_ABTS_COMPLETE;

/* original sc used for lr is handled by dev reset code */
- if (sc != iter_data->lr_sc)
+ if (sc != iter_data->lr_sc) {
fnic_priv(sc)->io_req = NULL;
- spin_unlock_irqrestore(io_lock, flags);
+ fnic->sw_copy_wq[hwq].io_req_table[blk_mq_unique_tag_to_tag(abt_tag)] = NULL;
+ }
+ spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags);

/* original sc used for lr is handled by dev reset code */
if (sc != iter_data->lr_sc) {
@@ -2178,15 +2294,15 @@ int fnic_device_reset(struct scsi_cmnd *sc)
struct fc_rport *rport;
int status;
int ret = FAILED;
- spinlock_t *io_lock;
unsigned long flags;
unsigned long start_time = 0;
struct scsi_lun fc_lun;
struct fnic_stats *fnic_stats;
struct reset_stats *reset_stats;
- int tag = rq->tag;
+ int mqtag = rq->tag;
DECLARE_COMPLETION_ONSTACK(tm_done);
bool new_sc = 0;
+ uint16_t hwq = 0;

/* Wait for rport to unblock */
fc_block_scsi_eh(sc);
@@ -2216,7 +2332,7 @@ int fnic_device_reset(struct scsi_cmnd *sc)

fnic_priv(sc)->flags = FNIC_DEVICE_RESET;

- if (unlikely(tag < 0)) {
+ if (unlikely(mqtag < 0)) {
/*
* For device reset issued through sg3utils, we let
* only one LUN_RESET to go through and use a special
@@ -2225,11 +2341,14 @@ int fnic_device_reset(struct scsi_cmnd *sc)
* allocated by mid layer.
*/
mutex_lock(&fnic->sgreset_mutex);
- tag = fnic->fnic_max_tag_id;
+ mqtag = fnic->fnic_max_tag_id;
new_sc = 1;
+ } else {
+ mqtag = blk_mq_unique_tag(rq);
+ hwq = blk_mq_unique_tag_to_hwq(mqtag);
}
- io_lock = fnic_io_lock_hash(fnic, sc);
- spin_lock_irqsave(io_lock, flags);
+
+ spin_lock_irqsave(&fnic->wq_copy_lock[hwq], flags);
io_req = fnic_priv(sc)->io_req;

/*
@@ -2239,34 +2358,43 @@ int fnic_device_reset(struct scsi_cmnd *sc)
if (!io_req) {
io_req = mempool_alloc(fnic->io_req_pool, GFP_ATOMIC);
if (!io_req) {
- spin_unlock_irqrestore(io_lock, flags);
+ spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags);
goto fnic_device_reset_end;
}
memset(io_req, 0, sizeof(*io_req));
io_req->port_id = rport->port_id;
+ io_req->tag = mqtag;
fnic_priv(sc)->io_req = io_req;
+ io_req->sc = sc;
+
+ if (fnic->sw_copy_wq[hwq].io_req_table[blk_mq_unique_tag_to_tag(mqtag)] != NULL)
+ WARN(1, "fnic<%d>: %s: tag 0x%x already exists\n",
+ fnic->fnic_num, __func__, blk_mq_unique_tag_to_tag(mqtag));
+
+ fnic->sw_copy_wq[hwq].io_req_table[blk_mq_unique_tag_to_tag(mqtag)] =
+ io_req;
}
io_req->dr_done = &tm_done;
fnic_priv(sc)->state = FNIC_IOREQ_CMD_PENDING;
fnic_priv(sc)->lr_status = FCPIO_INVALID_CODE;
- spin_unlock_irqrestore(io_lock, flags);
+ spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags);

- FNIC_SCSI_DBG(KERN_DEBUG, fnic->lport->host, "TAG %x\n", tag);
+ FNIC_SCSI_DBG(KERN_DEBUG, fnic->lport->host, "TAG %x\n", mqtag);

/*
* issue the device reset, if enqueue failed, clean up the ioreq
* and break assoc with scsi cmd
*/
if (fnic_queue_dr_io_req(fnic, sc, io_req)) {
- spin_lock_irqsave(io_lock, flags);
+ spin_lock_irqsave(&fnic->wq_copy_lock[hwq], flags);
io_req = fnic_priv(sc)->io_req;
if (io_req)
io_req->dr_done = NULL;
goto fnic_device_reset_clean;
}
- spin_lock_irqsave(io_lock, flags);
+ spin_lock_irqsave(&fnic->wq_copy_lock[hwq], flags);
fnic_priv(sc)->flags |= FNIC_DEV_RST_ISSUED;
- spin_unlock_irqrestore(io_lock, flags);
+ spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags);

/*
* Wait on the local completion for LUN reset. The io_req may be
@@ -2275,12 +2403,12 @@ int fnic_device_reset(struct scsi_cmnd *sc)
wait_for_completion_timeout(&tm_done,
msecs_to_jiffies(FNIC_LUN_RESET_TIMEOUT));

- spin_lock_irqsave(io_lock, flags);
+ spin_lock_irqsave(&fnic->wq_copy_lock[hwq], flags);
io_req = fnic_priv(sc)->io_req;
if (!io_req) {
- spin_unlock_irqrestore(io_lock, flags);
+ spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags);
FNIC_SCSI_DBG(KERN_DEBUG, fnic->lport->host,
- "io_req is null tag 0x%x sc 0x%p\n", tag, sc);
+ "io_req is null mqtag 0x%x sc 0x%p\n", mqtag, sc);
goto fnic_device_reset_end;
}
io_req->dr_done = NULL;
@@ -2296,41 +2424,41 @@ int fnic_device_reset(struct scsi_cmnd *sc)
FNIC_SCSI_DBG(KERN_DEBUG, fnic->lport->host,
"Device reset timed out\n");
fnic_priv(sc)->flags |= FNIC_DEV_RST_TIMED_OUT;
- spin_unlock_irqrestore(io_lock, flags);
+ spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags);
int_to_scsilun(sc->device->lun, &fc_lun);
/*
* Issue abort and terminate on device reset request.
* If q'ing of terminate fails, retry it after a delay.
*/
while (1) {
- spin_lock_irqsave(io_lock, flags);
+ spin_lock_irqsave(&fnic->wq_copy_lock[hwq], flags);
if (fnic_priv(sc)->flags & FNIC_DEV_RST_TERM_ISSUED) {
- spin_unlock_irqrestore(io_lock, flags);
+ spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags);
break;
}
- spin_unlock_irqrestore(io_lock, flags);
+ spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags);
if (fnic_queue_abort_io_req(fnic,
- tag | FNIC_TAG_DEV_RST,
+ mqtag | FNIC_TAG_DEV_RST,
FCPIO_ITMF_ABT_TASK_TERM,
- fc_lun.scsi_lun, io_req)) {
+ fc_lun.scsi_lun, io_req, hwq)) {
wait_for_completion_timeout(&tm_done,
msecs_to_jiffies(FNIC_ABT_TERM_DELAY_TIMEOUT));
} else {
- spin_lock_irqsave(io_lock, flags);
+ spin_lock_irqsave(&fnic->wq_copy_lock[hwq], flags);
fnic_priv(sc)->flags |= FNIC_DEV_RST_TERM_ISSUED;
fnic_priv(sc)->state = FNIC_IOREQ_ABTS_PENDING;
io_req->abts_done = &tm_done;
- spin_unlock_irqrestore(io_lock, flags);
+ spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags);
FNIC_SCSI_DBG(KERN_DEBUG, fnic->lport->host,
- "Abort and terminate issued on Device reset "
- "tag 0x%x sc 0x%p\n", tag, sc);
+ "Abort and terminate issued on Device reset mqtag 0x%x sc 0x%p\n",
+ mqtag, sc);
break;
}
}
while (1) {
- spin_lock_irqsave(io_lock, flags);
+ spin_lock_irqsave(&fnic->wq_copy_lock[hwq], flags);
if (!(fnic_priv(sc)->flags & FNIC_DEV_RST_DONE)) {
- spin_unlock_irqrestore(io_lock, flags);
+ spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags);
wait_for_completion_timeout(&tm_done,
msecs_to_jiffies(FNIC_LUN_RESET_TIMEOUT));
break;
@@ -2341,12 +2469,12 @@ int fnic_device_reset(struct scsi_cmnd *sc)
}
}
} else {
- spin_unlock_irqrestore(io_lock, flags);
+ spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags);
}

/* Completed, but not successful, clean up the io_req, return fail */
if (status != FCPIO_SUCCESS) {
- spin_lock_irqsave(io_lock, flags);
+ spin_lock_irqsave(&fnic->wq_copy_lock[hwq], flags);
FNIC_SCSI_DBG(KERN_DEBUG,
fnic->lport->host,
"Device reset completed - failed\n");
@@ -2362,7 +2490,7 @@ int fnic_device_reset(struct scsi_cmnd *sc)
* succeeds
*/
if (fnic_clean_pending_aborts(fnic, sc, new_sc)) {
- spin_lock_irqsave(io_lock, flags);
+ spin_lock_irqsave(&fnic->wq_copy_lock[hwq], flags);
io_req = fnic_priv(sc)->io_req;
FNIC_SCSI_DBG(KERN_DEBUG, fnic->lport->host,
"Device reset failed"
@@ -2371,17 +2499,20 @@ int fnic_device_reset(struct scsi_cmnd *sc)
}

/* Clean lun reset command */
- spin_lock_irqsave(io_lock, flags);
+ spin_lock_irqsave(&fnic->wq_copy_lock[hwq], flags);
io_req = fnic_priv(sc)->io_req;
if (io_req)
/* Completed, and successful */
ret = SUCCESS;

fnic_device_reset_clean:
- if (io_req)
+ if (io_req) {
fnic_priv(sc)->io_req = NULL;
+ io_req->sc = NULL;
+ fnic->sw_copy_wq[hwq].io_req_table[blk_mq_unique_tag_to_tag(io_req->tag)] = NULL;
+ }

- spin_unlock_irqrestore(io_lock, flags);
+ spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags);

if (io_req) {
start_time = io_req->start_time;
@@ -2619,12 +2750,17 @@ void fnic_exch_mgr_reset(struct fc_lport *lp, u32 sid, u32 did)

static bool fnic_abts_pending_iter(struct scsi_cmnd *sc, void *data)
{
+ struct request *const rq = scsi_cmd_to_rq(sc);
struct fnic_pending_aborts_iter_data *iter_data = data;
struct fnic *fnic = iter_data->fnic;
int cmd_state;
struct fnic_io_req *io_req;
- spinlock_t *io_lock;
unsigned long flags;
+ uint16_t hwq = 0;
+ int tag;
+
+ tag = blk_mq_unique_tag(rq);
+ hwq = blk_mq_unique_tag_to_hwq(tag);

/*
* ignore this lun reset cmd or cmds that do not belong to
@@ -2635,12 +2771,11 @@ static bool fnic_abts_pending_iter(struct scsi_cmnd *sc, void *data)
if (iter_data->lun_dev && sc->device != iter_data->lun_dev)
return true;

- io_lock = fnic_io_lock_hash(fnic, sc);
- spin_lock_irqsave(io_lock, flags);
+ spin_lock_irqsave(&fnic->wq_copy_lock[hwq], flags);

io_req = fnic_priv(sc)->io_req;
if (!io_req) {
- spin_unlock_irqrestore(io_lock, flags);
+ spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags);
return true;
}

@@ -2652,7 +2787,7 @@ static bool fnic_abts_pending_iter(struct scsi_cmnd *sc, void *data)
"Found IO in %s on lun\n",
fnic_ioreq_state_to_str(fnic_priv(sc)->state));
cmd_state = fnic_priv(sc)->state;
- spin_unlock_irqrestore(io_lock, flags);
+ spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags);
if (cmd_state == FNIC_IOREQ_ABTS_PENDING)
iter_data->ret = 1;

--
2.31.1
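
For reference, the locking pattern the hunks above apply can be reduced to a
minimal sketch: the hardware queue index is recovered from the command's
block-layer unique tag, and that index selects the per-queue spinlock that now
protects io_req in place of the old hashed io_lock. The function name
fnic_example_abts_pending() below is hypothetical, and the sketch assumes the
driver's internal definitions from fnic.h (struct fnic, fnic_priv(),
FNIC_IOREQ_ABTS_PENDING); it illustrates the pattern and is not code from the
patch.

/* Minimal sketch (assumes drivers/scsi/fnic/fnic.h for struct fnic,
 * fnic_priv() and FNIC_IOREQ_ABTS_PENDING); fnic_example_abts_pending()
 * is a hypothetical helper, not part of the patch.
 */
#include <linux/blk-mq.h>
#include <linux/spinlock.h>
#include <scsi/scsi_cmnd.h>
#include "fnic.h"

static bool fnic_example_abts_pending(struct fnic *fnic, struct scsi_cmnd *sc)
{
	struct request *const rq = scsi_cmd_to_rq(sc);
	uint32_t mqtag = blk_mq_unique_tag(rq);		/* hwq encoded in upper bits */
	uint16_t hwq = blk_mq_unique_tag_to_hwq(mqtag);	/* pick the hardware queue   */
	struct fnic_io_req *io_req;
	unsigned long flags;
	bool pending;

	/* One spinlock per hardware queue replaces the hashed io_lock. */
	spin_lock_irqsave(&fnic->wq_copy_lock[hwq], flags);
	io_req = fnic_priv(sc)->io_req;
	pending = io_req && fnic_priv(sc)->state == FNIC_IOREQ_ABTS_PENDING;
	spin_unlock_irqrestore(&fnic->wq_copy_lock[hwq], flags);

	return pending;
}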

2023-10-25 10:53:31  by kernel test robot

Subject: Re: [PATCH 07/13] scsi: fnic: Modify ISRs to support multiqueue(MQ)

Hi Karan,

kernel test robot noticed the following build warnings:

[auto build test WARNING on mkp-scsi/for-next]
[also build test WARNING on jejb-scsi/for-next next-20231025]
[cannot apply to linus/master v6.6-rc7]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting the patch, we suggest using '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]

url: https://github.com/intel-lab-lkp/linux/commits/Karan-Tilak-Kumar/scsi-fnic-Modify-definitions-to-sync-with-VIC-firmware/20231021-031259
base: https://git.kernel.org/pub/scm/linux/kernel/git/mkp/scsi.git for-next
patch link: https://lore.kernel.org/r/20231020190629.338623-8-kartilak%40cisco.com
patch subject: [PATCH 07/13] scsi: fnic: Modify ISRs to support multiqueue(MQ)
config: x86_64-allyesconfig (https://download.01.org/0day-ci/archive/20231025/[email protected]/config)
compiler: gcc-12 (Debian 12.2.0-14) 12.2.0
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20231025/[email protected]/reproduce)

If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add the following tags
| Reported-by: kernel test robot <[email protected]>
| Closes: https://lore.kernel.org/oe-kbuild-all/[email protected]/

All warnings (new ones prefixed by >>):

>> drivers/scsi/fnic/fnic_isr.c:242:5: warning: no previous prototype for 'fnic_set_intr_mode_msix' [-Wmissing-prototypes]
242 | int fnic_set_intr_mode_msix(struct fnic *fnic)
| ^~~~~~~~~~~~~~~~~~~~~~~


vim +/fnic_set_intr_mode_msix +242 drivers/scsi/fnic/fnic_isr.c

241
> 242 int fnic_set_intr_mode_msix(struct fnic *fnic)
243 {
244 unsigned int n = ARRAY_SIZE(fnic->rq);
245 unsigned int m = ARRAY_SIZE(fnic->wq);
246 unsigned int o = ARRAY_SIZE(fnic->hw_copy_wq);
247 unsigned int min_irqs = n + m + 1 + 1; /*rq, raw wq, wq, err*/
248
249 /*
250 * We need n RQs, m WQs, o Copy WQs, n+m+o CQs, and n+m+o+1 INTRs
251 * (last INTR is used for WQ/RQ errors and notification area)
252 */
253 FNIC_ISR_DBG(KERN_INFO, fnic->lport->host,
254 "fnic<%d>: %s: %d: rq-array size: %d wq-array size: %d copy-wq array size: %d\n",
255 fnic->fnic_num, __func__, __LINE__, n, m, o);
256 FNIC_ISR_DBG(KERN_INFO, fnic->lport->host,
257 "fnic<%d>: %s: %d: rq_count: %d raw_wq_count: %d wq_copy_count: %d cq_count: %d\n",
258 fnic->fnic_num, __func__, __LINE__, fnic->rq_count, fnic->raw_wq_count,
259 fnic->wq_copy_count, fnic->cq_count);
260
261 if (fnic->rq_count <= n && fnic->raw_wq_count <= m &&
262 fnic->wq_copy_count <= o) {
263 int vec_count = 0;
264 int vecs = fnic->rq_count + fnic->raw_wq_count + fnic->wq_copy_count + 1;
265
266 vec_count = pci_alloc_irq_vectors(fnic->pdev, min_irqs, vecs,
267 PCI_IRQ_MSIX | PCI_IRQ_AFFINITY);
268 FNIC_ISR_DBG(KERN_INFO, fnic->lport->host,
269 "fnic<%d>: %s: %d: allocated %d MSI-X vectors\n",
270 fnic->fnic_num, __func__, __LINE__, vec_count);
271
272 if (vec_count > 0) {
273 if (vec_count < vecs) {
274 FNIC_ISR_DBG(KERN_ERR, fnic->lport->host,
275 "fnic<%d>: %s: %d: interrupts number mismatch: vec_count: %d vecs: %d\n",
276 fnic->fnic_num, __func__, __LINE__, vec_count, vecs);
277 if (vec_count < min_irqs) {
278 FNIC_ISR_DBG(KERN_ERR, fnic->lport->host,
279 "fnic<%d>: %s: %d: no interrupts for copy wq\n",
280 fnic->fnic_num, __func__, __LINE__);
281 return 1;
282 }
283 }
284
285 fnic->rq_count = n;
286 fnic->raw_wq_count = m;
287 fnic->cpy_wq_base = fnic->rq_count + fnic->raw_wq_count;
288 fnic->wq_copy_count = vec_count - n - m - 1;
289 fnic->wq_count = fnic->raw_wq_count + fnic->wq_copy_count;
290 if (fnic->cq_count != vec_count - 1) {
291 FNIC_ISR_DBG(KERN_ERR, fnic->lport->host,
292 "fnic<%d>: %s: %d: CQ count: %d does not match MSI-X vector count: %d\n",
293 fnic->fnic_num, __func__, __LINE__, fnic->cq_count, vec_count);
294 fnic->cq_count = vec_count - 1;
295 }
296 fnic->intr_count = vec_count;
297 fnic->err_intr_offset = fnic->rq_count + fnic->wq_count;
298
299 FNIC_ISR_DBG(KERN_INFO, fnic->lport->host,
300 "fnic<%d>: %s: %d: rq_count: %d raw_wq_count: %d cpy_wq_base: %d\n",
301 fnic->fnic_num, __func__, __LINE__, fnic->rq_count,
302 fnic->raw_wq_count, fnic->cpy_wq_base);
303
304 FNIC_ISR_DBG(KERN_INFO, fnic->lport->host,
305 "fnic<%d>: %s: %d: wq_copy_count: %d wq_count: %d cq_count: %d\n",
306 fnic->fnic_num, __func__, __LINE__, fnic->wq_copy_count,
307 fnic->wq_count, fnic->cq_count);
308
309 FNIC_ISR_DBG(KERN_INFO, fnic->lport->host,
310 "fnic<%d>: %s: %d: intr_count: %d err_intr_offset: %u",
311 fnic->fnic_num, __func__, __LINE__, fnic->intr_count,
312 fnic->err_intr_offset);
313
314 vnic_dev_set_intr_mode(fnic->vdev, VNIC_DEV_INTR_MODE_MSIX);
315 FNIC_ISR_DBG(KERN_INFO, fnic->lport->host,
316 "fnic<%d>: %s: %d: fnic using MSI-X\n", fnic->fnic_num,
317 __func__, __LINE__);
318 return 0;
319 }
320 }
321 return 1;
322 } //fnic_set_intr_mode_msix
323

--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki
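
The warning above is the standard -Wmissing-prototypes report:
fnic_set_intr_mode_msix() is defined with external linkage but no prior
declaration is visible in fnic_isr.c. A hedged sketch of the two conventional
fixes follows; which one applies depends on whether other files in the driver
call the function, and the fnic.h location is an assumption rather than
something taken from the patch.

/* Fix option 1 (assumes the function is called from other fnic files,
 * e.g. fnic_main.c): declare it in a shared header such as
 * drivers/scsi/fnic/fnic.h so the definition has a previous prototype.
 */
int fnic_set_intr_mode_msix(struct fnic *fnic);

/* Fix option 2 (assumes it is only used inside fnic_isr.c): give the
 * definition internal linkage instead of adding a declaration.
 *
 *	static int fnic_set_intr_mode_msix(struct fnic *fnic)
 *	{
 *		...
 *	}
 */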

2023-10-25 18:19:59  by Bart Van Assche

Subject: Re: [PATCH 12/13] scsi: fnic: Add support for multiqueue (MQ) in fnic driver

On 10/20/23 12:06, Karan Tilak Kumar wrote:
> @@ -405,22 +388,20 @@ static inline int fnic_queue_wq_copy_desc(struct fnic *fnic,
> * Routine to send a scsi cdb
> * Called with host_lock held and interrupts disabled.
> */

This patch removes the DEF_SCSI_QCMD() invocation so the above comment
is now wrong. Please fix it.

> -static int fnic_queuecommand_lck(struct scsi_cmnd *sc)
> +static int fnic_queuecommand_lck(struct scsi_cmnd *sc, uint32_t mqtag, uint16_t hwq)

Because this patch removes the DEF_SCSI_QCMD() invocation, the name of
the fnic_queuecommand_lck() function is now incorrect. Please remove the
_lck() suffix.

> -DEF_SCSI_QCMD(fnic_queuecommand)
> +int fnic_queuecommand(struct Scsi_Host *shost, struct scsi_cmnd *sc)
> +{
> + struct request *const rq = scsi_cmd_to_rq(sc);
> + int rc;
> + struct fc_lport *lp = shost_priv(sc->device->host);
> + struct fnic *fnic = lport_priv(lp);
> + uint32_t mqtag = 0;
> + int tag = 0;
> + uint16_t hwq = 0;
> +
> + mqtag = blk_mq_unique_tag(rq);
> + hwq = blk_mq_unique_tag_to_hwq(mqtag);
> + tag = blk_mq_unique_tag_to_tag(mqtag);
> +
> + if (tag >= fnic->fnic_max_tag_id) {
> + FNIC_SCSI_DBG(KERN_ERR, fnic->lport->host,
> + "fnic<%d>: %s: Out of range tag: 0x%x\n",
> + fnic->fnic_num, __func__, tag);
> + sc->result = DID_ERROR << 16;
> + scsi_done(sc);
> + return 0;
> + }
> +
> + rc = fnic_queuecommand_lck(sc, mqtag, hwq);
> + return rc;
> +}

The code guarded by "if (tag >= fnic->fnic_max_tag_id)" should be
removed and instead the driver should ensure that tags never exceed the
fnic->fnic_max_tag_id limit.

Please also remove the local variable 'rc'.

Thanks,

Bart.
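
A minimal sketch of fnic_queuecommand() with both comments applied is shown
below: the out-of-range tag guard and the local 'rc' are gone, and the former
fnic_queuecommand_lck() is assumed to have been renamed
(fnic_queuecommand_core() is a name invented here for illustration). This is
one reading of the review suggestion, not the driver's final code.

/* Hypothetical rename of the former fnic_queuecommand_lck(). */
static int fnic_queuecommand_core(struct scsi_cmnd *sc, uint32_t mqtag,
				  uint16_t hwq);

int fnic_queuecommand(struct Scsi_Host *shost, struct scsi_cmnd *sc)
{
	struct request *const rq = scsi_cmd_to_rq(sc);
	uint32_t mqtag = blk_mq_unique_tag(rq);
	uint16_t hwq = blk_mq_unique_tag_to_hwq(mqtag);

	/*
	 * The tag space is assumed to be sized so that the tag can never
	 * exceed fnic_max_tag_id; no range check is performed here.
	 */
	return fnic_queuecommand_core(sc, mqtag, hwq);
}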

Subject: RE: [PATCH 12/13] scsi: fnic: Add support for multiqueue (MQ) in fnic driver

Hi Bart,

Thanks for your review comments. I'll fix the issues.

Regards,
Karan
