2020-11-26 01:53:32

by Tyrel Datwyler

Subject: [PATCH 00/13] ibmvfc: initial MQ development

Recent updates in pHyp Firmware and VIOS releases provide new infrastructure
towards enabling Subordinate Command Response Queues (Sub-CRQs) such that each
Sub-CRQ is a channel backed by an actual hardware queue in the FC stack on the
partner VIOS. Sub-CRQs are registered with the firmware via hypercalls and then
negotiated with the VIOS via new Management Datagrams (MADs) for channel setup.
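
For reference, the registration step is essentially a thin wrapper around the
pseries plpar_hcall() interface. A minimal sketch, assuming the H_REG_SUB_CRQ
opcode from asm/hvcall.h (the wrapper name and return-value handling here are
illustrative, not the posted code):

#include <asm/hvcall.h>

/* Sketch: register one Sub-CRQ page with firmware. On success, firmware
 * hands back a cookie identifying the queue and the IRQ assigned to it. */
static long h_reg_sub_crq(unsigned long unit_address, unsigned long ioba,
			  unsigned long msg_len, unsigned long *cookie,
			  unsigned long *irq)
{
	unsigned long retbuf[PLPAR_HCALL_BUFSIZE];
	long rc;

	rc = plpar_hcall(H_REG_SUB_CRQ, retbuf, unit_address, ioba, msg_len);

	*cookie = retbuf[0];	/* firmware handle for this Sub-CRQ */
	*irq = retbuf[1];	/* interrupt source for the channel */

	return rc;
}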

This initial implementation adds the necessary Sub-CRQ framework and implements
the new MADs for negotiating and assigning a set of Sub-CRQs to associated VIOS
HW-backed channels. The event pool and locking still leverage the legacy single-queue
implementation, and as such lock contention becomes problematic as the number of
queues increases. However, despite that contention, this initial work demonstrates
a 1.2x increase in IOPS when configured with two HW queues.

Tyrel Datwyler (13):
ibmvfc: add vhost fields and defaults for MQ enablement
ibmvfc: define hcall wrapper for registering a Sub-CRQ
ibmvfc: add Subordinate CRQ definitions
ibmvfc: add alloc/dealloc routines for SCSI Sub-CRQ Channels
ibmvfc: add Sub-CRQ IRQ enable/disable routine
ibmvfc: add handlers to drain and complete Sub-CRQ responses
ibmvfc: define Sub-CRQ interrupt handler routine
ibmvfc: map/request irq and register Sub-CRQ interrupt handler
ibmvfc: implement channel enquiry and setup commands
ibmvfc: advertise client support for using hardware channels
ibmvfc: set and track hw queue in ibmvfc_event struct
ibmvfc: send commands down HW Sub-CRQ when channelized
ibmvfc: register Sub-CRQ handles with VIOS during channel setup

drivers/scsi/ibmvscsi/ibmvfc.c | 460 ++++++++++++++++++++++++++++++++-
drivers/scsi/ibmvscsi/ibmvfc.h | 37 +++
2 files changed, 493 insertions(+), 4 deletions(-)

--
2.27.0


2020-11-26 01:55:21

by Tyrel Datwyler

Subject: [PATCH 10/13] ibmvfc: advertise client support for using hardware channels

Previous patches have plumbed the necessary Sub-CRQ interface and
channel negotiation MADs to fully support channelized hardware queues.

Advertise client support via the NPIV Login capability bit
IBMVFC_CAN_USE_CHANNELS when the client has MQ enabled via
vhost->mq_enabled, or when channels were already in use during a
subsequent NPIV Login. The latter is required because channel support is
only renegotiated after a CRQ pair is broken. Simple NPIV Logouts/Logins
require the client to continue to advertise the channel capability until
the CRQ pair between the client and the VIOS is broken.

Signed-off-by: Tyrel Datwyler <[email protected]>
---
drivers/scsi/ibmvscsi/ibmvfc.c | 4 ++++
1 file changed, 4 insertions(+)

diff --git a/drivers/scsi/ibmvscsi/ibmvfc.c b/drivers/scsi/ibmvscsi/ibmvfc.c
index 40a945712bdb..55893d09f883 100644
--- a/drivers/scsi/ibmvscsi/ibmvfc.c
+++ b/drivers/scsi/ibmvscsi/ibmvfc.c
@@ -1272,6 +1272,10 @@ static void ibmvfc_set_login_info(struct ibmvfc_host *vhost)

login_info->max_cmds = cpu_to_be32(max_requests + IBMVFC_NUM_INTERNAL_REQ);
login_info->capabilities = cpu_to_be64(IBMVFC_CAN_MIGRATE | IBMVFC_CAN_SEND_VF_WWPN);
+
+ if (vhost->mq_enabled || vhost->using_channels)
+ login_info->capabilities |= cpu_to_be64(IBMVFC_CAN_USE_CHANNELS);
+
login_info->async.va = cpu_to_be64(vhost->async_crq.msg_token);
login_info->async.len = cpu_to_be32(vhost->async_crq.size * sizeof(*vhost->async_crq.msgs));
strncpy(login_info->partition_name, vhost->partition_name, IBMVFC_MAX_NAME);
--
2.27.0

2020-11-27 17:53:06

by Brian King

Subject: Re: [PATCH 10/13] ibmvfc: advertise client support for using hardware channels

Reviewed-by: Brian King <[email protected]>


--
Brian King
Power Linux I/O
IBM Linux Technology Center

2020-12-02 12:08:16

by Hannes Reinecke

Subject: Re: [PATCH 00/13] ibmvfc: initial MQ development

On 11/26/20 2:48 AM, Tyrel Datwyler wrote:
> Recent updates in pHyp Firmware and VIOS releases provide new infrastructure
> towards enabling Subordinate Command Response Queues (Sub-CRQs) such that each
> Sub-CRQ is a channel backed by an actual hardware queue in the FC stack on the
> partner VIOS. Sub-CRQs are registered with the firmware via hypercalls and then
> negotiated with the VIOS via new Management Datagrams (MADs) for channel setup.
>
> This initial implementation adds the necessary Sub-CRQ framework and implements
> the new MADs for negotiating and assigning a set of Sub-CRQs to associated VIOS
> HW-backed channels. The event pool and locking still leverage the legacy single-queue
> implementation, and as such lock contention becomes problematic as the number of
> queues increases. However, despite that contention, this initial work demonstrates
> a 1.2x increase in IOPS when configured with two HW queues.
>
Why do you still hold the host lock during submission?
An initial check on the submission code path didn't reveal anything
obvious, so it _should_ be possible to drop the host lock there.
Or at least move it into the submission function itself to avoid lock
contention. Hmm?

Cheers,

Hannes
--
Dr. Hannes Reinecke Kernel Storage Architect
[email protected] +49 911 74053 688
SUSE Software Solutions GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), Geschäftsführer: Felix Imendörffer

2020-12-02 17:24:29

by Tyrel Datwyler

Subject: Re: [PATCH 00/13] ibmvfc: initial MQ development

On 12/2/20 4:03 AM, Hannes Reinecke wrote:
> On 11/26/20 2:48 AM, Tyrel Datwyler wrote:
>> Recent updates in pHyp Firmware and VIOS releases provide new infrastructure
>> towards enabling Subordinate Command Response Queues (Sub-CRQs) such that each
>> Sub-CRQ is a channel backed by an actual hardware queue in the FC stack on the
>> partner VIOS. Sub-CRQs are registered with the firmware via hypercalls and then
>> negotiated with the VIOS via new Management Datagrams (MADs) for channel setup.
>>
>> This initial implementation adds the necessary Sub-CRQ framework and implements
>> the new MADs for negotiating and assigning a set of Sub-CRQs to associated VIOS
>> HW-backed channels. The event pool and locking still leverage the legacy single-queue
>> implementation, and as such lock contention becomes problematic as the number of
>> queues increases. However, despite that contention, this initial work demonstrates
>> a 1.2x increase in IOPS when configured with two HW queues.
>>
> Why do you still hold the host lock during submission?

Proof of concept.

> An initial check on the submission code path didn't reveal anything obvious, so
> it _should_ be possible to drop the host lock there.

It's used to protect the event pool and the event free/sent lists. This could
probably have its own lock instead of the host lock.

> Or at least move it into the submission function itself to avoid lock
> contention. Hmm?

I have a followup patch to do that, but I didn't see any change in performance.
I've got another patch I'm finishing that provides dedicated event pools for
each subqueue such that they will no longer have any dependency on the host lock.
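
Roughly, the direction is to give each queue its own pool and list lock so the
submission path never touches the host lock. A sketch of the shape (names
illustrative, not the final patch; it assumes ibmvfc_event grows a queue_list
list_head member):

/* Per-queue event bookkeeping, protected by a per-queue lock. */
struct ibmvfc_queue {
	struct ibmvfc_event *events;	/* dedicated event pool for this queue */
	struct list_head free;		/* free events, this queue only */
	struct list_head sent;		/* in-flight events, this queue only */
	spinlock_t l_lock;		/* protects free/sent, not host state */
};

static struct ibmvfc_event *ibmvfc_get_event(struct ibmvfc_queue *queue)
{
	struct ibmvfc_event *evt;
	unsigned long flags;

	/* Only the per-queue lock is taken; no host lock contention. */
	spin_lock_irqsave(&queue->l_lock, flags);
	evt = list_first_entry_or_null(&queue->free, struct ibmvfc_event,
				       queue_list);
	if (evt)
		list_move_tail(&evt->queue_list, &queue->sent);
	spin_unlock_irqrestore(&queue->l_lock, flags);

	return evt;
}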

-Tyrel

>
> Cheers,
>
> Hannes