From: Tyrel Datwyler <tyreld@linux.ibm.com>
To: james.bottomley@hansenpartnership.com
Cc: martin.petersen@oracle.com, linux-scsi@vger.kernel.org,
        linuxppc-dev@lists.ozlabs.org, linux-kernel@vger.kernel.org,
        brking@linux.ibm.com, Tyrel Datwyler <tyreld@linux.ibm.com>
Subject: [PATCH v3 04/18] ibmvfc: add alloc/dealloc routines for SCSI Sub-CRQ Channels
Date: Wed, 2 Dec 2020 20:07:52 -0600
Message-Id: <20201203020806.14747-5-tyreld@linux.ibm.com>
X-Mailer: git-send-email 2.27.0
In-Reply-To: <20201203020806.14747-1-tyreld@linux.ibm.com>
References: <20201203020806.14747-1-tyreld@linux.ibm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Mailing-List: linux-kernel@vger.kernel.org

Allocate a set of Sub-CRQs in advance. During channel setup the client
and the VIOS negotiate the number of queues the VIOS supports and the
number the client wants to request. It is possible that the channel
resources finally allocated are fewer than requested, but the client is
still responsible for sending handles for every queue it hopes to use.

Also, provide the corresponding deallocation cleanup routines.
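
Roughly, the setup path is allocate-all-or-roll-back: register the queues
in order, and on the first failure deregister everything registered so far
before bailing out. Below is a minimal sketch of just that control flow,
compilable outside the kernel; register_one(), deregister_one(), and
NR_QUEUES are hypothetical stand-ins for ibmvfc_register_scsi_channel(),
ibmvfc_deregister_scsi_channel(), and IBMVFC_SCSI_HW_QUEUES, not driver
code:

#include <stdio.h>
#include <stdlib.h>

#define NR_QUEUES 8	/* stand-in for IBMVFC_SCSI_HW_QUEUES */

/* Hypothetical stand-in for ibmvfc_register_scsi_channel() */
static int register_one(int index)
{
	/* pretend queue 5 fails, to exercise the rollback path */
	return (index == 5) ? -1 : 0;
}

/* Hypothetical stand-in for ibmvfc_deregister_scsi_channel() */
static void deregister_one(int index)
{
	printf("rolling back queue %d\n", index);
}

static int init_all(void)
{
	int i, j;

	for (i = 0; i < NR_QUEUES; i++) {
		if (register_one(i)) {
			/* undo every queue registered so far, newest first */
			for (j = i; j > 0; j--)
				deregister_one(j - 1);
			return -1;
		}
	}
	return 0;
}

int main(void)
{
	return init_all() ? EXIT_FAILURE : EXIT_SUCCESS;
}

On failure the rollback walks back from the last successfully registered
queue, so no partially set up channels are left behind.
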
Signed-off-by: Tyrel Datwyler <tyreld@linux.ibm.com>
---
 drivers/scsi/ibmvscsi/ibmvfc.c | 129 +++++++++++++++++++++++++++++++++
 drivers/scsi/ibmvscsi/ibmvfc.h |   1 +
 2 files changed, 130 insertions(+)

diff --git a/drivers/scsi/ibmvscsi/ibmvfc.c b/drivers/scsi/ibmvscsi/ibmvfc.c
index 64674054dbae..f879be666c84 100644
--- a/drivers/scsi/ibmvscsi/ibmvfc.c
+++ b/drivers/scsi/ibmvscsi/ibmvfc.c
@@ -793,6 +793,8 @@ static int ibmvfc_reset_crq(struct ibmvfc_host *vhost)
 	unsigned long flags;
 	struct vio_dev *vdev = to_vio_dev(vhost->dev);
 	struct ibmvfc_crq_queue *crq = &vhost->crq;
+	struct ibmvfc_sub_queue *scrq;
+	int i;
 
 	/* Close the CRQ */
 	do {
@@ -809,6 +811,14 @@ static int ibmvfc_reset_crq(struct ibmvfc_host *vhost)
 	memset(crq->msgs, 0, PAGE_SIZE);
 	crq->cur = 0;
 
+	if (vhost->scsi_scrqs.scrqs) {
+		for (i = 0; i < IBMVFC_SCSI_HW_QUEUES; i++) {
+			scrq = &vhost->scsi_scrqs.scrqs[i];
+			memset(scrq->msgs, 0, PAGE_SIZE);
+			scrq->cur = 0;
+		}
+	}
+
 	/* And re-open it again */
 	rc = plpar_hcall_norets(H_REG_CRQ, vdev->unit_address,
 				crq->msg_token, PAGE_SIZE);
@@ -4983,6 +4993,118 @@ static int ibmvfc_init_crq(struct ibmvfc_host *vhost)
 	return retrc;
 }
 
+static int ibmvfc_register_scsi_channel(struct ibmvfc_host *vhost,
+					int index)
+{
+	struct device *dev = vhost->dev;
+	struct vio_dev *vdev = to_vio_dev(dev);
+	struct ibmvfc_sub_queue *scrq = &vhost->scsi_scrqs.scrqs[index];
+	int rc = -ENOMEM;
+
+	ENTER;
+
+	scrq->msgs = (struct ibmvfc_sub_crq *)get_zeroed_page(GFP_KERNEL);
+	if (!scrq->msgs)
+		return rc;
+
+	scrq->size = PAGE_SIZE / sizeof(*scrq->msgs);
+	scrq->msg_token = dma_map_single(dev, scrq->msgs, PAGE_SIZE,
+					 DMA_BIDIRECTIONAL);
+
+	if (dma_mapping_error(dev, scrq->msg_token))
+		goto dma_map_failed;
+
+	rc = h_reg_sub_crq(vdev->unit_address, scrq->msg_token, PAGE_SIZE,
+			   &scrq->cookie, &scrq->hw_irq);
+
+	if (rc) {
+		dev_warn(dev, "Error registering sub-crq: %d\n", rc);
+		if (rc == H_PARAMETER)
+			dev_warn_once(dev, "Firmware may not support MQ\n");
+		goto reg_failed;
+	}
+
+	scrq->hwq_id = index;
+	scrq->vhost = vhost;
+
+	LEAVE;
+	return 0;
+
+reg_failed:
+	dma_unmap_single(dev, scrq->msg_token, PAGE_SIZE, DMA_BIDIRECTIONAL);
+dma_map_failed:
+	free_page((unsigned long)scrq->msgs);
+	LEAVE;
+	return rc;
+}
+
+static void ibmvfc_deregister_scsi_channel(struct ibmvfc_host *vhost, int index)
+{
+	struct device *dev = vhost->dev;
+	struct vio_dev *vdev = to_vio_dev(dev);
+	struct ibmvfc_sub_queue *scrq = &vhost->scsi_scrqs.scrqs[index];
+	long rc;
+
+	ENTER;
+
+	do {
+		rc = plpar_hcall_norets(H_FREE_SUB_CRQ, vdev->unit_address,
+					scrq->cookie);
+	} while (rc == H_BUSY || H_IS_LONG_BUSY(rc));
+
+	if (rc)
+		dev_err(dev, "Failed to free sub-crq[%d]: rc=%ld\n", index, rc);
+
+	dma_unmap_single(dev, scrq->msg_token, PAGE_SIZE, DMA_BIDIRECTIONAL);
+	free_page((unsigned long)scrq->msgs);
+	LEAVE;
+}
+
+static int ibmvfc_init_sub_crqs(struct ibmvfc_host *vhost)
+{
+	int i, j;
+
+	ENTER;
+
+	vhost->scsi_scrqs.scrqs = kcalloc(IBMVFC_SCSI_HW_QUEUES,
+					  sizeof(*vhost->scsi_scrqs.scrqs),
+					  GFP_KERNEL);
+	if (!vhost->scsi_scrqs.scrqs)
+		return -1;
+
+	for (i = 0; i < IBMVFC_SCSI_HW_QUEUES; i++) {
+		if (ibmvfc_register_scsi_channel(vhost, i)) {
+			for (j = i; j > 0; j--)
+				ibmvfc_deregister_scsi_channel(vhost, j - 1);
+			kfree(vhost->scsi_scrqs.scrqs);
+			vhost->scsi_scrqs.scrqs = NULL;
+			vhost->scsi_scrqs.active_queues = 0;
+			LEAVE;
+			return -1;
+		}
+	}
+
+	LEAVE;
+	return 0;
+}
+
+static void ibmvfc_release_sub_crqs(struct ibmvfc_host *vhost)
+{
+	int i;
+
+	ENTER;
+	if (!vhost->scsi_scrqs.scrqs)
+		return;
+
+	for (i = 0; i < IBMVFC_SCSI_HW_QUEUES; i++)
+		ibmvfc_deregister_scsi_channel(vhost, i);
+
+	kfree(vhost->scsi_scrqs.scrqs);
+	vhost->scsi_scrqs.scrqs = NULL;
+	vhost->scsi_scrqs.active_queues = 0;
+	LEAVE;
+}
+
 /**
  * ibmvfc_free_mem - Free memory for vhost
  * @vhost:	ibmvfc host struct
@@ -5239,6 +5361,12 @@ static int ibmvfc_probe(struct vio_dev *vdev, const struct vio_device_id *id)
 		goto remove_shost;
 	}
 
+	if (vhost->mq_enabled) {
+		rc = ibmvfc_init_sub_crqs(vhost);
+		if (rc)
+			dev_warn(dev, "Failed to allocate Sub-CRQs. rc=%d\n", rc);
+	}
+
 	if (shost_to_fc_host(shost)->rqst_q)
 		blk_queue_max_segments(shost_to_fc_host(shost)->rqst_q, 1);
 	dev_set_drvdata(dev, vhost);
@@ -5296,6 +5424,7 @@ static int ibmvfc_remove(struct vio_dev *vdev)
 	ibmvfc_purge_requests(vhost, DID_ERROR);
 	spin_unlock_irqrestore(vhost->host->host_lock, flags);
 	ibmvfc_free_event_pool(vhost);
+	ibmvfc_release_sub_crqs(vhost);
 	ibmvfc_free_mem(vhost);
 
 	spin_lock(&ibmvfc_driver_lock);
diff --git a/drivers/scsi/ibmvscsi/ibmvfc.h b/drivers/scsi/ibmvscsi/ibmvfc.h
index b3cd35cbf067..986ce4530382 100644
--- a/drivers/scsi/ibmvscsi/ibmvfc.h
+++ b/drivers/scsi/ibmvscsi/ibmvfc.h
@@ -838,6 +838,7 @@ struct ibmvfc_host {
 	mempool_t *tgt_pool;
 	struct ibmvfc_crq_queue crq;
 	struct ibmvfc_async_crq_queue async_crq;
+	struct ibmvfc_scsi_channels scsi_scrqs;
 	struct ibmvfc_npiv_login login_info;
 	union ibmvfc_npiv_login_data *login_buf;
 	dma_addr_t login_buf_dma;
-- 
2.27.0