From: Tyrel Datwyler
To: james.bottomley@hansenpartnership.com
Cc: martin.petersen@oracle.com, linux-scsi@vger.kernel.org,
    linuxppc-dev@lists.ozlabs.org, linux-kernel@vger.kernel.org,
    brking@linux.ibm.com, Tyrel Datwyler, Brian King
Subject: [PATCH 1/5] ibmvfc: define generic queue structure for CRQs
Date: Fri, 18 Dec 2020 17:19:12 -0600
Message-Id: <20201218231916.279833-2-tyreld@linux.ibm.com>
X-Mailer: git-send-email 2.27.0
In-Reply-To: <20201218231916.279833-1-tyreld@linux.ibm.com>
References: <20201218231916.279833-1-tyreld@linux.ibm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Mailing-List: linux-kernel@vger.kernel.org

The primary and async CRQs are nearly identical outside of the format
and length of each message entry in the DMA-mapped page that backs the
queue data. These queues can therefore be represented by a single
generic queue structure that uses a union to select between the message
formats of the mapped page. A follow-up patch set that introduces
Sub-CRQs will leverage this structure further.
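For a concrete picture of the intended usage, here is a minimal sketch
(illustrative only: error paths are elided and "vhost" stands in for the
usual ibmvfc host pointer; the real call sites are in the diff below):

	struct ibmvfc_queue q;
	struct ibmvfc_crq *crq;
	int rc;

	/* Allocate one zeroed page, DMA-map it, and size the queue by the
	 * requested entry format: q.size = PAGE_SIZE / sizeof(*q.msgs.crq).
	 */
	rc = ibmvfc_alloc_queue(vhost, &q, IBMVFC_CRQ_FMT);
	if (rc)
		return rc;

	/* Readers use the format-specific view of the union... */
	crq = &q.msgs.crq[q.cur];

	/* ...while teardown goes through the format-agnostic handle. */
	ibmvfc_free_queue(vhost, &q);

The same pattern serves the async queue by passing IBMVFC_ASYNC_FMT and
reading q.msgs.async instead.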
Signed-off-by: Tyrel Datwyler
Reviewed-by: Brian King
---
 drivers/scsi/ibmvscsi/ibmvfc.c | 135 +++++++++++++++++++++------------
 drivers/scsi/ibmvscsi/ibmvfc.h |  34 +++++----
 2 files changed, 107 insertions(+), 62 deletions(-)

diff --git a/drivers/scsi/ibmvscsi/ibmvfc.c b/drivers/scsi/ibmvscsi/ibmvfc.c
index 42e4d35e0d35..c8e7c4701ac4 100644
--- a/drivers/scsi/ibmvscsi/ibmvfc.c
+++ b/drivers/scsi/ibmvscsi/ibmvfc.c
@@ -660,7 +660,7 @@ static void ibmvfc_init_host(struct ibmvfc_host *vhost)
 	}
 
 	if (!ibmvfc_set_host_state(vhost, IBMVFC_INITIALIZING)) {
-		memset(vhost->async_crq.msgs, 0, PAGE_SIZE);
+		memset(vhost->async_crq.msgs.async, 0, PAGE_SIZE);
 		vhost->async_crq.cur = 0;
 
 		list_for_each_entry(tgt, &vhost->targets, queue)
@@ -713,6 +713,23 @@ static int ibmvfc_send_crq_init_complete(struct ibmvfc_host *vhost)
 	return ibmvfc_send_crq(vhost, 0xC002000000000000LL, 0);
 }
 
+/**
+ * ibmvfc_free_queue - Deallocate queue
+ * @vhost:	ibmvfc host struct
+ * @queue:	ibmvfc queue struct
+ *
+ * Unmaps dma and deallocates page for messages
+ **/
+static void ibmvfc_free_queue(struct ibmvfc_host *vhost,
+			      struct ibmvfc_queue *queue)
+{
+	struct device *dev = vhost->dev;
+
+	dma_unmap_single(dev, queue->msg_token, PAGE_SIZE, DMA_BIDIRECTIONAL);
+	free_page((unsigned long)queue->msgs.handle);
+	queue->msgs.handle = NULL;
+}
+
 /**
  * ibmvfc_release_crq_queue - Deallocates data and unregisters CRQ
  * @vhost:	ibmvfc host struct
@@ -724,7 +741,7 @@ static void ibmvfc_release_crq_queue(struct ibmvfc_host *vhost)
 {
 	long rc = 0;
 	struct vio_dev *vdev = to_vio_dev(vhost->dev);
-	struct ibmvfc_crq_queue *crq = &vhost->crq;
+	struct ibmvfc_queue *crq = &vhost->crq;
 
 	ibmvfc_dbg(vhost, "Releasing CRQ\n");
 	free_irq(vdev->irq, vhost);
@@ -737,8 +754,8 @@ static void ibmvfc_release_crq_queue(struct ibmvfc_host *vhost)
 
 	vhost->state = IBMVFC_NO_CRQ;
 	vhost->logged_in = 0;
-	dma_unmap_single(vhost->dev, crq->msg_token, PAGE_SIZE, DMA_BIDIRECTIONAL);
-	free_page((unsigned long)crq->msgs);
+
+	ibmvfc_free_queue(vhost, crq);
 }
 
 /**
@@ -778,7 +795,7 @@ static int ibmvfc_reset_crq(struct ibmvfc_host *vhost)
 	int rc = 0;
 	unsigned long flags;
 	struct vio_dev *vdev = to_vio_dev(vhost->dev);
-	struct ibmvfc_crq_queue *crq = &vhost->crq;
+	struct ibmvfc_queue *crq = &vhost->crq;
 
 	/* Close the CRQ */
 	do {
@@ -792,7 +809,7 @@ static int ibmvfc_reset_crq(struct ibmvfc_host *vhost)
 	vhost->logged_in = 0;
 
 	/* Clean out the queue */
-	memset(crq->msgs, 0, PAGE_SIZE);
+	memset(crq->msgs.crq, 0, PAGE_SIZE);
 	crq->cur = 0;
 
 	/* And re-open it again */
@@ -1238,6 +1255,7 @@ static void ibmvfc_gather_partition_info(struct ibmvfc_host *vhost)
 static void ibmvfc_set_login_info(struct ibmvfc_host *vhost)
 {
 	struct ibmvfc_npiv_login *login_info = &vhost->login_info;
+	struct ibmvfc_queue *async_crq = &vhost->async_crq;
 	struct device_node *of_node = vhost->dev->of_node;
 	const char *location;
 
@@ -1257,7 +1275,8 @@ static void ibmvfc_set_login_info(struct ibmvfc_host *vhost)
 	login_info->max_cmds = cpu_to_be32(max_requests + IBMVFC_NUM_INTERNAL_REQ);
 	login_info->capabilities = cpu_to_be64(IBMVFC_CAN_MIGRATE | IBMVFC_CAN_SEND_VF_WWPN);
 	login_info->async.va = cpu_to_be64(vhost->async_crq.msg_token);
-	login_info->async.len = cpu_to_be32(vhost->async_crq.size * sizeof(*vhost->async_crq.msgs));
+	login_info->async.len = cpu_to_be32(async_crq->size *
+					    sizeof(*async_crq->msgs.async));
 	strncpy(login_info->partition_name, vhost->partition_name, IBMVFC_MAX_NAME);
 	strncpy(login_info->device_name, dev_name(&vhost->host->shost_gendev), IBMVFC_MAX_NAME);
@@ -3230,10 +3249,10 @@ static struct scsi_host_template driver_template = {
  **/
 static struct ibmvfc_async_crq *ibmvfc_next_async_crq(struct ibmvfc_host *vhost)
 {
-	struct ibmvfc_async_crq_queue *async_crq = &vhost->async_crq;
+	struct ibmvfc_queue *async_crq = &vhost->async_crq;
 	struct ibmvfc_async_crq *crq;
 
-	crq = &async_crq->msgs[async_crq->cur];
+	crq = &async_crq->msgs.async[async_crq->cur];
 	if (crq->valid & 0x80) {
 		if (++async_crq->cur == async_crq->size)
 			async_crq->cur = 0;
@@ -3253,10 +3272,10 @@ static struct ibmvfc_async_crq *ibmvfc_next_async_crq(struct ibmvfc_host *vhost)
  **/
 static struct ibmvfc_crq *ibmvfc_next_crq(struct ibmvfc_host *vhost)
 {
-	struct ibmvfc_crq_queue *queue = &vhost->crq;
+	struct ibmvfc_queue *queue = &vhost->crq;
 	struct ibmvfc_crq *crq;
 
-	crq = &queue->msgs[queue->cur];
+	crq = &queue->msgs.crq[queue->cur];
 	if (crq->valid & 0x80) {
 		if (++queue->cur == queue->size)
 			queue->cur = 0;
@@ -4895,6 +4914,54 @@ static int ibmvfc_work(void *data)
 	return 0;
 }
 
+/**
+ * ibmvfc_alloc_queue - Allocate queue
+ * @vhost:	ibmvfc host struct
+ * @queue:	ibmvfc queue to allocate
+ * @fmt:	queue format to allocate
+ *
+ * Returns:
+ *	0 on success / non-zero on failure
+ **/
+static int ibmvfc_alloc_queue(struct ibmvfc_host *vhost,
+			      struct ibmvfc_queue *queue,
+			      enum ibmvfc_msg_fmt fmt)
+{
+	struct device *dev = vhost->dev;
+	size_t fmt_size;
+
+	ENTER;
+	switch (fmt) {
+	case IBMVFC_CRQ_FMT:
+		fmt_size = sizeof(*queue->msgs.crq);
+		break;
+	case IBMVFC_ASYNC_FMT:
+		fmt_size = sizeof(*queue->msgs.async);
+		break;
+	default:
+		dev_warn(dev, "Unknown command/response queue message format: %d\n", fmt);
+		return -EINVAL;
+	}
+
+	queue->msgs.handle = (void *)get_zeroed_page(GFP_KERNEL);
+	if (!queue->msgs.handle)
+		return -ENOMEM;
+
+	queue->msg_token = dma_map_single(dev, queue->msgs.handle, PAGE_SIZE,
+					  DMA_BIDIRECTIONAL);
+
+	if (dma_mapping_error(dev, queue->msg_token)) {
+		free_page((unsigned long)queue->msgs.handle);
+		queue->msgs.handle = NULL;
+		return -ENOMEM;
+	}
+
+	queue->cur = 0;
+	queue->fmt = fmt;
+	queue->size = PAGE_SIZE / fmt_size;
+	return 0;
+}
+
 /**
  * ibmvfc_init_crq - Initializes and registers CRQ with hypervisor
  * @vhost:	ibmvfc host struct
@@ -4910,21 +4977,12 @@ static int ibmvfc_init_crq(struct ibmvfc_host *vhost)
 	int rc, retrc = -ENOMEM;
 	struct device *dev = vhost->dev;
 	struct vio_dev *vdev = to_vio_dev(dev);
-	struct ibmvfc_crq_queue *crq = &vhost->crq;
+	struct ibmvfc_queue *crq = &vhost->crq;
 
 	ENTER;
-	crq->msgs = (struct ibmvfc_crq *)get_zeroed_page(GFP_KERNEL);
-
-	if (!crq->msgs)
+	if (ibmvfc_alloc_queue(vhost, crq, IBMVFC_CRQ_FMT))
 		return -ENOMEM;
 
-	crq->size = PAGE_SIZE / sizeof(*crq->msgs);
-	crq->msg_token = dma_map_single(dev, crq->msgs,
-					PAGE_SIZE, DMA_BIDIRECTIONAL);
-
-	if (dma_mapping_error(dev, crq->msg_token))
-		goto map_failed;
-
 	retrc = rc = plpar_hcall_norets(H_REG_CRQ, vdev->unit_address,
 					crq->msg_token, PAGE_SIZE);
@@ -4953,7 +5011,6 @@ static int ibmvfc_init_crq(struct ibmvfc_host *vhost)
 		goto req_irq_failed;
 	}
 
-	crq->cur = 0;
 	LEAVE;
 	return retrc;
 
@@ -4963,9 +5020,7 @@ static int ibmvfc_init_crq(struct ibmvfc_host *vhost)
 		rc = plpar_hcall_norets(H_FREE_CRQ, vdev->unit_address);
 	} while (rc == H_BUSY || H_IS_LONG_BUSY(rc));
 reg_crq_failed:
-	dma_unmap_single(dev, crq->msg_token, PAGE_SIZE, DMA_BIDIRECTIONAL);
-map_failed:
-	free_page((unsigned long)crq->msgs);
+	ibmvfc_free_queue(vhost, crq);
 	return retrc;
 }
 
@@ -4978,7 +5033,7 @@ static int ibmvfc_init_crq(struct ibmvfc_host *vhost)
  **/
 static void ibmvfc_free_mem(struct ibmvfc_host *vhost)
 {
-	struct ibmvfc_async_crq_queue *async_q = &vhost->async_crq;
+	struct ibmvfc_queue *async_q = &vhost->async_crq;
 
 	ENTER;
 	mempool_destroy(vhost->tgt_pool);
@@ -4988,9 +5043,7 @@ static void ibmvfc_free_mem(struct ibmvfc_host *vhost)
 	dma_free_coherent(vhost->dev, sizeof(*vhost->login_buf),
 			  vhost->login_buf, vhost->login_buf_dma);
 	dma_pool_destroy(vhost->sg_pool);
-	dma_unmap_single(vhost->dev, async_q->msg_token,
-			 async_q->size * sizeof(*async_q->msgs), DMA_BIDIRECTIONAL);
-	free_page((unsigned long)async_q->msgs);
+	ibmvfc_free_queue(vhost, async_q);
 	LEAVE;
 }
 
@@ -5003,26 +5056,15 @@ static void ibmvfc_free_mem(struct ibmvfc_host *vhost)
  **/
 static int ibmvfc_alloc_mem(struct ibmvfc_host *vhost)
 {
-	struct ibmvfc_async_crq_queue *async_q = &vhost->async_crq;
+	struct ibmvfc_queue *async_q = &vhost->async_crq;
 	struct device *dev = vhost->dev;
 
 	ENTER;
-	async_q->msgs = (struct ibmvfc_async_crq *)get_zeroed_page(GFP_KERNEL);
-	if (!async_q->msgs) {
-		dev_err(dev, "Couldn't allocate async queue.\n");
+	if (ibmvfc_alloc_queue(vhost, async_q, IBMVFC_ASYNC_FMT)) {
+		dev_err(dev, "Couldn't allocate/map async queue.\n");
 		goto nomem;
 	}
-	async_q->size = PAGE_SIZE / sizeof(struct ibmvfc_async_crq);
-	async_q->msg_token = dma_map_single(dev, async_q->msgs,
-					    async_q->size * sizeof(*async_q->msgs),
-					    DMA_BIDIRECTIONAL);
-
-	if (dma_mapping_error(dev, async_q->msg_token)) {
-		dev_err(dev, "Failed to map async queue\n");
-		goto free_async_crq;
-	}
-
 	vhost->sg_pool = dma_pool_create(IBMVFC_NAME, dev,
 					 SG_ALL * sizeof(struct srp_direct_buf),
 					 sizeof(struct srp_direct_buf), 0);
@@ -5077,10 +5119,7 @@ static int ibmvfc_alloc_mem(struct ibmvfc_host *vhost)
 free_sg_pool:
 	dma_pool_destroy(vhost->sg_pool);
 unmap_async_crq:
-	dma_unmap_single(dev, async_q->msg_token,
-			 async_q->size * sizeof(*async_q->msgs), DMA_BIDIRECTIONAL);
-free_async_crq:
-	free_page((unsigned long)async_q->msgs);
+	ibmvfc_free_queue(vhost, async_q);
 nomem:
 	LEAVE;
 	return -ENOMEM;
diff --git a/drivers/scsi/ibmvscsi/ibmvfc.h b/drivers/scsi/ibmvscsi/ibmvfc.h
index 9d58cfd774d3..5bf1621223d6 100644
--- a/drivers/scsi/ibmvscsi/ibmvfc.h
+++ b/drivers/scsi/ibmvscsi/ibmvfc.h
@@ -645,12 +645,6 @@ struct ibmvfc_crq {
 	volatile __be64 ioba;
 } __packed __aligned(8);
 
-struct ibmvfc_crq_queue {
-	struct ibmvfc_crq *msgs;
-	int size, cur;
-	dma_addr_t msg_token;
-};
-
 enum ibmvfc_ae_link_state {
 	IBMVFC_AE_LS_LINK_UP		= 0x01,
 	IBMVFC_AE_LS_LINK_BOUNCED	= 0x02,
@@ -678,12 +672,6 @@ struct ibmvfc_async_crq {
 	__be64 reserved;
 } __packed __aligned(8);
 
-struct ibmvfc_async_crq_queue {
-	struct ibmvfc_async_crq *msgs;
-	int size, cur;
-	dma_addr_t msg_token;
-};
-
 union ibmvfc_iu {
 	struct ibmvfc_mad_common mad_common;
 	struct ibmvfc_npiv_login_mad npiv_login;
@@ -763,6 +751,24 @@ struct ibmvfc_event_pool {
 	dma_addr_t iu_token;
 };
 
+enum ibmvfc_msg_fmt {
+	IBMVFC_CRQ_FMT = 0,
+	IBMVFC_ASYNC_FMT,
+};
+
+union ibmvfc_msgs {
+	void *handle;
+	struct ibmvfc_crq *crq;
+	struct ibmvfc_async_crq *async;
+};
+
+struct ibmvfc_queue {
+	union ibmvfc_msgs msgs;
+	dma_addr_t msg_token;
+	enum ibmvfc_msg_fmt fmt;
+	int size, cur;
+};
+
 enum ibmvfc_host_action {
 	IBMVFC_HOST_ACTION_NONE = 0,
 	IBMVFC_HOST_ACTION_RESET,
@@ -808,8 +814,8 @@ struct ibmvfc_host {
 	struct ibmvfc_event_pool pool;
 	struct dma_pool *sg_pool;
 	mempool_t *tgt_pool;
-	struct ibmvfc_crq_queue crq;
-	struct ibmvfc_async_crq_queue async_crq;
+	struct ibmvfc_queue crq;
+	struct ibmvfc_queue async_crq;
 	struct ibmvfc_npiv_login login_info;
 	union ibmvfc_npiv_login_data *login_buf;
 	dma_addr_t login_buf_dma;
-- 
2.27.0