From: Yang Shen
Subject: [PATCH v4 05/10] crypto: hisilicon/qm - fix event queue depth to 2048
Date: Tue, 4 Aug 2020 21:58:25 +0800
Message-ID: <1596549510-2373-6-git-send-email-shenyang39@huawei.com>
In-Reply-To: <1596549510-2373-1-git-send-email-shenyang39@huawei.com>
References: <1596549510-2373-1-git-send-email-shenyang39@huawei.com>
X-Mailing-List: linux-crypto@vger.kernel.org

From: Shukun Tan

Increase the depth of the 'event queue' from 1024 to 2048, which is twice
the depth of the 'completion queue'. This fixes the 'event queue overflow'
that easily occurs when the event queue depth is only 1024.
Fixes: 263c9959c937 ("crypto: hisilicon - add queue management driver...")
Signed-off-by: Shukun Tan
Signed-off-by: Yang Shen
Reviewed-by: Zhou Wang
---
 drivers/crypto/hisilicon/qm.c | 19 +++++++++++++------
 1 file changed, 13 insertions(+), 6 deletions(-)

diff --git a/drivers/crypto/hisilicon/qm.c b/drivers/crypto/hisilicon/qm.c
index 9a5a114..0f2a48a 100644
--- a/drivers/crypto/hisilicon/qm.c
+++ b/drivers/crypto/hisilicon/qm.c
@@ -181,6 +181,7 @@
 #define QM_PCI_COMMAND_INVALID		~0
 
 #define QM_SQE_ADDR_MASK		GENMASK(7, 0)
+#define QM_EQ_DEPTH			(1024 * 2)
 
 #define QM_MK_CQC_DW3_V1(hop_num, pg_sz, buf_sz, cqe_sz) \
 	(((hop_num) << QM_CQ_HOP_NUM_SHIFT)	| \
@@ -652,7 +653,7 @@ static void qm_work_process(struct work_struct *work)
 		qp = qm_to_hisi_qp(qm, eqe);
 		qm_poll_qp(qp, qm);
 
-		if (qm->status.eq_head == QM_Q_DEPTH - 1) {
+		if (qm->status.eq_head == QM_EQ_DEPTH - 1) {
 			qm->status.eqc_phase = !qm->status.eqc_phase;
 			eqe = qm->eqe;
 			qm->status.eq_head = 0;
@@ -661,7 +662,7 @@ static void qm_work_process(struct work_struct *work)
 			qm->status.eq_head++;
 		}
 
-		if (eqe_num == QM_Q_DEPTH / 2 - 1) {
+		if (eqe_num == QM_EQ_DEPTH / 2 - 1) {
 			eqe_num = 0;
 			qm_db(qm, 0, QM_DOORBELL_CMD_EQ, qm->status.eq_head, 0);
 		}
@@ -1371,7 +1372,13 @@ static int qm_eq_aeq_dump(struct hisi_qm *qm, const char *s,
 		return -EINVAL;
 
 	ret = kstrtou32(s, 0, &xeqe_id);
-	if (ret || xeqe_id >= QM_Q_DEPTH) {
+	if (ret)
+		return -EINVAL;
+
+	if (!strcmp(name, "EQE") && xeqe_id >= QM_EQ_DEPTH) {
+		dev_err(dev, "Please input eqe num (0-%d)", QM_EQ_DEPTH - 1);
+		return -EINVAL;
+	} else if (!strcmp(name, "AEQE") && xeqe_id >= QM_Q_DEPTH) {
 		dev_err(dev, "Please input aeqe num (0-%d)", QM_Q_DEPTH - 1);
 		return -EINVAL;
 	}
@@ -2284,7 +2291,7 @@ static int hisi_qm_memory_init(struct hisi_qm *qm)
 	} while (0)
 
 	idr_init(&qm->qp_idr);
-	qm->qdma.size = QMC_ALIGN(sizeof(struct qm_eqe) * QM_Q_DEPTH) +
+	qm->qdma.size = QMC_ALIGN(sizeof(struct qm_eqe) * QM_EQ_DEPTH) +
 			QMC_ALIGN(sizeof(struct qm_aeqe) * QM_Q_DEPTH) +
 			QMC_ALIGN(sizeof(struct qm_sqc) * qm->qp_num) +
 			QMC_ALIGN(sizeof(struct qm_cqc) * qm->qp_num);
@@ -2294,7 +2301,7 @@ static int hisi_qm_memory_init(struct hisi_qm *qm)
 	if (!qm->qdma.va)
 		return -ENOMEM;
 
-	QM_INIT_BUF(qm, eqe, QM_Q_DEPTH);
+	QM_INIT_BUF(qm, eqe, QM_EQ_DEPTH);
 	QM_INIT_BUF(qm, aeqe, QM_Q_DEPTH);
 	QM_INIT_BUF(qm, sqc, qm->qp_num);
 	QM_INIT_BUF(qm, cqc, qm->qp_num);
@@ -2464,7 +2471,7 @@ static int qm_eq_ctx_cfg(struct hisi_qm *qm)
 	eqc->base_h = cpu_to_le32(upper_32_bits(qm->eqe_dma));
 	if (qm->ver == QM_HW_V1)
 		eqc->dw3 = cpu_to_le32(QM_EQE_AEQE_SIZE);
-	eqc->dw6 = cpu_to_le32((QM_Q_DEPTH - 1) | (1 << QM_EQC_PHASE_SHIFT));
+	eqc->dw6 = cpu_to_le32((QM_EQ_DEPTH - 1) | (1 << QM_EQC_PHASE_SHIFT));
 	ret = qm_mb(qm, QM_MB_CMD_EQC, eqc_dma, 0, 0);
 	dma_unmap_single(dev, eqc_dma, sizeof(struct qm_eqc), DMA_TO_DEVICE);
 	kfree(eqc);
-- 
2.7.4