From: Yang Shen
Subject: [PATCH v5 05/10] crypto: hisilicon/qm - fix event queue depth to 2048
Date: Sat, 15 Aug 2020 17:56:12 +0800
Message-ID: <1597485377-2678-6-git-send-email-shenyang39@huawei.com>
In-Reply-To: <1597485377-2678-1-git-send-email-shenyang39@huawei.com>
References: <1597485377-2678-1-git-send-email-shenyang39@huawei.com>
X-Mailing-List: linux-crypto@vger.kernel.org

From: Shukun Tan

Increase the depth of the 'event queue' from 1024 to 2048, which is twice the depth of the 'completion queue'. This fixes the 'event queue overflow' that occurred easily when the event queue depth was 1024.
Fixes: 263c9959c937 ("crypto: hisilicon - add queue management driver...")
Signed-off-by: Shukun Tan
Signed-off-by: Yang Shen
Reviewed-by: Zhou Wang
---
 drivers/crypto/hisilicon/qm.c | 19 +++++++++++++------
 1 file changed, 13 insertions(+), 6 deletions(-)

diff --git a/drivers/crypto/hisilicon/qm.c b/drivers/crypto/hisilicon/qm.c
index b9bff96..791a469 100644
--- a/drivers/crypto/hisilicon/qm.c
+++ b/drivers/crypto/hisilicon/qm.c
@@ -181,6 +181,7 @@
 #define QM_PCI_COMMAND_INVALID		~0
 
 #define QM_SQE_ADDR_MASK		GENMASK(7, 0)
+#define QM_EQ_DEPTH			(1024 * 2)
 
 #define QM_MK_CQC_DW3_V1(hop_num, pg_sz, buf_sz, cqe_sz) \
 	(((hop_num) << QM_CQ_HOP_NUM_SHIFT) | \
@@ -652,7 +653,7 @@ static void qm_work_process(struct work_struct *work)
 		qp = qm_to_hisi_qp(qm, eqe);
 		qm_poll_qp(qp, qm);
 
-		if (qm->status.eq_head == QM_Q_DEPTH - 1) {
+		if (qm->status.eq_head == QM_EQ_DEPTH - 1) {
 			qm->status.eqc_phase = !qm->status.eqc_phase;
 			eqe = qm->eqe;
 			qm->status.eq_head = 0;
@@ -661,7 +662,7 @@ static void qm_work_process(struct work_struct *work)
 			qm->status.eq_head++;
 		}
 
-		if (eqe_num == QM_Q_DEPTH / 2 - 1) {
+		if (eqe_num == QM_EQ_DEPTH / 2 - 1) {
 			eqe_num = 0;
 			qm_db(qm, 0, QM_DOORBELL_CMD_EQ, qm->status.eq_head, 0);
 		}
@@ -1371,7 +1372,13 @@ static int qm_eq_aeq_dump(struct hisi_qm *qm, const char *s,
 		return -EINVAL;
 
 	ret = kstrtou32(s, 0, &xeqe_id);
-	if (ret || xeqe_id >= QM_Q_DEPTH) {
+	if (ret)
+		return -EINVAL;
+
+	if (!strcmp(name, "EQE") && xeqe_id >= QM_EQ_DEPTH) {
+		dev_err(dev, "Please input eqe num (0-%d)", QM_EQ_DEPTH - 1);
+		return -EINVAL;
+	} else if (!strcmp(name, "AEQE") && xeqe_id >= QM_Q_DEPTH) {
 		dev_err(dev, "Please input aeqe num (0-%d)", QM_Q_DEPTH - 1);
 		return -EINVAL;
 	}
@@ -2285,7 +2292,7 @@ static int hisi_qm_memory_init(struct hisi_qm *qm)
 	} while (0)
 
 	idr_init(&qm->qp_idr);
-	qm->qdma.size = QMC_ALIGN(sizeof(struct qm_eqe) * QM_Q_DEPTH) +
+	qm->qdma.size = QMC_ALIGN(sizeof(struct qm_eqe) * QM_EQ_DEPTH) +
 			QMC_ALIGN(sizeof(struct qm_aeqe) * QM_Q_DEPTH) +
 			QMC_ALIGN(sizeof(struct qm_sqc) * qm->qp_num) +
 			QMC_ALIGN(sizeof(struct qm_cqc) * qm->qp_num);
@@ -2295,7 +2302,7 @@ static int hisi_qm_memory_init(struct hisi_qm *qm)
 	if (!qm->qdma.va)
 		return -ENOMEM;
 
-	QM_INIT_BUF(qm, eqe, QM_Q_DEPTH);
+	QM_INIT_BUF(qm, eqe, QM_EQ_DEPTH);
 	QM_INIT_BUF(qm, aeqe, QM_Q_DEPTH);
 	QM_INIT_BUF(qm, sqc, qm->qp_num);
 	QM_INIT_BUF(qm, cqc, qm->qp_num);
@@ -2465,7 +2472,7 @@ static int qm_eq_ctx_cfg(struct hisi_qm *qm)
 	eqc->base_h = cpu_to_le32(upper_32_bits(qm->eqe_dma));
 	if (qm->ver == QM_HW_V1)
 		eqc->dw3 = cpu_to_le32(QM_EQE_AEQE_SIZE);
-	eqc->dw6 = cpu_to_le32((QM_Q_DEPTH - 1) | (1 << QM_EQC_PHASE_SHIFT));
+	eqc->dw6 = cpu_to_le32((QM_EQ_DEPTH - 1) | (1 << QM_EQC_PHASE_SHIFT));
 	ret = qm_mb(qm, QM_MB_CMD_EQC, eqc_dma, 0, 0);
 	dma_unmap_single(dev, eqc_dma, sizeof(struct qm_eqc), DMA_TO_DEVICE);
 	kfree(eqc);
-- 
2.7.4