From: Yang Shen
To: ,
Cc: , ,
Subject: [PATCH v2 5/9] crypto: hisilicon/qm - fix event queue depth to 2048
Date: Wed, 1 Jul 2020 15:19:51 +0800
Message-ID: <1593587995-7391-6-git-send-email-shenyang39@huawei.com>
In-Reply-To: <1593587995-7391-1-git-send-email-shenyang39@huawei.com>
References: <1593587995-7391-1-git-send-email-shenyang39@huawei.com>
List-ID: <linux-crypto.vger.kernel.org>

From: Shukun Tan

Increase the depth of the event queue (EQ) from 1024 to 2048, which is
twice the depth of the completion queue. With a depth of only 1024, the
event queue easily overflows; doubling it fixes that.
Fixes: 263c9959c937 ("crypto: hisilicon - add queue management driver...")
Signed-off-by: Shukun Tan
Signed-off-by: Yang Shen
Reviewed-by: Zhou Wang
---
 drivers/crypto/hisilicon/qm.c | 19 +++++++++++++------
 1 file changed, 13 insertions(+), 6 deletions(-)

diff --git a/drivers/crypto/hisilicon/qm.c b/drivers/crypto/hisilicon/qm.c
index 93f443c..aebb5b8 100644
--- a/drivers/crypto/hisilicon/qm.c
+++ b/drivers/crypto/hisilicon/qm.c
@@ -181,6 +181,7 @@
 #define QM_PCI_COMMAND_INVALID		~0
 
 #define QM_SQE_ADDR_MASK		GENMASK(7, 0)
+#define QM_EQ_DEPTH			(1024 * 2)
 
 #define QM_MK_CQC_DW3_V1(hop_num, pg_sz, buf_sz, cqe_sz) \
 	(((hop_num) << QM_CQ_HOP_NUM_SHIFT)	| \
@@ -652,7 +653,7 @@ static void qm_work_process(struct work_struct *work)
 		qp = qm_to_hisi_qp(qm, eqe);
 		qm_poll_qp(qp, qm);
 
-		if (qm->status.eq_head == QM_Q_DEPTH - 1) {
+		if (qm->status.eq_head == QM_EQ_DEPTH - 1) {
 			qm->status.eqc_phase = !qm->status.eqc_phase;
 			eqe = qm->eqe;
 			qm->status.eq_head = 0;
@@ -661,7 +662,7 @@
 			qm->status.eq_head++;
 		}
 
-		if (eqe_num == QM_Q_DEPTH / 2 - 1) {
+		if (eqe_num == QM_EQ_DEPTH / 2 - 1) {
 			eqe_num = 0;
 			qm_db(qm, 0, QM_DOORBELL_CMD_EQ, qm->status.eq_head, 0);
 		}
@@ -1380,7 +1381,13 @@ static int qm_eq_aeq_dump(struct hisi_qm *qm, const char *s,
 		return -EINVAL;
 
 	ret = kstrtou32(s, 0, &xeqe_id);
-	if (ret || xeqe_id >= QM_Q_DEPTH) {
+	if (ret)
+		return -EINVAL;
+
+	if (!strcmp(name, "EQE") && xeqe_id >= QM_EQ_DEPTH) {
+		dev_err(dev, "Please input eqe num (0-%d)", QM_EQ_DEPTH - 1);
+		return -EINVAL;
+	} else if (!strcmp(name, "AEQE") && xeqe_id >= QM_Q_DEPTH) {
 		dev_err(dev, "Please input aeqe num (0-%d)", QM_Q_DEPTH - 1);
 		return -EINVAL;
 	}
@@ -2289,7 +2296,7 @@ static int hisi_qm_memory_init(struct hisi_qm *qm)
 } while (0)
 
 	idr_init(&qm->qp_idr);
-	qm->qdma.size = QMC_ALIGN(sizeof(struct qm_eqe) * QM_Q_DEPTH) +
+	qm->qdma.size = QMC_ALIGN(sizeof(struct qm_eqe) * QM_EQ_DEPTH) +
 			QMC_ALIGN(sizeof(struct qm_aeqe) * QM_Q_DEPTH) +
 			QMC_ALIGN(sizeof(struct qm_sqc) * qm->qp_num) +
 			QMC_ALIGN(sizeof(struct qm_cqc) * qm->qp_num);
@@ -2299,7 +2306,7 @@ static int hisi_qm_memory_init(struct hisi_qm *qm)
 	if (!qm->qdma.va)
 		return -ENOMEM;
 
-	QM_INIT_BUF(qm, eqe, QM_Q_DEPTH);
+	QM_INIT_BUF(qm, eqe, QM_EQ_DEPTH);
 	QM_INIT_BUF(qm, aeqe, QM_Q_DEPTH);
 	QM_INIT_BUF(qm, sqc, qm->qp_num);
 	QM_INIT_BUF(qm, cqc, qm->qp_num);
@@ -2469,7 +2476,7 @@ static int qm_eq_ctx_cfg(struct hisi_qm *qm)
 	eqc->base_h = cpu_to_le32(upper_32_bits(qm->eqe_dma));
 	if (qm->ver == QM_HW_V1)
 		eqc->dw3 = cpu_to_le32(QM_EQE_AEQE_SIZE);
-	eqc->dw6 = cpu_to_le32((QM_Q_DEPTH - 1) | (1 << QM_EQC_PHASE_SHIFT));
+	eqc->dw6 = cpu_to_le32((QM_EQ_DEPTH - 1) | (1 << QM_EQC_PHASE_SHIFT));
 	ret = qm_mb(qm, QM_MB_CMD_EQC, eqc_dma, 0, 0);
 	dma_unmap_single(dev, eqc_dma, sizeof(struct qm_eqc), DMA_TO_DEVICE);
 	kfree(eqc);
-- 
2.7.4