From: Hui Tang
Subject: [PATCH v2 1/8] crypto: hisilicon/hpre - the macro 'HPRE_ADDR' expands
Date: Wed, 12 May 2021 14:27:04 +0800
Message-ID: <1620800831-53346-2-git-send-email-tanghui20@huawei.com>
In-Reply-To: <1620800831-53346-1-git-send-email-tanghui20@huawei.com>
References: <1620800831-53346-1-git-send-email-tanghui20@huawei.com>
X-Mailer: git-send-email 2.8.1
X-Mailing-List: linux-crypto@vger.kernel.org

The macro 'HPRE_ADDR' is unnecessary, so expand it at each call site and
remove its definition.
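The change is mechanical; taking one call site from disable_flr_of_bme()
in the diff below as an example, the open-coded address calculation
replaces the macro:

	/* before: address computed through the helper macro */
	#define HPRE_ADDR(qm, offset)	((qm)->io_base + (offset))
	writel(val, HPRE_ADDR(qm, QM_PEH_AXUSER_CFG));

	/* after: the register offset is added to the MMIO base directly */
	writel(val, qm->io_base + QM_PEH_AXUSER_CFG);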
Signed-off-by: Hui Tang
---
 drivers/crypto/hisilicon/hpre/hpre_main.c | 59 +++++++++++++++----------------
 1 file changed, 29 insertions(+), 30 deletions(-)

diff --git a/drivers/crypto/hisilicon/hpre/hpre_main.c b/drivers/crypto/hisilicon/hpre/hpre_main.c
index 046bc96..c914e00 100644
--- a/drivers/crypto/hisilicon/hpre/hpre_main.c
+++ b/drivers/crypto/hisilicon/hpre/hpre_main.c
@@ -69,7 +69,6 @@
 #define HPRE_DBGFS_VAL_MAX_LEN		20
 #define HPRE_PCI_DEVICE_ID		0xa258
 #define HPRE_PCI_VF_DEVICE_ID		0xa259
-#define HPRE_ADDR(qm, offset)		((qm)->io_base + (offset))
 #define HPRE_QM_USR_CFG_MASK		0xfffffffe
 #define HPRE_QM_AXI_CFG_MASK		0xffff
 #define HPRE_QM_VFG_AX_MASK		0xff
@@ -302,10 +301,10 @@ static int hpre_set_cluster(struct hisi_qm *qm)

 		/* clusters initiating */
 		writel(cluster_core_mask,
-		       HPRE_ADDR(qm, offset + HPRE_CORE_ENB));
-		writel(0x1, HPRE_ADDR(qm, offset + HPRE_CORE_INI_CFG));
-		ret = readl_relaxed_poll_timeout(HPRE_ADDR(qm, offset +
-					HPRE_CORE_INI_STATUS), val,
+		       qm->io_base + offset + HPRE_CORE_ENB);
+		writel(0x1, qm->io_base + offset + HPRE_CORE_INI_CFG);
+		ret = readl_relaxed_poll_timeout(qm->io_base + offset +
+					HPRE_CORE_INI_STATUS, val,
 					((val & cluster_core_mask) ==
 					cluster_core_mask),
 					HPRE_REG_RD_INTVRL_US,
@@ -329,11 +328,11 @@ static void disable_flr_of_bme(struct hisi_qm *qm)
 {
 	u32 val;

-	val = readl(HPRE_ADDR(qm, QM_PEH_AXUSER_CFG));
+	val = readl(qm->io_base + QM_PEH_AXUSER_CFG);
 	val &= ~(HPRE_QM_BME_FLR | HPRE_QM_SRIOV_FLR);
 	val |= HPRE_QM_PM_FLR;
-	writel(val, HPRE_ADDR(qm, QM_PEH_AXUSER_CFG));
-	writel(PEH_AXUSER_CFG_ENABLE, HPRE_ADDR(qm, QM_PEH_AXUSER_CFG_ENABLE));
+	writel(val, qm->io_base + QM_PEH_AXUSER_CFG);
+	writel(PEH_AXUSER_CFG_ENABLE, qm->io_base + QM_PEH_AXUSER_CFG_ENABLE);
 }

 static int hpre_set_user_domain_and_cache(struct hisi_qm *qm)
@@ -342,33 +341,33 @@ static int hpre_set_user_domain_and_cache(struct hisi_qm *qm)
 	u32 val;
 	int ret;

-	writel(HPRE_QM_USR_CFG_MASK, HPRE_ADDR(qm, QM_ARUSER_M_CFG_ENABLE));
-	writel(HPRE_QM_USR_CFG_MASK, HPRE_ADDR(qm, QM_AWUSER_M_CFG_ENABLE));
-	writel_relaxed(HPRE_QM_AXI_CFG_MASK, HPRE_ADDR(qm, QM_AXI_M_CFG));
+	writel(HPRE_QM_USR_CFG_MASK, qm->io_base + QM_ARUSER_M_CFG_ENABLE);
+	writel(HPRE_QM_USR_CFG_MASK, qm->io_base + QM_AWUSER_M_CFG_ENABLE);
+	writel_relaxed(HPRE_QM_AXI_CFG_MASK, qm->io_base + QM_AXI_M_CFG);

 	/* HPRE need more time, we close this interrupt */
-	val = readl_relaxed(HPRE_ADDR(qm, HPRE_QM_ABNML_INT_MASK));
+	val = readl_relaxed(qm->io_base + HPRE_QM_ABNML_INT_MASK);
 	val |= BIT(HPRE_TIMEOUT_ABNML_BIT);
-	writel_relaxed(val, HPRE_ADDR(qm, HPRE_QM_ABNML_INT_MASK));
+	writel_relaxed(val, qm->io_base + HPRE_QM_ABNML_INT_MASK);

 	if (qm->ver >= QM_HW_V3)
 		writel(HPRE_RSA_ENB | HPRE_ECC_ENB,
-		       HPRE_ADDR(qm, HPRE_TYPES_ENB));
+		       qm->io_base + HPRE_TYPES_ENB);
 	else
-		writel(HPRE_RSA_ENB, HPRE_ADDR(qm, HPRE_TYPES_ENB));
-
-	writel(HPRE_QM_VFG_AX_MASK, HPRE_ADDR(qm, HPRE_VFG_AXCACHE));
-	writel(0x0, HPRE_ADDR(qm, HPRE_BD_ENDIAN));
-	writel(0x0, HPRE_ADDR(qm, HPRE_INT_MASK));
-	writel(0x0, HPRE_ADDR(qm, HPRE_POISON_BYPASS));
-	writel(0x0, HPRE_ADDR(qm, HPRE_COMM_CNT_CLR_CE));
-	writel(0x0, HPRE_ADDR(qm, HPRE_ECC_BYPASS));
-
-	writel(HPRE_BD_USR_MASK, HPRE_ADDR(qm, HPRE_BD_ARUSR_CFG));
-	writel(HPRE_BD_USR_MASK, HPRE_ADDR(qm, HPRE_BD_AWUSR_CFG));
-	writel(0x1, HPRE_ADDR(qm, HPRE_RDCHN_INI_CFG));
-	ret = readl_relaxed_poll_timeout(HPRE_ADDR(qm, HPRE_RDCHN_INI_ST), val,
-					 val & BIT(0),
+		writel(HPRE_RSA_ENB, qm->io_base + HPRE_TYPES_ENB);
+
+	writel(HPRE_QM_VFG_AX_MASK, qm->io_base + HPRE_VFG_AXCACHE);
+	writel(0x0, qm->io_base + HPRE_BD_ENDIAN);
+	writel(0x0, qm->io_base + HPRE_INT_MASK);
+	writel(0x0, qm->io_base + HPRE_POISON_BYPASS);
+	writel(0x0, qm->io_base + HPRE_COMM_CNT_CLR_CE);
+	writel(0x0, qm->io_base + HPRE_ECC_BYPASS);
+
+	writel(HPRE_BD_USR_MASK, qm->io_base + HPRE_BD_ARUSR_CFG);
+	writel(HPRE_BD_USR_MASK, qm->io_base + HPRE_BD_AWUSR_CFG);
+	writel(0x1, qm->io_base + HPRE_RDCHN_INI_CFG);
+	ret = readl_relaxed_poll_timeout(qm->io_base + HPRE_RDCHN_INI_ST, val,
+					 val & BIT(0),
 			HPRE_REG_RD_INTVRL_US,
 			HPRE_REG_RD_TMOUT_US);
 	if (ret) {
@@ -802,9 +801,9 @@ static void hpre_open_axi_master_ooo(struct hisi_qm *qm)

 	value = readl(qm->io_base + HPRE_AM_OOO_SHUTDOWN_ENB);
 	writel(value & ~HPRE_AM_OOO_SHUTDOWN_ENABLE,
-	       HPRE_ADDR(qm, HPRE_AM_OOO_SHUTDOWN_ENB));
+	       qm->io_base + HPRE_AM_OOO_SHUTDOWN_ENB);
 	writel(value | HPRE_AM_OOO_SHUTDOWN_ENABLE,
-	       HPRE_ADDR(qm, HPRE_AM_OOO_SHUTDOWN_ENB));
+	       qm->io_base + HPRE_AM_OOO_SHUTDOWN_ENB);
 }

 static void hpre_err_info_init(struct hisi_qm *qm)
-- 
2.8.1