Message-ID: <56f0c3cb-c17f-6bc1-f621-09e0c2e5e62f@wanadoo.fr>
Date: Wed, 1 Jun 2022 22:30:00 +0200
Subject: Re: [PATCH 5/5] crypto: aspeed: add HACE crypto driver
From: Christophe JAILLET
To: Neal Liu, Herbert Xu, "David S. Miller", Rob Herring, Krzysztof Kozlowski, Joel Stanley, Andrew Jeffery, Johnny Huang
Cc: linux-aspeed@lists.ozlabs.org, linux-crypto@vger.kernel.org, devicetree@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org
References: <20220601054204.1522976-1-neal_liu@aspeedtech.com> <20220601054204.1522976-6-neal_liu@aspeedtech.com>
In-Reply-To: <20220601054204.1522976-6-neal_liu@aspeedtech.com>
X-Mailing-List: linux-crypto@vger.kernel.org

On 01/06/2022 07:42, Neal Liu wrote:
> Add HACE crypto driver to support symmetric-key
> encryption and decryption with multiple modes of
> operation.
>
> Signed-off-by: Neal Liu
> Signed-off-by: Johnny Huang
> ---
>  drivers/crypto/aspeed/Kconfig              |   16 +
>  drivers/crypto/aspeed/Makefile             |    2 +
>  drivers/crypto/aspeed/aspeed-hace-crypto.c | 1019 ++++++++++++++++++++
>  drivers/crypto/aspeed/aspeed-hace.c        |  101 +-
>  drivers/crypto/aspeed/aspeed-hace.h        |  107 ++
>  5 files changed, 1242 insertions(+), 3 deletions(-)
>  create mode 100644 drivers/crypto/aspeed/aspeed-hace-crypto.c
>
> diff --git a/drivers/crypto/aspeed/Kconfig b/drivers/crypto/aspeed/Kconfig
> index 17b800286a51..5e4d18288bf1 100644

[...]
> +int aspeed_register_hace_crypto_algs(struct aspeed_hace_dev *hace_dev)
> +{
> +	int rc, i;
> +
> +	for (i = 0; i < ARRAY_SIZE(aspeed_crypto_algs); i++) {
> +		aspeed_crypto_algs[i].hace_dev = hace_dev;
> +		rc = crypto_register_skcipher(&aspeed_crypto_algs[i].alg.skcipher);
> +		if (rc)
> +			return rc;
> +	}
> +
> +	if (hace_dev->version == AST2600_VERSION) {
> +		for (i = 0; i < ARRAY_SIZE(aspeed_crypto_algs_g6); i++) {
> +			aspeed_crypto_algs_g6[i].hace_dev = hace_dev;
> +			rc = crypto_register_skcipher(
> +				&aspeed_crypto_algs_g6[i].alg.skcipher);
> +			if (rc)
> +				return rc;
> +		}
> +	}

Should there be some kind of error handling here, in order to undo things
already done if an error occurs?

> +
> +	return 0;
> +}
> diff --git a/drivers/crypto/aspeed/aspeed-hace.c b/drivers/crypto/aspeed/aspeed-hace.c
> index f25b13d120e8..f7f90c230843 100644
> --- a/drivers/crypto/aspeed/aspeed-hace.c
> +++ b/drivers/crypto/aspeed/aspeed-hace.c
> @@ -40,10 +40,30 @@ void __weak aspeed_unregister_hace_hash_algs(struct aspeed_hace_dev *hace_dev)
>  	pr_warn("%s: Not supported yet\n", __func__);
>  }
>
> +/* Weak function for HACE crypto */
> +int __weak aspeed_hace_crypto_handle_queue(struct aspeed_hace_dev *hace_dev,
> +					   struct crypto_async_request *new_areq)
> +{
> +	pr_warn("%s: Not supported yet\n", __func__);
> +	return -EINVAL;
> +}
> +
> +int __weak aspeed_register_hace_crypto_algs(struct aspeed_hace_dev *hace_dev)
> +{
> +	pr_warn("%s: Not supported yet\n", __func__);
> +	return -EINVAL;
> +}
> +
> +void __weak aspeed_unregister_hace_crypto_algs(struct aspeed_hace_dev *hace_dev)
> +{
> +	pr_warn("%s: Not supported yet\n", __func__);
> +}
> +
>  /* HACE interrupt service routine */
>  static irqreturn_t aspeed_hace_irq(int irq, void *dev)
>  {
>  	struct aspeed_hace_dev *hace_dev = (struct aspeed_hace_dev *)dev;
> +	struct aspeed_engine_crypto *crypto_engine = &hace_dev->crypto_engine;
>  	struct aspeed_engine_hash *hash_engine = &hace_dev->hash_engine;
>  	u32 sts;
>
> @@ -56,12 +76,36 @@ static irqreturn_t aspeed_hace_irq(int irq, void *dev)
>  	if (hash_engine->flags & CRYPTO_FLAGS_BUSY)
>  		tasklet_schedule(&hash_engine->done_task);
>  	else
> -		dev_warn(hace_dev->dev, "no active requests.\n");
> +		dev_warn(hace_dev->dev, "HASH no active requests.\n");

To reduce diff, maybe this could already be part of patch 1/5?

> +	}
> +
> +	if (sts & HACE_CRYPTO_ISR) {
> +		if (crypto_engine->flags & CRYPTO_FLAGS_BUSY)
> +			tasklet_schedule(&crypto_engine->done_task);
> +		else
> +			dev_warn(hace_dev->dev, "CRYPTO no active requests.\n");
>  	}
>
>  	return IRQ_HANDLED;
>  }
>
> +static void aspeed_hace_cryptro_done_task(unsigned long data)
> +{
> +	struct aspeed_hace_dev *hace_dev = (struct aspeed_hace_dev *)data;
> +	struct aspeed_engine_crypto *crypto_engine;
> +
> +	crypto_engine = &hace_dev->crypto_engine;
> +	crypto_engine->is_async = true;
> +	crypto_engine->resume(hace_dev);
> +}
> +
> +static void aspeed_hace_crypto_queue_task(unsigned long data)
> +{
> +	struct aspeed_hace_dev *hace_dev = (struct aspeed_hace_dev *)data;
> +
> +	aspeed_hace_crypto_handle_queue(hace_dev, NULL);
> +}
> +
>  static void aspeed_hace_hash_done_task(unsigned long data)
>  {
>  	struct aspeed_hace_dev *hace_dev = (struct aspeed_hace_dev *)data;
> @@ -79,12 +123,27 @@ static void aspeed_hace_hash_queue_task(unsigned long data)
>
>  static int aspeed_hace_register(struct aspeed_hace_dev *hace_dev)
>  {
> -	return aspeed_register_hace_hash_algs(hace_dev);
> +	int rc1, rc2;
> +
> +	rc1 = aspeed_register_hace_hash_algs(hace_dev);
> +	if (rc1) {
> +		HACE_DBG(hace_dev, "Failed to register hash alg, rc:0x%x\n",
> +			 rc1);
> +	}
> +
> +	rc2 = aspeed_register_hace_crypto_algs(hace_dev);
> +	if (rc2) {
> +		HACE_DBG(hace_dev, "Failed to register crypto alg, rc:0x%x\n",
> +			 rc2);
> +	}
> +
> +	return rc1 + rc2;

This looks odd. The returned error would be meaningless if both rc1 and
rc2 are non-zero.
>  }
>
>  static void aspeed_hace_unregister(struct aspeed_hace_dev *hace_dev)
>  {
>  	aspeed_unregister_hace_hash_algs(hace_dev);
> +	aspeed_unregister_hace_crypto_algs(hace_dev);
>  }
>
>  static const struct of_device_id aspeed_hace_of_matches[] = {
> @@ -95,6 +154,7 @@ static const struct of_device_id aspeed_hace_of_matches[] = {
>
>  static int aspeed_hace_probe(struct platform_device *pdev)
>  {
> +	struct aspeed_engine_crypto *crypto_engine;
>  	const struct of_device_id *hace_dev_id;
>  	struct aspeed_engine_hash *hash_engine;
>  	struct aspeed_hace_dev *hace_dev;
> @@ -115,6 +175,7 @@ static int aspeed_hace_probe(struct platform_device *pdev)
>  	hace_dev->dev = &pdev->dev;
>  	hace_dev->version = (unsigned long)hace_dev_id->data;
>  	hash_engine = &hace_dev->hash_engine;
> +	crypto_engine = &hace_dev->crypto_engine;
>
>  	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
>
> @@ -127,6 +188,13 @@ static int aspeed_hace_probe(struct platform_device *pdev)
>  			(unsigned long)hace_dev);
>  	crypto_init_queue(&hash_engine->queue, ASPEED_HASH_QUEUE_LENGTH);
>
> +	spin_lock_init(&crypto_engine->lock);
> +	tasklet_init(&crypto_engine->done_task, aspeed_hace_cryptro_done_task,
> +		     (unsigned long)hace_dev);
> +	tasklet_init(&crypto_engine->queue_task, aspeed_hace_crypto_queue_task,
> +		     (unsigned long)hace_dev);
> +	crypto_init_queue(&crypto_engine->queue, ASPEED_HASH_QUEUE_LENGTH);
> +
>  	hace_dev->regs = devm_ioremap_resource(&pdev->dev, res);
>  	if (!hace_dev->regs) {
>  		dev_err(&pdev->dev, "Failed to map resources\n");
> @@ -168,9 +236,36 @@ static int aspeed_hace_probe(struct platform_device *pdev)
>  		return -ENOMEM;
>  	}
>
> +	crypto_engine->cipher_ctx =
> +		dma_alloc_coherent(&pdev->dev,
> +				   PAGE_SIZE,
> +				   &crypto_engine->cipher_ctx_dma,
> +				   GFP_KERNEL);

Should all these dma_alloc_coherent() calls be undone in the error handling
path of the probe and in the .remove function?

If applicable, maybe dmam_alloc_coherent() would ease the releasing of
resources?
> +	crypto_engine->cipher_addr =
> +		dma_alloc_coherent(&pdev->dev,
> +				   ASPEED_CRYPTO_SRC_DMA_BUF_LEN,
> +				   &crypto_engine->cipher_dma_addr,
> +				   GFP_KERNEL);
> +	if (!crypto_engine->cipher_ctx || !crypto_engine->cipher_addr) {
> +		dev_err(&pdev->dev, "Failed to allocate cipher dma\n");
> +		return -ENOMEM;
> +	}
> +
> +	if (hace_dev->version == AST2600_VERSION) {
> +		crypto_engine->dst_sg_addr =
> +			dma_alloc_coherent(&pdev->dev,
> +					   ASPEED_CRYPTO_DST_DMA_BUF_LEN,
> +					   &crypto_engine->dst_sg_dma_addr,
> +					   GFP_KERNEL);
> +		if (!crypto_engine->dst_sg_addr) {
> +			dev_err(&pdev->dev, "Failed to allocate dst_sg dma\n");
> +			return -ENOMEM;
> +		}
> +	}
> +
>  	rc = aspeed_hace_register(hace_dev);
>  	if (rc) {
> -		dev_err(&pdev->dev, "Failed to register hash alg, rc:0x%x\n", rc);
> +		dev_err(&pdev->dev, "Failed to register algs, rc:0x%x\n", rc);

To reduce diff, maybe this could already be part of patch 1/5?

>  		rc = 0;
>  	}
>

[...]