From: Christophe JAILLET
Date: Tue, 7 Jun 2022 07:00:31 +0200
Subject: Re: [PATCH v2 5/5] crypto: aspeed: add HACE crypto driver
To: Neal Liu
Cc: linux-aspeed@lists.ozlabs.org, linux-crypto@vger.kernel.org,
 devicetree@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
 linux-kernel@vger.kernel.org,
 BMC-SW@aspeedtech.com, Herbert Xu, "David S. Miller", Rob Herring,
 Krzysztof Kozlowski, Joel Stanley, Andrew Jeffery, Johnny Huang
References: <20220606064935.1458903-1-neal_liu@aspeedtech.com>
 <20220606064935.1458903-6-neal_liu@aspeedtech.com>
X-Mailing-List: linux-crypto@vger.kernel.org

On 07/06/2022 at 05:53, Neal Liu wrote:
>> On 06/06/2022 at 08:49, Neal Liu wrote:
>>> Add HACE crypto driver to support symmetric-key encryption and
>>> decryption with multiple modes of operation.
>>>
>>> Signed-off-by: Neal Liu
>>> Signed-off-by: Johnny Huang
>>> ---
>>
>> [...]
>>
>>> +static int aspeed_sk_transfer_sg(struct aspeed_hace_dev *hace_dev)
>>> +{
>>> +	struct aspeed_engine_crypto *crypto_engine = &hace_dev->crypto_engine;
>>> +	struct device *dev = hace_dev->dev;
>>> +	struct aspeed_cipher_reqctx *rctx;
>>> +	struct skcipher_request *req;
>>> +
>>> +	CIPHER_DBG(hace_dev, "\n");
>>> +
>>> +	req = skcipher_request_cast(crypto_engine->areq);
>>> +	rctx = skcipher_request_ctx(req);
>>> +
>>> +	if (req->src == req->dst) {
>>> +		dma_unmap_sg(dev, req->src, rctx->src_nents, DMA_BIDIRECTIONAL);
>>> +
>>
>> Unneeded empty line.
>
> Okay!
>
>>
>>> +	} else {
>>> +		dma_unmap_sg(dev, req->src, rctx->src_nents, DMA_TO_DEVICE);
>>> +		dma_unmap_sg(dev, req->dst, rctx->dst_nents, DMA_FROM_DEVICE);
>>> +	}
>>> +
>>> +	return aspeed_sk_complete(hace_dev, 0);
>>> +}
>>> +
>>
>> [...]
>>
>>> +static int aspeed_sk_start_sg(struct aspeed_hace_dev *hace_dev)
>>> +{
>>> +	struct aspeed_engine_crypto *crypto_engine = &hace_dev->crypto_engine;
>>> +	struct aspeed_sg_list *src_list, *dst_list;
>>> +	dma_addr_t src_dma_addr, dst_dma_addr;
>>> +	struct aspeed_cipher_reqctx *rctx;
>>> +	struct skcipher_request *req;
>>> +	struct scatterlist *s;
>>> +	int src_sg_len;
>>> +	int dst_sg_len;
>>> +	int total, i;
>>> +	int rc;
>>> +
>>> +	CIPHER_DBG(hace_dev, "\n");
>>> +
>>> +	req = skcipher_request_cast(crypto_engine->areq);
>>> +	rctx = skcipher_request_ctx(req);
>>> +
>>> +	rctx->enc_cmd |= HACE_CMD_DES_SG_CTRL | HACE_CMD_SRC_SG_CTRL |
>>> +			 HACE_CMD_AES_KEY_HW_EXP | HACE_CMD_MBUS_REQ_SYNC_EN;
>>> +
>>> +	/* BIDIRECTIONAL */
>>> +	if (req->dst == req->src) {
>>> +		src_sg_len = dma_map_sg(hace_dev->dev, req->src,
>>> +					rctx->src_nents, DMA_BIDIRECTIONAL);
>>> +		dst_sg_len = src_sg_len;
>>> +		if (!src_sg_len) {
>>> +			dev_warn(hace_dev->dev, "dma_map_sg() src error\n");
>>> +			return -EINVAL;
>>> +		}
>>> +
>>> +	} else {
>>> +		src_sg_len = dma_map_sg(hace_dev->dev, req->src,
>>> +					rctx->src_nents, DMA_TO_DEVICE);
>>> +		if (!src_sg_len) {
>>> +			dev_warn(hace_dev->dev, "dma_map_sg() src error\n");
>>> +			return -EINVAL;
>>> +		}
>>> +
>>> +		dst_sg_len = dma_map_sg(hace_dev->dev, req->dst,
>>> +					rctx->dst_nents, DMA_FROM_DEVICE);
>>> +		if (!dst_sg_len) {
>>> +			dev_warn(hace_dev->dev, "dma_map_sg() dst error\n");
>>> +			rc = -EINVAL;
>>> +			goto free_req_src;
>>
>> Should we really call dma_unmap_sg() if dma_map_sg() fails?
>
> This error handling unmaps the buffer mapped above (req->src), not this
> buffer (req->dst). I think it should.

You are right, I misread it.
Sorry for the noise.
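(For anyone else following the thread: the convention is that each unwind
step releases only what was already set up before the failure. A condensed
sketch of the pattern from the patch above, not the driver's exact code:)

	/* Map src first; if mapping dst fails afterwards, only src is
	 * mapped, so the error path must undo only the src mapping. */
	src_sg_len = dma_map_sg(dev, req->src, rctx->src_nents, DMA_TO_DEVICE);
	if (!src_sg_len)
		return -EINVAL;		/* nothing mapped yet, nothing to undo */

	dst_sg_len = dma_map_sg(dev, req->dst, rctx->dst_nents, DMA_FROM_DEVICE);
	if (!dst_sg_len) {
		rc = -EINVAL;
		goto free_req_src;	/* undo the successful src mapping only */
	}
	...
free_req_src:
	dma_unmap_sg(dev, req->src, rctx->src_nents, DMA_TO_DEVICE);
	return rc;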
>
>>
>>> +		}
>>> +	}
>>> +
>>> +	src_list = (struct aspeed_sg_list *)crypto_engine->cipher_addr;
>>> +	src_dma_addr = crypto_engine->cipher_dma_addr;
>>> +	total = req->cryptlen;
>>> +
>>> +	for_each_sg(req->src, s, src_sg_len, i) {
>>> +		src_list[i].phy_addr = sg_dma_address(s);
>>> +
>>> +		/* last sg list */
>>> +		if (sg_dma_len(s) >= total) {
>>> +			src_list[i].len = total;
>>> +			src_list[i].len |= BIT(31);
>>> +			total = 0;
>>> +			break;
>>> +		}
>>> +
>>> +		src_list[i].len = sg_dma_len(s);
>>> +		total -= src_list[i].len;
>>> +	}
>>> +
>>> +	if (total != 0)
>>> +		return -EINVAL;
>>
>> goto free_req_src; ?
>
> Yes, I missed this part. I'll revise it in the next patch, thanks.

There is another one below...

>
>>
>>> +
>>> +	if (req->dst == req->src) {
>>> +		dst_list = src_list;
>>> +		dst_dma_addr = src_dma_addr;
>>> +
>>> +	} else {
>>> +		dst_list = (struct aspeed_sg_list *)crypto_engine->dst_sg_addr;
>>> +		dst_dma_addr = crypto_engine->dst_sg_dma_addr;
>>> +		total = req->cryptlen;
>>> +
>>> +		for_each_sg(req->dst, s, dst_sg_len, i) {
>>> +			dst_list[i].phy_addr = sg_dma_address(s);
>>> +
>>> +			/* last sg list */
>>> +			if (sg_dma_len(s) >= total) {
>>> +				dst_list[i].len = total;
>>> +				dst_list[i].len |= BIT(31);
>>> +				total = 0;
>>> +				break;
>>> +			}
>>> +
>>> +			dst_list[i].len = sg_dma_len(s);
>>> +			total -= dst_list[i].len;
>>> +		}
>>> +
>>> +		dst_list[dst_sg_len].phy_addr = 0;
>>> +		dst_list[dst_sg_len].len = 0;
>>> +	}
>>> +
>>> +	if (total != 0)
>>> +		return -EINVAL;

... here.

>>> +
>>> +	crypto_engine->resume = aspeed_sk_transfer_sg;
>>> +
>>> +	/* Dummy read for barriers */
>>> +	readl(src_list);
>>> +	readl(dst_list);
>>> +
>>> +	/* Trigger engines */
>>> +	ast_hace_write(hace_dev, src_dma_addr, ASPEED_HACE_SRC);
>>> +	ast_hace_write(hace_dev, dst_dma_addr, ASPEED_HACE_DEST);
>>> +	ast_hace_write(hace_dev, req->cryptlen, ASPEED_HACE_DATA_LEN);
>>> +	ast_hace_write(hace_dev, rctx->enc_cmd, ASPEED_HACE_CMD);
>>> +
>>> +	return -EINPROGRESS;
>>> +
>>> +free_req_src:
>>> +	dma_unmap_sg(hace_dev->dev, req->src, rctx->src_nents, DMA_TO_DEVICE);
>>> +
>>> +	return rc;
>>> +}
>>> +
>>
>> [...]
>>
>>> +static int aspeed_aes_setkey(struct crypto_skcipher *cipher, const u8 *key,
>>> +			     unsigned int keylen)
>>> +{
>>> +	struct aspeed_cipher_ctx *ctx = crypto_skcipher_ctx(cipher);
>>> +	struct aspeed_hace_dev *hace_dev = ctx->hace_dev;
>>> +	struct crypto_aes_ctx gen_aes_key;
>>> +
>>> +	CIPHER_DBG(hace_dev, "keylen: %d bits\n", (keylen * 8));
>>> +
>>> +	if (keylen != AES_KEYSIZE_128 && keylen != AES_KEYSIZE_192 &&
>>> +	    keylen != AES_KEYSIZE_256)
>>> +		return -EINVAL;
>>> +
>>> +	if (ctx->hace_dev->version == AST2500_VERSION) {
>>> +		aes_expandkey(&gen_aes_key, key, keylen);
>>> +		memcpy(ctx->key, gen_aes_key.key_enc, AES_MAX_KEYLENGTH);
>>> +
>>
>> Unneeded empty line
>
> Okay!
>
>>
>>> +	} else {
>>> +		memcpy(ctx->key, key, keylen);
>>> +	}
>>> +
>>> +	ctx->key_len = keylen;
>>> +
>>> +	return 0;
>>> +}
>>> +
>>
>> [...]
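(To make the two agreed fixes above concrete: a sketch of what both
"total != 0" checks in aspeed_sk_start_sg() could become, reusing the
patch's free_req_src label. Whether the second check additionally needs
to unmap req->dst when the buffers were mapped separately is worth
double-checking:)

	if (total != 0) {
		rc = -EINVAL;
		goto free_req_src;	/* unmap instead of leaking the mapping */
	}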
>>
>>> +	crypto_engine->cipher_ctx =
>>> +		dma_alloc_coherent(&pdev->dev,
>>> +				   PAGE_SIZE,
>>> +				   &crypto_engine->cipher_ctx_dma,
>>> +				   GFP_KERNEL);
>>> +	if (!crypto_engine->cipher_ctx) {
>>> +		dev_err(&pdev->dev, "Failed to allocate cipher ctx dma\n");
>>> +		rc = -ENOMEM;
>>> +		goto free_hash_src;
>>> +	}
>>> +
>>> +	crypto_engine->cipher_addr =
>>> +		dma_alloc_coherent(&pdev->dev,
>>> +				   ASPEED_CRYPTO_SRC_DMA_BUF_LEN,
>>> +				   &crypto_engine->cipher_dma_addr,
>>> +				   GFP_KERNEL);
>>> +	if (!crypto_engine->cipher_addr) {
>>> +		dev_err(&pdev->dev, "Failed to allocate cipher addr dma\n");
>>> +		rc = -ENOMEM;
>>> +		goto free_cipher_ctx;
>>> +	}
>>> +
>>> +	if (hace_dev->version == AST2600_VERSION) {
>>> +		crypto_engine->dst_sg_addr =
>>> +			dma_alloc_coherent(&pdev->dev,
>>> +					   ASPEED_CRYPTO_DST_DMA_BUF_LEN,
>>> +					   &crypto_engine->dst_sg_dma_addr,
>>> +					   GFP_KERNEL);
>>> +		if (!crypto_engine->dst_sg_addr) {
>>> +			dev_err(&pdev->dev, "Failed to allocate dst_sg dma\n");
>>> +			rc = -ENOMEM;
>>> +			goto free_cipher_addr;
>>> +		}
>>> +	}
>>> +
>>>  	rc = aspeed_hace_register(hace_dev);
>>>  	if (rc) {
>>>  		dev_err(&pdev->dev, "Failed to register algs, rc:0x%x\n", rc);
>>
>> I guess that the new dma_alloc_coherent() just a few lines above should
>> also be undone in the error handling path if aspeed_hace_register() fails?
>
> I'll remove the return value (rc) since it's useless here, so no error
> handling is needed for this part.
> I'll revise it in the next patch, thanks.
>
>>
>>> @@ -179,6 +282,18 @@ static int aspeed_hace_probe(struct platform_device *pdev)
>>>
>>>  	return 0;
>>>
>>> +free_cipher_addr:
>>> +	dma_free_coherent(&pdev->dev, ASPEED_CRYPTO_SRC_DMA_BUF_LEN,
>>> +			  crypto_engine->cipher_addr,
>>> +			  crypto_engine->cipher_dma_addr);
>>> +free_cipher_ctx:
>>> +	dma_free_coherent(&pdev->dev, PAGE_SIZE,
>>> +			  crypto_engine->cipher_ctx,
>>> +			  crypto_engine->cipher_ctx_dma);
>>> +free_hash_src:
>>> +	dma_free_coherent(&pdev->dev, ASPEED_HASH_SRC_DMA_BUF_LEN,
>>> +			  hash_engine->ahash_src_addr,
>>> +			  hash_engine->ahash_src_dma_addr);
>>>  end:
>>>  	clk_disable_unprepare(hace_dev->clk);
>>>  	return rc;
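(For reference, if the return value of aspeed_hace_register() were kept
rather than dropped as Neal proposes, the missing unwind would presumably
be one more rung on the existing goto ladder, e.g. with a hypothetical
free_dst_sg label; a sketch only, not the actual patch:)

	rc = aspeed_hace_register(hace_dev);
	if (rc) {
		dev_err(&pdev->dev, "Failed to register algs, rc:0x%x\n", rc);
		goto free_dst_sg;
	}
	...
free_dst_sg:
	if (hace_dev->version == AST2600_VERSION)
		dma_free_coherent(&pdev->dev, ASPEED_CRYPTO_DST_DMA_BUF_LEN,
				  crypto_engine->dst_sg_addr,
				  crypto_engine->dst_sg_dma_addr);
	/* then fall through to free_cipher_addr / free_cipher_ctx /
	 * free_hash_src exactly as in the hunk quoted above */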