From: Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
To: Herbert Xu, "David S. Miller", Eric Biggers, Gilad Ben-Yossef,
    Ard Biesheuvel, "Markku-Juhani O. Saarinen", Jussi Kivilinna,
    Catalin Marinas, Will Deacon, Thomas Gleixner, Ingo Molnar,
    Borislav Petkov, "H. Peter Anvin", x86@kernel.org,
    linux-crypto@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
    linux-kernel@vger.kernel.org, Jia Zhang, YiLin Li
Li" Cc: Tianjia Zhang Subject: [PATCH v3 2/4] crypto: arm64/sm4-ce - Make dependent on sm4 library instead of sm4-generic Date: Tue, 20 Jul 2021 11:46:40 +0800 Message-Id: <20210720034642.19230-3-tianjia.zhang@linux.alibaba.com> X-Mailer: git-send-email 2.19.1.3.ge56e4f7 In-Reply-To: <20210720034642.19230-1-tianjia.zhang@linux.alibaba.com> References: <20210720034642.19230-1-tianjia.zhang@linux.alibaba.com> MIME-Version: 1.0 Content-Transfer-Encoding: 8bit Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org SM4 library is abstracted from sm4-generic algorithm, sm4-ce can depend on the SM4 library instead of sm4-generic, and some functions in sm4-generic do not need to be exported. Signed-off-by: Tianjia Zhang --- arch/arm64/crypto/Kconfig | 2 +- arch/arm64/crypto/sm4-ce-glue.c | 20 ++++++++++++++------ crypto/sm4_generic.c | 27 ++++++++++++--------------- include/crypto/sm4.h | 9 ++------- lib/crypto/sm4.c | 2 +- 5 files changed, 30 insertions(+), 30 deletions(-) diff --git a/arch/arm64/crypto/Kconfig b/arch/arm64/crypto/Kconfig index b8eb0453123d..55f19450091b 100644 --- a/arch/arm64/crypto/Kconfig +++ b/arch/arm64/crypto/Kconfig @@ -51,7 +51,7 @@ config CRYPTO_SM4_ARM64_CE tristate "SM4 symmetric cipher (ARMv8.2 Crypto Extensions)" depends on KERNEL_MODE_NEON select CRYPTO_ALGAPI - select CRYPTO_SM4 + select CRYPTO_LIB_SM4 config CRYPTO_GHASH_ARM64_CE tristate "GHASH/AES-GCM using ARMv8 Crypto Extensions" diff --git a/arch/arm64/crypto/sm4-ce-glue.c b/arch/arm64/crypto/sm4-ce-glue.c index 2754c875d39c..9c93cfc4841b 100644 --- a/arch/arm64/crypto/sm4-ce-glue.c +++ b/arch/arm64/crypto/sm4-ce-glue.c @@ -17,12 +17,20 @@ MODULE_LICENSE("GPL v2"); asmlinkage void sm4_ce_do_crypt(const u32 *rk, void *out, const void *in); +static int sm4_ce_setkey(struct crypto_tfm *tfm, const u8 *key, + unsigned int key_len) +{ + struct sm4_ctx *ctx = crypto_tfm_ctx(tfm); + + return sm4_expandkey(ctx, key, key_len); +} + static void sm4_ce_encrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in) { - const struct crypto_sm4_ctx *ctx = crypto_tfm_ctx(tfm); + const struct sm4_ctx *ctx = crypto_tfm_ctx(tfm); if (!crypto_simd_usable()) { - crypto_sm4_encrypt(tfm, out, in); + sm4_crypt_block(ctx->rkey_enc, out, in); } else { kernel_neon_begin(); sm4_ce_do_crypt(ctx->rkey_enc, out, in); @@ -32,10 +40,10 @@ static void sm4_ce_encrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in) static void sm4_ce_decrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in) { - const struct crypto_sm4_ctx *ctx = crypto_tfm_ctx(tfm); + const struct sm4_ctx *ctx = crypto_tfm_ctx(tfm); if (!crypto_simd_usable()) { - crypto_sm4_decrypt(tfm, out, in); + sm4_crypt_block(ctx->rkey_dec, out, in); } else { kernel_neon_begin(); sm4_ce_do_crypt(ctx->rkey_dec, out, in); @@ -49,12 +57,12 @@ static struct crypto_alg sm4_ce_alg = { .cra_priority = 200, .cra_flags = CRYPTO_ALG_TYPE_CIPHER, .cra_blocksize = SM4_BLOCK_SIZE, - .cra_ctxsize = sizeof(struct crypto_sm4_ctx), + .cra_ctxsize = sizeof(struct sm4_ctx), .cra_module = THIS_MODULE, .cra_u.cipher = { .cia_min_keysize = SM4_KEY_SIZE, .cia_max_keysize = SM4_KEY_SIZE, - .cia_setkey = crypto_sm4_set_key, + .cia_setkey = sm4_ce_setkey, .cia_encrypt = sm4_ce_encrypt, .cia_decrypt = sm4_ce_decrypt } diff --git a/crypto/sm4_generic.c b/crypto/sm4_generic.c index d19d01f852a9..4a6480a27fee 100644 --- a/crypto/sm4_generic.c +++ b/crypto/sm4_generic.c @@ -17,45 +17,42 @@ #include /** - * crypto_sm4_set_key - Set the SM4 key. + * sm4_setkey - Set the SM4 key. 
  * @tfm:	The %crypto_tfm that is used in the context.
  * @in_key:	The input key.
  * @key_len:	The size of the key.
  *
  * This function uses sm4_expandkey() to expand the key.
- * &crypto_sm4_ctx _must_ be the private data embedded in @tfm which is
+ * &sm4_ctx _must_ be the private data embedded in @tfm which is
  * retrieved with crypto_tfm_ctx().
  *
  * Return: 0 on success; -EINVAL on failure (only happens for bad key lengths)
  */
-int crypto_sm4_set_key(struct crypto_tfm *tfm, const u8 *in_key,
+static int sm4_setkey(struct crypto_tfm *tfm, const u8 *in_key,
 		      unsigned int key_len)
 {
-	struct crypto_sm4_ctx *ctx = crypto_tfm_ctx(tfm);
+	struct sm4_ctx *ctx = crypto_tfm_ctx(tfm);
 
 	return sm4_expandkey(ctx, in_key, key_len);
 }
-EXPORT_SYMBOL_GPL(crypto_sm4_set_key);
 
 /* encrypt a block of text */
 
-void crypto_sm4_encrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in)
+static void sm4_encrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in)
 {
-	const struct crypto_sm4_ctx *ctx = crypto_tfm_ctx(tfm);
+	const struct sm4_ctx *ctx = crypto_tfm_ctx(tfm);
 
 	sm4_crypt_block(ctx->rkey_enc, out, in);
 }
-EXPORT_SYMBOL_GPL(crypto_sm4_encrypt);
 
 /* decrypt a block of text */
 
-void crypto_sm4_decrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in)
+static void sm4_decrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in)
 {
-	const struct crypto_sm4_ctx *ctx = crypto_tfm_ctx(tfm);
+	const struct sm4_ctx *ctx = crypto_tfm_ctx(tfm);
 
 	sm4_crypt_block(ctx->rkey_dec, out, in);
 }
-EXPORT_SYMBOL_GPL(crypto_sm4_decrypt);
 
 static struct crypto_alg sm4_alg = {
 	.cra_name = "sm4",
@@ -63,15 +60,15 @@ static struct crypto_alg sm4_alg = {
 	.cra_priority = 100,
 	.cra_flags = CRYPTO_ALG_TYPE_CIPHER,
 	.cra_blocksize = SM4_BLOCK_SIZE,
-	.cra_ctxsize = sizeof(struct crypto_sm4_ctx),
+	.cra_ctxsize = sizeof(struct sm4_ctx),
 	.cra_module = THIS_MODULE,
 	.cra_u = {
 		.cipher = {
 			.cia_min_keysize = SM4_KEY_SIZE,
 			.cia_max_keysize = SM4_KEY_SIZE,
-			.cia_setkey = crypto_sm4_set_key,
-			.cia_encrypt = crypto_sm4_encrypt,
-			.cia_decrypt = crypto_sm4_decrypt
+			.cia_setkey = sm4_setkey,
+			.cia_encrypt = sm4_encrypt,
+			.cia_decrypt = sm4_decrypt
 		}
 	}
 };
diff --git a/include/crypto/sm4.h b/include/crypto/sm4.h
index 06322325f862..709f286e7b25 100644
--- a/include/crypto/sm4.h
+++ b/include/crypto/sm4.h
@@ -16,7 +16,7 @@
 #define SM4_BLOCK_SIZE	16
 #define SM4_RKEY_WORDS	32
 
-struct crypto_sm4_ctx {
+struct sm4_ctx {
 	u32 rkey_enc[SM4_RKEY_WORDS];
 	u32 rkey_dec[SM4_RKEY_WORDS];
 };
@@ -30,7 +30,7 @@ struct crypto_sm4_ctx {
  * Returns 0 on success. The function fails only if an invalid key size (or
  * pointer) is supplied.
  */
-int sm4_expandkey(struct crypto_sm4_ctx *ctx, const u8 *in_key,
+int sm4_expandkey(struct sm4_ctx *ctx, const u8 *in_key,
 		  unsigned int key_len);
 
 /**
@@ -41,9 +41,4 @@ int sm4_expandkey(struct crypto_sm4_ctx *ctx, const u8 *in_key,
  */
 void sm4_crypt_block(const u32 *rk, u8 *out, const u8 *in);
 
-int crypto_sm4_set_key(struct crypto_tfm *tfm, const u8 *in_key,
-		       unsigned int key_len);
-void crypto_sm4_encrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in);
-void crypto_sm4_decrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in);
-
 #endif
diff --git a/lib/crypto/sm4.c b/lib/crypto/sm4.c
index 5fbf8c741a2f..633b59fed9db 100644
--- a/lib/crypto/sm4.c
+++ b/lib/crypto/sm4.c
@@ -108,7 +108,7 @@ static inline u32 sm4_round(u32 x0, u32 x1, u32 x2, u32 x3, u32 rk)
  * Returns 0 on success. The function fails only if an invalid key size (or
  * pointer) is supplied.
  */
-int sm4_expandkey(struct crypto_sm4_ctx *ctx, const u8 *in_key,
+int sm4_expandkey(struct sm4_ctx *ctx, const u8 *in_key,
 		  unsigned int key_len)
 {
 	u32 rk[4];
-- 
2.19.1.3.ge56e4f7
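
For anyone wiring another SM4 implementation on top of this series: the
sketch below (not part of the patch) shows a minimal single-block cipher
glue that consumes only the library interface from <crypto/sm4.h>, that
is sm4_expandkey(), sm4_crypt_block() and struct sm4_ctx, instead of the
crypto_sm4_* helpers removed above. The sm4_demo_* names, driver name
and priority value are made up for illustration; a real driver would
also select CRYPTO_ALGAPI and CRYPTO_LIB_SM4 in Kconfig, as sm4-ce does
after this patch.

/*
 * Illustrative only: a minimal SM4 single-block cipher built on the
 * SM4 library API (sm4_expandkey, sm4_crypt_block, struct sm4_ctx).
 * The sm4_demo_* names, driver name and priority are hypothetical.
 */
#include <crypto/sm4.h>
#include <linux/crypto.h>
#include <linux/module.h>

static int sm4_demo_setkey(struct crypto_tfm *tfm, const u8 *key,
			   unsigned int key_len)
{
	struct sm4_ctx *ctx = crypto_tfm_ctx(tfm);

	/* Fills ctx->rkey_enc/ctx->rkey_dec; -EINVAL on a bad key length. */
	return sm4_expandkey(ctx, key, key_len);
}

static void sm4_demo_encrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in)
{
	const struct sm4_ctx *ctx = crypto_tfm_ctx(tfm);

	sm4_crypt_block(ctx->rkey_enc, out, in);
}

static void sm4_demo_decrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in)
{
	const struct sm4_ctx *ctx = crypto_tfm_ctx(tfm);

	/* Same primitive, run with the decryption round keys prepared by
	 * sm4_expandkey(). */
	sm4_crypt_block(ctx->rkey_dec, out, in);
}

static struct crypto_alg sm4_demo_alg = {
	.cra_name = "sm4",
	.cra_driver_name = "sm4-demo",
	.cra_priority = 50,
	.cra_flags = CRYPTO_ALG_TYPE_CIPHER,
	.cra_blocksize = SM4_BLOCK_SIZE,
	.cra_ctxsize = sizeof(struct sm4_ctx),
	.cra_module = THIS_MODULE,
	.cra_u.cipher = {
		.cia_min_keysize = SM4_KEY_SIZE,
		.cia_max_keysize = SM4_KEY_SIZE,
		.cia_setkey = sm4_demo_setkey,
		.cia_encrypt = sm4_demo_encrypt,
		.cia_decrypt = sm4_demo_decrypt
	}
};

static int __init sm4_demo_mod_init(void)
{
	return crypto_register_alg(&sm4_demo_alg);
}

static void __exit sm4_demo_mod_exit(void)
{
	crypto_unregister_alg(&sm4_demo_alg);
}

module_init(sm4_demo_mod_init);
module_exit(sm4_demo_mod_exit);

MODULE_DESCRIPTION("Illustrative SM4 cipher on top of the SM4 library");
MODULE_LICENSE("GPL v2");

The sm4-ce glue in this patch follows the same shape, simply swapping in
the CE instruction path when the NEON unit is usable.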