2022-09-26 09:46:53

by Tianjia Zhang

Subject: [PATCH 00/16] Optimizing SM3 and SM4 algorithms using NEON/CE/SVE instructions

This series of patches uses different instruction sets to optimize
the SM3 and SM4 algorithms, as well as several block cipher modes
built on SM4.

patch 1-2: NEON instruction set optimization for SM3
patch 3: refactor and simplify the SM4 NEON implementation
patch 4-5: add tests for the new SM4 modes
patch 6-8: refactor and simplify the SM4 CE implementation
patch 9-12: CE-accelerated implementation of SM4 CTS/XTS/ESSIV
patch 13: CE-accelerated implementation of SM4 CMAC/XCBC/CBCMAC
patch 14-15: CE-accelerated implementation of SM4 CCM/GCM
patch 16: SM4 ARMv9 SVE cryptography acceleration implementation


Tianjia Zhang (16):
crypto: arm64/sm3 - raise the priority of the CE implementation
crypto: arm64/sm3 - add NEON assembly implementation
crypto: arm64/sm4 - refactor and simplify NEON implementation
crypto: testmgr - add SM4 cts-cbc/essiv/xts/xcbc test vectors
crypto: tcrypt - add SM4 cts-cbc/essiv/xts/xcbc test
crypto: arm64/sm4 - refactor and simplify CE implementation
crypto: arm64/sm4 - simplify sm4_ce_expand_key() of CE implementation
crypto: arm64/sm4 - export reusable CE acceleration functions
crypto: arm64/sm4 - add CE implementation for CTS-CBC mode
crypto: arm64/sm4 - add CE implementation for XTS mode
crypto: essiv - allow digestsize to be greater than keysize
crypto: arm64/sm4 - add CE implementation for ESSIV mode
crypto: arm64/sm4 - add CE implementation for cmac/xcbc/cbcmac
crypto: arm64/sm4 - add CE implementation for CCM mode
crypto: arm64/sm4 - add CE implementation for GCM mode
crypto: arm64/sm4 - add ARMv9 SVE cryptography acceleration
implementation

arch/arm64/crypto/Kconfig | 66 +-
arch/arm64/crypto/Makefile | 12 +
arch/arm64/crypto/sm3-ce-glue.c | 2 +-
arch/arm64/crypto/sm3-neon-core.S | 600 +++++++++++++
arch/arm64/crypto/sm3-neon-glue.c | 103 +++
arch/arm64/crypto/sm4-ce-asm.h | 209 +++++
arch/arm64/crypto/sm4-ce-ccm-core.S | 328 +++++++
arch/arm64/crypto/sm4-ce-ccm-glue.c | 303 +++++++
arch/arm64/crypto/sm4-ce-core.S | 1247 ++++++++++++++++++---------
arch/arm64/crypto/sm4-ce-gcm-core.S | 741 ++++++++++++++++
arch/arm64/crypto/sm4-ce-gcm-glue.c | 286 ++++++
arch/arm64/crypto/sm4-ce-glue.c | 703 ++++++++++++++-
arch/arm64/crypto/sm4-ce.h | 16 +
arch/arm64/crypto/sm4-neon-core.S | 630 +++++++++-----
arch/arm64/crypto/sm4-neon-glue.c | 172 +---
arch/arm64/crypto/sm4-sve-ce-core.S | 1028 ++++++++++++++++++++++
arch/arm64/crypto/sm4-sve-ce-glue.c | 332 +++++++
crypto/essiv.c | 11 +-
crypto/tcrypt.c | 28 +
crypto/testmgr.c | 25 +
crypto/testmgr.h | 1161 +++++++++++++++++++++++++
21 files changed, 7234 insertions(+), 769 deletions(-)
create mode 100644 arch/arm64/crypto/sm3-neon-core.S
create mode 100644 arch/arm64/crypto/sm3-neon-glue.c
create mode 100644 arch/arm64/crypto/sm4-ce-asm.h
create mode 100644 arch/arm64/crypto/sm4-ce-ccm-core.S
create mode 100644 arch/arm64/crypto/sm4-ce-ccm-glue.c
create mode 100644 arch/arm64/crypto/sm4-ce-gcm-core.S
create mode 100644 arch/arm64/crypto/sm4-ce-gcm-glue.c
create mode 100644 arch/arm64/crypto/sm4-ce.h
create mode 100644 arch/arm64/crypto/sm4-sve-ce-core.S
create mode 100644 arch/arm64/crypto/sm4-sve-ce-glue.c

--
2.24.3 (Apple Git-128)


2022-09-26 09:47:14

by Tianjia Zhang

Subject: [PATCH 10/16] crypto: arm64/sm4 - add CE implementation for XTS mode

This patch adds a CE-optimized assembly implementation of XTS mode.

Benchmark on T-Head Yitian-710 2.75 GHz. The data comes from tcrypt
mode 218 and compares performance before and after this patch (the
driver used before this patch is xts(ecb-sm4-ce)). The columns are
block lengths in bytes; the values are throughput in Mb/s:

Before:

xts(ecb-sm4-ce) | 16 64 128 256 1024 1420 4096
----------------+--------------------------------------------------------------
XTS enc | 117.17 430.56 732.92 1134.98 2007.03 2136.23 2347.20
XTS dec | 116.89 429.02 733.40 1132.96 2006.13 2130.50 2347.92

After:

xts-sm4-ce | 16 64 128 256 1024 1420 4096
----------------+--------------------------------------------------------------
XTS enc | 224.68 798.91 1248.08 1714.60 2413.73 2467.84 2612.62
XTS dec | 229.85 791.34 1237.79 1720.00 2413.30 2473.84 2611.95
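
For reference, the tweak_next macro added in this patch implements
multiplication of the 128-bit tweak by the primitive element alpha of
GF(2^128) with reduction polynomial x^128 + x^7 + x^2 + x + 1. A
byte-wise C sketch of the same operation (illustrative only, using the
little-endian convention of IEEE 1619; not part of this patch):

#include <stdint.h>

/* Advance the XTS tweak: multiply by alpha in GF(2^128). */
static void xts_tweak_next(uint8_t t[16])
{
	uint8_t carry = 0;
	int i;

	for (i = 0; i < 16; i++) {
		uint8_t msb = t[i] >> 7;

		t[i] = (uint8_t)((t[i] << 1) | carry);
		carry = msb;
	}
	if (carry)
		t[0] ^= 0x87;	/* reduce by x^128 + x^7 + x^2 + x + 1 */
}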

Signed-off-by: Tianjia Zhang <[email protected]>
---
arch/arm64/crypto/Kconfig | 4 +-
arch/arm64/crypto/sm4-ce-core.S | 343 ++++++++++++++++++++++++++++++++
arch/arm64/crypto/sm4-ce-glue.c | 159 ++++++++++++++-
3 files changed, 504 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/crypto/Kconfig b/arch/arm64/crypto/Kconfig
index 4b121dc0cfba..8939f5ae9214 100644
--- a/arch/arm64/crypto/Kconfig
+++ b/arch/arm64/crypto/Kconfig
@@ -231,7 +231,7 @@ config CRYPTO_SM4_ARM64_CE
- NEON (Advanced SIMD) extensions

config CRYPTO_SM4_ARM64_CE_BLK
- tristate "Ciphers: SM4, modes: ECB/CBC/CFB/CTR (ARMv8 Crypto Extensions)"
+ tristate "Ciphers: SM4, modes: ECB/CBC/CFB/CTR/XTS (ARMv8 Crypto Extensions)"
depends on KERNEL_MODE_NEON
select CRYPTO_SKCIPHER
select CRYPTO_SM4
@@ -242,6 +242,8 @@ config CRYPTO_SM4_ARM64_CE_BLK
- CBC (Cipher Block Chaining) mode (NIST SP800-38A)
- CFB (Cipher Feedback) mode (NIST SP800-38A)
- CTR (Counter) mode (NIST SP800-38A)
+ - XTS (XOR Encrypt XOR with ciphertext stealing) mode (NIST SP800-38E
+ and IEEE 1619)

Architecture: arm64 using:
- ARMv8 Crypto Extensions
diff --git a/arch/arm64/crypto/sm4-ce-core.S b/arch/arm64/crypto/sm4-ce-core.S
index 414d29f8110b..ddd15ec09d38 100644
--- a/arch/arm64/crypto/sm4-ce-core.S
+++ b/arch/arm64/crypto/sm4-ce-core.S
@@ -35,6 +35,7 @@
#define RTMP3 v19

#define RIV v20
+#define RMASK v21


.align 3
@@ -665,6 +666,348 @@ SYM_FUNC_START(sm4_ce_ctr_enc)
SYM_FUNC_END(sm4_ce_ctr_enc)


+#define tweak_next(vt, vin, RTMP) \
+ sshr RTMP.2d, vin.2d, #63; \
+ and RTMP.16b, RTMP.16b, RMASK.16b; \
+ add vt.2d, vin.2d, vin.2d; \
+ ext RTMP.16b, RTMP.16b, RTMP.16b, #8; \
+ eor vt.16b, vt.16b, RTMP.16b;
+
+.align 3
+SYM_FUNC_START(sm4_ce_xts_enc)
+ /* input:
+ * x0: round key array, CTX
+ * x1: dst
+ * x2: src
+ * x3: tweak (big endian, 128 bit)
+ * w4: nbytes
+ * x5: round key array for IV
+ */
+ ld1 {v8.16b}, [x3]
+
+ cbz x5, .Lxts_enc_nofirst
+
+ SM4_PREPARE(x5)
+
+ /* Generate first tweak */
+ SM4_CRYPT_BLK(v8)
+
+.Lxts_enc_nofirst:
+ SM4_PREPARE(x0)
+
+ ands w5, w4, #15
+ lsr w4, w4, #4
+ sub w6, w4, #1
+ csel w4, w4, w6, eq
+ uxtw x5, w5
+
+ movi RMASK.2s, #0x1
+ movi RTMP0.2s, #0x87
+ uzp1 RMASK.4s, RMASK.4s, RTMP0.4s
+
+ cbz w4, .Lxts_enc_cts
+
+.Lxts_enc_loop_8x:
+ sub w4, w4, #8
+ tbnz w4, #31, .Lxts_enc_4x
+
+ tweak_next( v9, v8, RTMP0)
+ tweak_next(v10, v9, RTMP1)
+ tweak_next(v11, v10, RTMP2)
+ tweak_next(v12, v11, RTMP3)
+ tweak_next(v13, v12, RTMP0)
+ tweak_next(v14, v13, RTMP1)
+ tweak_next(v15, v14, RTMP2)
+
+ ld1 {v0.16b-v3.16b}, [x2], #64
+ ld1 {v4.16b-v7.16b}, [x2], #64
+ eor v0.16b, v0.16b, v8.16b
+ eor v1.16b, v1.16b, v9.16b
+ eor v2.16b, v2.16b, v10.16b
+ eor v3.16b, v3.16b, v11.16b
+ eor v4.16b, v4.16b, v12.16b
+ eor v5.16b, v5.16b, v13.16b
+ eor v6.16b, v6.16b, v14.16b
+ eor v7.16b, v7.16b, v15.16b
+
+ SM4_CRYPT_BLK8(v0, v1, v2, v3, v4, v5, v6, v7)
+
+ eor v0.16b, v0.16b, v8.16b
+ eor v1.16b, v1.16b, v9.16b
+ eor v2.16b, v2.16b, v10.16b
+ eor v3.16b, v3.16b, v11.16b
+ eor v4.16b, v4.16b, v12.16b
+ eor v5.16b, v5.16b, v13.16b
+ eor v6.16b, v6.16b, v14.16b
+ eor v7.16b, v7.16b, v15.16b
+ st1 {v0.16b-v3.16b}, [x1], #64
+ st1 {v4.16b-v7.16b}, [x1], #64
+
+ tweak_next(v8, v15, RTMP3)
+
+ cbz w4, .Lxts_enc_cts
+ b .Lxts_enc_loop_8x
+
+.Lxts_enc_4x:
+ add w4, w4, #8
+ cmp w4, #4
+ blt .Lxts_enc_loop_1x
+
+ sub w4, w4, #4
+
+ tweak_next( v9, v8, RTMP0)
+ tweak_next(v10, v9, RTMP1)
+ tweak_next(v11, v10, RTMP2)
+
+ ld1 {v0.16b-v3.16b}, [x2], #64
+ eor v0.16b, v0.16b, v8.16b
+ eor v1.16b, v1.16b, v9.16b
+ eor v2.16b, v2.16b, v10.16b
+ eor v3.16b, v3.16b, v11.16b
+
+ SM4_CRYPT_BLK4(v0, v1, v2, v3)
+
+ eor v0.16b, v0.16b, v8.16b
+ eor v1.16b, v1.16b, v9.16b
+ eor v2.16b, v2.16b, v10.16b
+ eor v3.16b, v3.16b, v11.16b
+ st1 {v0.16b-v3.16b}, [x1], #64
+
+ tweak_next(v8, v11, RTMP3)
+
+ cbz w4, .Lxts_enc_cts
+
+.Lxts_enc_loop_1x:
+ sub w4, w4, #1
+
+ ld1 {v0.16b}, [x2], #16
+ eor v0.16b, v0.16b, v8.16b
+
+ SM4_CRYPT_BLK(v0)
+
+ eor v0.16b, v0.16b, v8.16b
+ st1 {v0.16b}, [x1], #16
+
+ tweak_next(v8, v8, RTMP0)
+
+ cbnz w4, .Lxts_enc_loop_1x
+
+.Lxts_enc_cts:
+ cbz x5, .Lxts_enc_end
+
+ /* cipher text stealing */
+
+ tweak_next(v9, v8, RTMP0)
+ ld1 {v0.16b}, [x2]
+ eor v0.16b, v0.16b, v8.16b
+ SM4_CRYPT_BLK(v0)
+ eor v0.16b, v0.16b, v8.16b
+
+ /* load permute table */
+ adr_l x6, .Lcts_permute_table
+ add x7, x6, #32
+ add x6, x6, x5
+ sub x7, x7, x5
+ ld1 {v3.16b}, [x6]
+ ld1 {v4.16b}, [x7]
+
+ /* overlapping loads */
+ add x2, x2, x5
+ ld1 {v1.16b}, [x2]
+
+ /* create Cn from En-1 */
+ tbl v2.16b, {v0.16b}, v3.16b
+ /* padding Pn with En-1 at the end */
+ tbx v0.16b, {v1.16b}, v4.16b
+
+ eor v0.16b, v0.16b, v9.16b
+ SM4_CRYPT_BLK(v0)
+ eor v0.16b, v0.16b, v9.16b
+
+
+ /* overlapping stores */
+ add x5, x1, x5
+ st1 {v2.16b}, [x5]
+ st1 {v0.16b}, [x1]
+
+ b .Lxts_enc_ret
+
+.Lxts_enc_end:
+ /* store new tweak */
+ st1 {v8.16b}, [x3]
+
+.Lxts_enc_ret:
+ ret
+SYM_FUNC_END(sm4_ce_xts_enc)
+
+.align 3
+SYM_FUNC_START(sm4_ce_xts_dec)
+ /* input:
+ * x0: round key array, CTX
+ * x1: dst
+ * x2: src
+ * x3: tweak (big endian, 128 bit)
+ * w4: nbytes
+ * x5: round key array for IV
+ */
+ ld1 {v8.16b}, [x3]
+
+ cbz x5, .Lxts_dec_nofirst
+
+ SM4_PREPARE(x5)
+
+ /* Generate first tweak */
+ SM4_CRYPT_BLK(v8)
+
+.Lxts_dec_nofirst:
+ SM4_PREPARE(x0)
+
+ ands w5, w4, #15
+ lsr w4, w4, #4
+ sub w6, w4, #1
+ csel w4, w4, w6, eq
+ uxtw x5, w5
+
+ movi RMASK.2s, #0x1
+ movi RTMP0.2s, #0x87
+ uzp1 RMASK.4s, RMASK.4s, RTMP0.4s
+
+ cbz w4, .Lxts_dec_cts
+
+.Lxts_dec_loop_8x:
+ sub w4, w4, #8
+ tbnz w4, #31, .Lxts_dec_4x
+
+ tweak_next( v9, v8, RTMP0)
+ tweak_next(v10, v9, RTMP1)
+ tweak_next(v11, v10, RTMP2)
+ tweak_next(v12, v11, RTMP3)
+ tweak_next(v13, v12, RTMP0)
+ tweak_next(v14, v13, RTMP1)
+ tweak_next(v15, v14, RTMP2)
+
+ ld1 {v0.16b-v3.16b}, [x2], #64
+ ld1 {v4.16b-v7.16b}, [x2], #64
+ eor v0.16b, v0.16b, v8.16b
+ eor v1.16b, v1.16b, v9.16b
+ eor v2.16b, v2.16b, v10.16b
+ eor v3.16b, v3.16b, v11.16b
+ eor v4.16b, v4.16b, v12.16b
+ eor v5.16b, v5.16b, v13.16b
+ eor v6.16b, v6.16b, v14.16b
+ eor v7.16b, v7.16b, v15.16b
+
+ SM4_CRYPT_BLK8(v0, v1, v2, v3, v4, v5, v6, v7)
+
+ eor v0.16b, v0.16b, v8.16b
+ eor v1.16b, v1.16b, v9.16b
+ eor v2.16b, v2.16b, v10.16b
+ eor v3.16b, v3.16b, v11.16b
+ eor v4.16b, v4.16b, v12.16b
+ eor v5.16b, v5.16b, v13.16b
+ eor v6.16b, v6.16b, v14.16b
+ eor v7.16b, v7.16b, v15.16b
+ st1 {v0.16b-v3.16b}, [x1], #64
+ st1 {v4.16b-v7.16b}, [x1], #64
+
+ tweak_next(v8, v15, RTMP3)
+
+ cbz w4, .Lxts_dec_cts
+ b .Lxts_dec_loop_8x
+
+.Lxts_dec_4x:
+ add w4, w4, #8
+ cmp w4, #4
+ blt .Lxts_dec_loop_1x
+
+ sub w4, w4, #4
+
+ tweak_next( v9, v8, RTMP0)
+ tweak_next(v10, v9, RTMP1)
+ tweak_next(v11, v10, RTMP2)
+
+ ld1 {v0.16b-v3.16b}, [x2], #64
+ eor v0.16b, v0.16b, v8.16b
+ eor v1.16b, v1.16b, v9.16b
+ eor v2.16b, v2.16b, v10.16b
+ eor v3.16b, v3.16b, v11.16b
+
+ SM4_CRYPT_BLK4(v0, v1, v2, v3)
+
+ eor v0.16b, v0.16b, v8.16b
+ eor v1.16b, v1.16b, v9.16b
+ eor v2.16b, v2.16b, v10.16b
+ eor v3.16b, v3.16b, v11.16b
+ st1 {v0.16b-v3.16b}, [x1], #64
+
+ tweak_next(v8, v11, RTMP3)
+
+ cbz w4, .Lxts_dec_cts
+
+.Lxts_dec_loop_1x:
+ sub w4, w4, #1
+
+ ld1 {v0.16b}, [x2], #16
+ eor v0.16b, v0.16b, v8.16b
+
+ SM4_CRYPT_BLK(v0)
+
+ eor v0.16b, v0.16b, v8.16b
+ st1 {v0.16b}, [x1], #16
+
+ tweak_next(v8, v8, RTMP0)
+
+ cbnz w4, .Lxts_dec_loop_1x
+
+.Lxts_dec_cts:
+ cbz x5, .Lxts_dec_end
+
+ /* cipher text stealing */
+
+ tweak_next(v9, v8, RTMP0)
+ ld1 {v0.16b}, [x2]
+ eor v0.16b, v0.16b, v9.16b
+ SM4_CRYPT_BLK(v0)
+ eor v0.16b, v0.16b, v9.16b
+
+ /* load permute table */
+ adr_l x6, .Lcts_permute_table
+ add x7, x6, #32
+ add x6, x6, x5
+ sub x7, x7, x5
+ ld1 {v3.16b}, [x6]
+ ld1 {v4.16b}, [x7]
+
+ /* overlapping loads */
+ add x2, x2, x5
+ ld1 {v1.16b}, [x2]
+
+ /* create Cn from En-1 */
+ tbl v2.16b, {v0.16b}, v3.16b
+ /* padding Pn with En-1 at the end */
+ tbx v0.16b, {v1.16b}, v4.16b
+
+ eor v0.16b, v0.16b, v8.16b
+ SM4_CRYPT_BLK(v0)
+ eor v0.16b, v0.16b, v8.16b
+
+
+ /* overlapping stores */
+ add x5, x1, x5
+ st1 {v2.16b}, [x5]
+ st1 {v0.16b}, [x1]
+
+ b .Lxts_dec_ret
+
+.Lxts_dec_end:
+ /* store new tweak */
+ st1 {v8.16b}, [x3]
+
+.Lxts_dec_ret:
+ ret
+SYM_FUNC_END(sm4_ce_xts_dec)
+
+
.section ".rodata", "a"
.align 4
.Lbswap128_mask:
diff --git a/arch/arm64/crypto/sm4-ce-glue.c b/arch/arm64/crypto/sm4-ce-glue.c
index 4d4072c7bfa2..8222766f712a 100644
--- a/arch/arm64/crypto/sm4-ce-glue.c
+++ b/arch/arm64/crypto/sm4-ce-glue.c
@@ -17,6 +17,7 @@
#include <crypto/internal/simd.h>
#include <crypto/internal/skcipher.h>
#include <crypto/scatterwalk.h>
+#include <crypto/xts.h>
#include <crypto/sm4.h>

#define BYTES2BLKS(nbytes) ((nbytes) >> 4)
@@ -40,12 +41,23 @@ asmlinkage void sm4_ce_cfb_dec(const u32 *rkey, u8 *dst, const u8 *src,
u8 *iv, unsigned int nblks);
asmlinkage void sm4_ce_ctr_enc(const u32 *rkey, u8 *dst, const u8 *src,
u8 *iv, unsigned int nblks);
+asmlinkage void sm4_ce_xts_enc(const u32 *rkey1, u8 *dst, const u8 *src,
+ u8 *tweak, unsigned int nbytes,
+ const u32 *rkey2_enc);
+asmlinkage void sm4_ce_xts_dec(const u32 *rkey1, u8 *dst, const u8 *src,
+ u8 *tweak, unsigned int nbytes,
+ const u32 *rkey2_enc);

EXPORT_SYMBOL(sm4_ce_expand_key);
EXPORT_SYMBOL(sm4_ce_crypt_block);
EXPORT_SYMBOL(sm4_ce_cbc_enc);
EXPORT_SYMBOL(sm4_ce_cfb_enc);

+struct sm4_xts_ctx {
+ struct sm4_ctx key1;
+ struct sm4_ctx key2;
+};
+
static int sm4_setkey(struct crypto_skcipher *tfm, const u8 *key,
unsigned int key_len)
{
@@ -61,6 +73,29 @@ static int sm4_setkey(struct crypto_skcipher *tfm, const u8 *key,
return 0;
}

+static int sm4_xts_setkey(struct crypto_skcipher *tfm, const u8 *key,
+ unsigned int key_len)
+{
+ struct sm4_xts_ctx *ctx = crypto_skcipher_ctx(tfm);
+ int ret;
+
+ if (key_len != SM4_KEY_SIZE * 2)
+ return -EINVAL;
+
+ ret = xts_verify_key(tfm, key, key_len);
+ if (ret)
+ return ret;
+
+ kernel_neon_begin();
+ sm4_ce_expand_key(key, ctx->key1.rkey_enc,
+ ctx->key1.rkey_dec, crypto_sm4_fk, crypto_sm4_ck);
+ sm4_ce_expand_key(&key[SM4_KEY_SIZE], ctx->key2.rkey_enc,
+ ctx->key2.rkey_dec, crypto_sm4_fk, crypto_sm4_ck);
+ kernel_neon_end();
+
+ return 0;
+}
+
static int sm4_ecb_do_crypt(struct skcipher_request *req, const u32 *rkey)
{
struct skcipher_walk walk;
@@ -357,6 +392,111 @@ static int sm4_ctr_crypt(struct skcipher_request *req)
return err;
}

+static int sm4_xts_crypt(struct skcipher_request *req, bool encrypt)
+{
+ struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
+ struct sm4_xts_ctx *ctx = crypto_skcipher_ctx(tfm);
+ int tail = req->cryptlen % SM4_BLOCK_SIZE;
+ const u32 *rkey2_enc = ctx->key2.rkey_enc;
+ struct scatterlist sg_src[2], sg_dst[2];
+ struct skcipher_request subreq;
+ struct scatterlist *src, *dst;
+ struct skcipher_walk walk;
+ unsigned int nbytes;
+ int err;
+
+ if (req->cryptlen < SM4_BLOCK_SIZE)
+ return -EINVAL;
+
+ err = skcipher_walk_virt(&walk, req, false);
+ if (err)
+ return err;
+
+ if (unlikely(tail > 0 && walk.nbytes < walk.total)) {
+ int nblocks = DIV_ROUND_UP(req->cryptlen, SM4_BLOCK_SIZE) - 2;
+
+ skcipher_walk_abort(&walk);
+
+ skcipher_request_set_tfm(&subreq, tfm);
+ skcipher_request_set_callback(&subreq,
+ skcipher_request_flags(req),
+ NULL, NULL);
+ skcipher_request_set_crypt(&subreq, req->src, req->dst,
+ nblocks * SM4_BLOCK_SIZE, req->iv);
+
+ err = skcipher_walk_virt(&walk, &subreq, false);
+ if (err)
+ return err;
+ } else {
+ tail = 0;
+ }
+
+ while ((nbytes = walk.nbytes) >= SM4_BLOCK_SIZE) {
+ if (nbytes < walk.total)
+ nbytes &= ~(SM4_BLOCK_SIZE - 1);
+
+ kernel_neon_begin();
+
+ if (encrypt)
+ sm4_ce_xts_enc(ctx->key1.rkey_enc, walk.dst.virt.addr,
+ walk.src.virt.addr, walk.iv, nbytes,
+ rkey2_enc);
+ else
+ sm4_ce_xts_dec(ctx->key1.rkey_dec, walk.dst.virt.addr,
+ walk.src.virt.addr, walk.iv, nbytes,
+ rkey2_enc);
+
+ kernel_neon_end();
+
+ rkey2_enc = NULL;
+
+ err = skcipher_walk_done(&walk, walk.nbytes - nbytes);
+ if (err)
+ return err;
+ }
+
+ if (likely(tail == 0))
+ return 0;
+
+ /* handle ciphertext stealing */
+
+ dst = src = scatterwalk_ffwd(sg_src, req->src, subreq.cryptlen);
+ if (req->dst != req->src)
+ dst = scatterwalk_ffwd(sg_dst, req->dst, subreq.cryptlen);
+
+ skcipher_request_set_crypt(&subreq, src, dst, SM4_BLOCK_SIZE + tail,
+ req->iv);
+
+ err = skcipher_walk_virt(&walk, &subreq, false);
+ if (err)
+ return err;
+
+ kernel_neon_begin();
+
+ if (encrypt)
+ sm4_ce_xts_enc(ctx->key1.rkey_enc, walk.dst.virt.addr,
+ walk.src.virt.addr, walk.iv, walk.nbytes,
+ rkey2_enc);
+ else
+ sm4_ce_xts_dec(ctx->key1.rkey_dec, walk.dst.virt.addr,
+ walk.src.virt.addr, walk.iv, walk.nbytes,
+ rkey2_enc);
+
+ kernel_neon_end();
+
+ return skcipher_walk_done(&walk, 0);
+}
+
+static int sm4_xts_encrypt(struct skcipher_request *req)
+{
+ return sm4_xts_crypt(req, true);
+}
+
+static int sm4_xts_decrypt(struct skcipher_request *req)
+{
+ return sm4_xts_crypt(req, false);
+}
+
static struct skcipher_alg sm4_algs[] = {
{
.base = {
@@ -435,6 +575,22 @@ static struct skcipher_alg sm4_algs[] = {
.setkey = sm4_setkey,
.encrypt = sm4_cbc_cts_encrypt,
.decrypt = sm4_cbc_cts_decrypt,
+ }, {
+ .base = {
+ .cra_name = "xts(sm4)",
+ .cra_driver_name = "xts-sm4-ce",
+ .cra_priority = 400,
+ .cra_blocksize = SM4_BLOCK_SIZE,
+ .cra_ctxsize = sizeof(struct sm4_xts_ctx),
+ .cra_module = THIS_MODULE,
+ },
+ .min_keysize = SM4_KEY_SIZE * 2,
+ .max_keysize = SM4_KEY_SIZE * 2,
+ .ivsize = SM4_BLOCK_SIZE,
+ .walksize = SM4_BLOCK_SIZE * 2,
+ .setkey = sm4_xts_setkey,
+ .encrypt = sm4_xts_encrypt,
+ .decrypt = sm4_xts_decrypt,
}
};

@@ -451,7 +607,7 @@ static void __exit sm4_exit(void)
module_cpu_feature_match(SM4, sm4_init);
module_exit(sm4_exit);

-MODULE_DESCRIPTION("SM4 ECB/CBC/CFB/CTR using ARMv8 Crypto Extensions");
+MODULE_DESCRIPTION("SM4 ECB/CBC/CFB/CTR/XTS using ARMv8 Crypto Extensions");
MODULE_ALIAS_CRYPTO("sm4-ce");
MODULE_ALIAS_CRYPTO("sm4");
MODULE_ALIAS_CRYPTO("ecb(sm4)");
@@ -459,5 +615,6 @@ MODULE_ALIAS_CRYPTO("cbc(sm4)");
MODULE_ALIAS_CRYPTO("cfb(sm4)");
MODULE_ALIAS_CRYPTO("ctr(sm4)");
MODULE_ALIAS_CRYPTO("cts(cbc(sm4))");
+MODULE_ALIAS_CRYPTO("xts(sm4)");
MODULE_AUTHOR("Tianjia Zhang <[email protected]>");
MODULE_LICENSE("GPL v2");
--
2.24.3 (Apple Git-128)

2022-09-26 09:47:34

by Tianjia Zhang

Subject: [PATCH 11/16] crypto: essiv - allow digestsize to be greater than keysize

In ESSIV mode, the digest of the hash algorithm is used as the key to
encrypt the IV. The current implementation requires the digest size of
the hash algorithm to be equal to the key size, which excludes
combinations that do not meet this requirement, such as
essiv(cbc(sm4),sm3): the SM3 digest is fixed at 256 bits, while the SM4
key size is fixed at 128 bits, so this combination cannot use ESSIV
mode.

This patch allows algorithms whose digest size is greater than the key
size to use ESSIV mode by truncating the digest.
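
For illustration, a minimal C sketch of the key derivation this change
permits for essiv(cbc(sm4),sm3); the hash callback is a placeholder,
not a kernel API:

#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Placeholder hash callback; stands in for the real shash digest call. */
typedef void (*hash_fn)(const uint8_t *in, size_t len, uint8_t *digest);

/*
 * Derive the ESSIV cipher key: hash the data key, then truncate the
 * digest when it is larger than the key size the cipher accepts
 * (32-byte SM3 digest -> 16-byte SM4 key).
 */
static void essiv_derive_key(hash_fn hash, size_t digestsize, size_t keysize,
			     const uint8_t *data_key, size_t data_key_len,
			     uint8_t *essiv_key)
{
	uint8_t digest[64];
	size_t saltlen = digestsize < keysize ? digestsize : keysize;

	hash(data_key, data_key_len, digest);
	memcpy(essiv_key, digest, saltlen);
}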

Signed-off-by: Tianjia Zhang <[email protected]>
---
crypto/essiv.c | 11 +++++++----
1 file changed, 7 insertions(+), 4 deletions(-)

diff --git a/crypto/essiv.c b/crypto/essiv.c
index e33369df9034..6ee5a61bcae4 100644
--- a/crypto/essiv.c
+++ b/crypto/essiv.c
@@ -68,6 +68,7 @@ static int essiv_skcipher_setkey(struct crypto_skcipher *tfm,
{
struct essiv_tfm_ctx *tctx = crypto_skcipher_ctx(tfm);
u8 salt[HASH_MAX_DIGESTSIZE];
+ unsigned int saltlen;
int err;

crypto_skcipher_clear_flags(tctx->u.skcipher, CRYPTO_TFM_REQ_MASK);
@@ -86,8 +87,11 @@ static int essiv_skcipher_setkey(struct crypto_skcipher *tfm,
crypto_cipher_set_flags(tctx->essiv_cipher,
crypto_skcipher_get_flags(tfm) &
CRYPTO_TFM_REQ_MASK);
- return crypto_cipher_setkey(tctx->essiv_cipher, salt,
- crypto_shash_digestsize(tctx->hash));
+
+ saltlen = min(crypto_shash_digestsize(tctx->hash),
+ crypto_skcipher_max_keysize(tctx->u.skcipher));
+
+ return crypto_cipher_setkey(tctx->essiv_cipher, salt, saltlen);
}

static int essiv_aead_setkey(struct crypto_aead *tfm, const u8 *key,
@@ -418,8 +422,7 @@ static bool essiv_supported_algorithms(const char *essiv_cipher_name,
if (IS_ERR(alg))
return false;

- if (hash_alg->digestsize < alg->cra_cipher.cia_min_keysize ||
- hash_alg->digestsize > alg->cra_cipher.cia_max_keysize)
+ if (hash_alg->digestsize < alg->cra_cipher.cia_min_keysize)
goto out;

if (ivsize != alg->cra_blocksize)
--
2.24.3 (Apple Git-128)

2022-09-26 09:47:34

by Tianjia Zhang

Subject: [PATCH 12/16] crypto: arm64/sm4 - add CE implementation for ESSIV mode

This patch adds a CE-optimized assembly implementation of ESSIV mode.
The assembly part reuses the CBC mode code: the IV is first encrypted
with the hash-derived key, after which the regular CBC loop takes over.
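
In C-level pseudocode the reuse looks roughly like the sketch below;
the two helpers are placeholders for the assembly routines, not
exported symbols:

#include <stdint.h>

/* Placeholders for the assembly routines; not real kernel symbols. */
void sm4_encrypt_block(const uint32_t *rkey, uint8_t out[16], const uint8_t in[16]);
void sm4_cbc_encrypt(const uint32_t *rkey, uint8_t *dst, const uint8_t *src,
		     uint8_t iv[16], unsigned int nblocks);

/*
 * ESSIV-CBC encryption: first encrypt the IV with the key derived from
 * the SM3 digest (rkey2), then run the unchanged CBC loop with the
 * data key (rkey1), as sm4_ce_essiv_cbc_enc() does when it branches
 * into .Lcbc_enc_loop_4x below.
 */
static void essiv_cbc_enc_sketch(const uint32_t *rkey1, const uint32_t *rkey2,
				 uint8_t *dst, const uint8_t *src,
				 uint8_t iv[16], unsigned int nblocks)
{
	sm4_encrypt_block(rkey2, iv, iv);
	sm4_cbc_encrypt(rkey1, dst, src, iv, nblocks);
}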

Signed-off-by: Tianjia Zhang <[email protected]>
---
arch/arm64/crypto/sm4-ce-core.S | 42 +++++++++++
arch/arm64/crypto/sm4-ce-glue.c | 128 ++++++++++++++++++++++++++++++++
2 files changed, 170 insertions(+)

diff --git a/arch/arm64/crypto/sm4-ce-core.S b/arch/arm64/crypto/sm4-ce-core.S
index ddd15ec09d38..6b923c3209a0 100644
--- a/arch/arm64/crypto/sm4-ce-core.S
+++ b/arch/arm64/crypto/sm4-ce-core.S
@@ -154,6 +154,26 @@ SYM_FUNC_START(sm4_ce_crypt)
ret;
SYM_FUNC_END(sm4_ce_crypt)

+.align 3
+SYM_FUNC_START(sm4_ce_essiv_cbc_enc)
+ /* input:
+ * x0: round key array, CTX
+ * x1: dst
+ * x2: src
+ * x3: iv (big endian, 128 bit)
+ * w4: nblocks
+ * x5: round key array for IV
+ */
+ ld1 {RIV.16b}, [x3]
+
+ SM4_PREPARE(x5)
+
+ SM4_CRYPT_BLK(RIV)
+
+ SM4_PREPARE(x0)
+
+ b .Lcbc_enc_loop_4x
+
.align 3
SYM_FUNC_START(sm4_ce_cbc_enc)
/* input:
@@ -208,6 +228,27 @@ SYM_FUNC_START(sm4_ce_cbc_enc)

ret
SYM_FUNC_END(sm4_ce_cbc_enc)
+SYM_FUNC_END(sm4_ce_essiv_cbc_enc)
+
+.align 3
+SYM_FUNC_START(sm4_ce_essiv_cbc_dec)
+ /* input:
+ * x0: round key array, CTX
+ * x1: dst
+ * x2: src
+ * x3: iv (big endian, 128 bit)
+ * w4: nblocks
+ * x5: round key array for IV
+ */
+ ld1 {RIV.16b}, [x3]
+
+ SM4_PREPARE(x5)
+
+ SM4_CRYPT_BLK(RIV)
+
+ SM4_PREPARE(x0)
+
+ b .Lcbc_dec_loop_8x

.align 3
SYM_FUNC_START(sm4_ce_cbc_dec)
@@ -306,6 +347,7 @@ SYM_FUNC_START(sm4_ce_cbc_dec)

ret
SYM_FUNC_END(sm4_ce_cbc_dec)
+SYM_FUNC_END(sm4_ce_essiv_cbc_dec)

.align 3
SYM_FUNC_START(sm4_ce_cbc_cts_enc)
diff --git a/arch/arm64/crypto/sm4-ce-glue.c b/arch/arm64/crypto/sm4-ce-glue.c
index 8222766f712a..6267ec1cfac0 100644
--- a/arch/arm64/crypto/sm4-ce-glue.c
+++ b/arch/arm64/crypto/sm4-ce-glue.c
@@ -19,6 +19,8 @@
#include <crypto/scatterwalk.h>
#include <crypto/xts.h>
#include <crypto/sm4.h>
+#include <crypto/sm3.h>
+#include <crypto/hash.h>

#define BYTES2BLKS(nbytes) ((nbytes) >> 4)

@@ -35,6 +37,12 @@ asmlinkage void sm4_ce_cbc_cts_enc(const u32 *rkey, u8 *dst, const u8 *src,
u8 *iv, unsigned int nbytes);
asmlinkage void sm4_ce_cbc_cts_dec(const u32 *rkey, u8 *dst, const u8 *src,
u8 *iv, unsigned int nbytes);
+asmlinkage void sm4_ce_essiv_cbc_enc(const u32 *rkey1, u8 *dst, const u8 *src,
+ u8 *iv, unsigned int nblocks,
+ const u32 *rkey2_enc);
+asmlinkage void sm4_ce_essiv_cbc_dec(const u32 *rkey1, u8 *dst, const u8 *src,
+ u8 *iv, unsigned int nblocks,
+ const u32 *rkey2_enc);
asmlinkage void sm4_ce_cfb_enc(const u32 *rkey, u8 *dst, const u8 *src,
u8 *iv, unsigned int nblks);
asmlinkage void sm4_ce_cfb_dec(const u32 *rkey, u8 *dst, const u8 *src,
@@ -58,6 +66,12 @@ struct sm4_xts_ctx {
struct sm4_ctx key2;
};

+struct sm4_essiv_cbc_ctx {
+ struct sm4_ctx key1;
+ struct sm4_ctx key2;
+ struct crypto_shash *hash;
+};
+
static int sm4_setkey(struct crypto_skcipher *tfm, const u8 *key,
unsigned int key_len)
{
@@ -96,6 +110,27 @@ static int sm4_xts_setkey(struct crypto_skcipher *tfm, const u8 *key,
return 0;
}

+static int sm4_essiv_cbc_setkey(struct crypto_skcipher *tfm, const u8 *key,
+ unsigned int key_len)
+{
+ struct sm4_essiv_cbc_ctx *ctx = crypto_skcipher_ctx(tfm);
+ u8 __aligned(8) digest[SM3_DIGEST_SIZE];
+
+ if (key_len != SM4_KEY_SIZE)
+ return -EINVAL;
+
+ crypto_shash_tfm_digest(ctx->hash, key, key_len, digest);
+
+ kernel_neon_begin();
+ sm4_ce_expand_key(key, ctx->key1.rkey_enc,
+ ctx->key1.rkey_dec, crypto_sm4_fk, crypto_sm4_ck);
+ sm4_ce_expand_key(digest, ctx->key2.rkey_enc,
+ ctx->key2.rkey_dec, crypto_sm4_fk, crypto_sm4_ck);
+ kernel_neon_end();
+
+ return 0;
+}
+
static int sm4_ecb_do_crypt(struct skcipher_request *req, const u32 *rkey)
{
struct skcipher_walk walk;
@@ -497,6 +532,81 @@ static int sm4_xts_decrypt(struct skcipher_request *req)
return sm4_xts_crypt(req, false);
}

+static int sm4_essiv_cbc_init_tfm(struct crypto_skcipher *tfm)
+{
+ struct sm4_essiv_cbc_ctx *ctx = crypto_skcipher_ctx(tfm);
+
+ ctx->hash = crypto_alloc_shash("sm3", 0, 0);
+
+ return PTR_ERR_OR_ZERO(ctx->hash);
+}
+
+static void sm4_essiv_cbc_exit_tfm(struct crypto_skcipher *tfm)
+{
+ struct sm4_essiv_cbc_ctx *ctx = crypto_skcipher_ctx(tfm);
+
+ crypto_free_shash(ctx->hash);
+}
+
+static int sm4_essiv_cbc_crypt(struct skcipher_request *req, bool encrypt)
+{
+ struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
+ struct sm4_essiv_cbc_ctx *ctx = crypto_skcipher_ctx(tfm);
+ struct skcipher_walk walk;
+ unsigned int nblocks;
+ int err;
+
+ err = skcipher_walk_virt(&walk, req, false);
+
+ if ((nblocks = walk.nbytes / SM4_BLOCK_SIZE) > 0) {
+ kernel_neon_begin();
+
+ if (encrypt)
+ sm4_ce_essiv_cbc_enc(ctx->key1.rkey_enc,
+ walk.dst.virt.addr,
+ walk.src.virt.addr, walk.iv,
+ nblocks, ctx->key2.rkey_enc);
+ else
+ sm4_ce_essiv_cbc_dec(ctx->key1.rkey_dec,
+ walk.dst.virt.addr,
+ walk.src.virt.addr, walk.iv,
+ nblocks, ctx->key2.rkey_enc);
+
+ kernel_neon_end();
+
+ err = skcipher_walk_done(&walk, walk.nbytes % SM4_BLOCK_SIZE);
+ if (err)
+ return err;
+ }
+
+ while ((nblocks = walk.nbytes / SM4_BLOCK_SIZE) > 0) {
+ kernel_neon_begin();
+
+ if (encrypt)
+ sm4_ce_cbc_enc(ctx->key1.rkey_enc, walk.dst.virt.addr,
+ walk.src.virt.addr, walk.iv, nblocks);
+ else
+ sm4_ce_cbc_dec(ctx->key1.rkey_dec, walk.dst.virt.addr,
+ walk.src.virt.addr, walk.iv, nblocks);
+
+ kernel_neon_end();
+
+ err = skcipher_walk_done(&walk, walk.nbytes % SM4_BLOCK_SIZE);
+ }
+
+ return err;
+}
+
+static int sm4_essiv_cbc_encrypt(struct skcipher_request *req)
+{
+ return sm4_essiv_cbc_crypt(req, true);
+}
+
+static int sm4_essiv_cbc_decrypt(struct skcipher_request *req)
+{
+ return sm4_essiv_cbc_crypt(req, false);
+}
+
static struct skcipher_alg sm4_algs[] = {
{
.base = {
@@ -591,6 +701,23 @@ static struct skcipher_alg sm4_algs[] = {
.setkey = sm4_xts_setkey,
.encrypt = sm4_xts_encrypt,
.decrypt = sm4_xts_decrypt,
+ }, {
+ .base = {
+ .cra_name = "essiv(cbc(sm4),sm3)",
+ .cra_driver_name = "essiv-cbc-sm4-sm3-ce",
+ .cra_priority = 400 + 1,
+ .cra_blocksize = SM4_BLOCK_SIZE,
+ .cra_ctxsize = sizeof(struct sm4_essiv_cbc_ctx),
+ .cra_module = THIS_MODULE,
+ },
+ .min_keysize = SM4_KEY_SIZE,
+ .max_keysize = SM4_KEY_SIZE,
+ .ivsize = SM4_BLOCK_SIZE,
+ .setkey = sm4_essiv_cbc_setkey,
+ .encrypt = sm4_essiv_cbc_encrypt,
+ .decrypt = sm4_essiv_cbc_decrypt,
+ .init = sm4_essiv_cbc_init_tfm,
+ .exit = sm4_essiv_cbc_exit_tfm,
}
};

@@ -616,5 +743,6 @@ MODULE_ALIAS_CRYPTO("cfb(sm4)");
MODULE_ALIAS_CRYPTO("ctr(sm4)");
MODULE_ALIAS_CRYPTO("cts(cbc(sm4))");
MODULE_ALIAS_CRYPTO("xts(sm4)");
+MODULE_ALIAS_CRYPTO("essiv(cbc(sm4),sm3)");
MODULE_AUTHOR("Tianjia Zhang <[email protected]>");
MODULE_LICENSE("GPL v2");
--
2.24.3 (Apple Git-128)

2022-09-26 09:48:04

by Tianjia Zhang

Subject: [PATCH 09/16] crypto: arm64/sm4 - add CE implementation for CTS-CBC mode

This patch adds a CE-optimized assembly implementation of CTS-CBC mode.

Benchmark on T-Head Yitian-710 2.75 GHz. The data comes from tcrypt
mode 218 and compares performance before and after this patch (the
driver used before this patch is cts(cbc-sm4-ce)). The columns are
block lengths in bytes; the values are throughput in Mb/s:

Before:

cts(cbc-sm4-ce) | 16 64 128 256 1024 1420 4096
----------------+--------------------------------------------------------------
CTS-CBC enc | 286.09 297.17 457.97 627.75 868.58 900.80 957.69
CTS-CBC dec | 286.67 285.63 538.35 947.08 2241.03 2577.32 3391.14

After:

cts-cbc-sm4-ce | 16 64 128 256 1024 1420 4096
----------------+--------------------------------------------------------------
CTS-CBC enc | 288.19 428.80 593.57 741.04 911.73 931.80 950.00
CTS-CBC dec | 292.22 468.99 838.23 1380.76 2741.17 3036.42 3409.62
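
As background, the final 16 + d bytes (1 <= d <= 15) use the CS3
ciphertext-stealing layout of the generic cts template. A minimal C
sketch of the encryption tail that sm4_ce_cbc_cts_enc implements
(sm4_encrypt_block() is a placeholder, not a kernel symbol):

#include <stdint.h>
#include <string.h>

/* Placeholder for one SM4 block encryption; not a kernel API. */
void sm4_encrypt_block(const uint32_t *rkey, uint8_t out[16], const uint8_t in[16]);

/*
 * CBC ciphertext stealing over the final 16 + d bytes.  'iv' is the
 * CBC chaining value after the preceding full blocks, d is the tail
 * length (1..15).
 */
static void cbc_cts_enc_tail(const uint32_t *rkey, uint8_t *dst,
			     const uint8_t *src, const uint8_t iv[16],
			     unsigned int d)
{
	uint8_t en_1[16], dn[16];
	unsigned int i;

	/* E(n-1): CBC-encrypt the last full plaintext block */
	for (i = 0; i < 16; i++)
		en_1[i] = src[i] ^ iv[i];
	sm4_encrypt_block(rkey, en_1, en_1);

	/* Dn = (Pn || zero padding) XOR E(n-1) */
	memcpy(dn, en_1, 16);
	for (i = 0; i < d; i++)
		dn[i] = src[16 + i] ^ en_1[i];

	/* C(n-1) = E(Dn) is stored first, Cn = head_d(E(n-1)) last */
	sm4_encrypt_block(rkey, dst, dn);
	memcpy(dst + 16, en_1, d);
}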

Signed-off-by: Tianjia Zhang <[email protected]>
---
arch/arm64/crypto/sm4-ce-core.S | 102 ++++++++++++++++++++++++++++++++
arch/arm64/crypto/sm4-ce-glue.c | 94 +++++++++++++++++++++++++++++
2 files changed, 196 insertions(+)

diff --git a/arch/arm64/crypto/sm4-ce-core.S b/arch/arm64/crypto/sm4-ce-core.S
index 9e4b4f01cdf3..414d29f8110b 100644
--- a/arch/arm64/crypto/sm4-ce-core.S
+++ b/arch/arm64/crypto/sm4-ce-core.S
@@ -306,6 +306,100 @@ SYM_FUNC_START(sm4_ce_cbc_dec)
ret
SYM_FUNC_END(sm4_ce_cbc_dec)

+.align 3
+SYM_FUNC_START(sm4_ce_cbc_cts_enc)
+ /* input:
+ * x0: round key array, CTX
+ * x1: dst
+ * x2: src
+ * x3: iv (big endian, 128 bit)
+ * w4: nbytes
+ */
+ SM4_PREPARE(x0)
+
+ sub w5, w4, #16
+ uxtw x5, w5
+
+ ld1 {RIV.16b}, [x3]
+
+ ld1 {v0.16b}, [x2]
+ eor RIV.16b, RIV.16b, v0.16b
+ SM4_CRYPT_BLK(RIV)
+
+ /* load permute table */
+ adr_l x6, .Lcts_permute_table
+ add x7, x6, #32
+ add x6, x6, x5
+ sub x7, x7, x5
+ ld1 {v3.16b}, [x6]
+ ld1 {v4.16b}, [x7]
+
+ /* overlapping loads */
+ add x2, x2, x5
+ ld1 {v1.16b}, [x2]
+
+ /* create Cn from En-1 */
+ tbl v0.16b, {RIV.16b}, v3.16b
+ /* padding Pn with zeros */
+ tbl v1.16b, {v1.16b}, v4.16b
+
+ eor v1.16b, v1.16b, RIV.16b
+ SM4_CRYPT_BLK(v1)
+
+ /* overlapping stores */
+ add x5, x1, x5
+ st1 {v0.16b}, [x5]
+ st1 {v1.16b}, [x1]
+
+ ret
+SYM_FUNC_END(sm4_ce_cbc_cts_enc)
+
+.align 3
+SYM_FUNC_START(sm4_ce_cbc_cts_dec)
+ /* input:
+ * x0: round key array, CTX
+ * x1: dst
+ * x2: src
+ * x3: iv (big endian, 128 bit)
+ * w4: nbytes
+ */
+ SM4_PREPARE(x0)
+
+ sub w5, w4, #16
+ uxtw x5, w5
+
+ ld1 {RIV.16b}, [x3]
+
+ /* load permute table */
+ adr_l x6, .Lcts_permute_table
+ add x7, x6, #32
+ add x6, x6, x5
+ sub x7, x7, x5
+ ld1 {v3.16b}, [x6]
+ ld1 {v4.16b}, [x7]
+
+ /* overlapping loads */
+ ld1 {v0.16b}, [x2], x5
+ ld1 {v1.16b}, [x2]
+
+ SM4_CRYPT_BLK(v0)
+ /* select the first Ln bytes of Xn to create Pn */
+ tbl v2.16b, {v0.16b}, v3.16b
+ eor v2.16b, v2.16b, v1.16b
+
+ /* overwrite the first Ln bytes with Cn to create En-1 */
+ tbx v0.16b, {v1.16b}, v4.16b
+ SM4_CRYPT_BLK(v0)
+ eor v0.16b, v0.16b, RIV.16b
+
+ /* overlapping stores */
+ add x5, x1, x5
+ st1 {v2.16b}, [x5]
+ st1 {v0.16b}, [x1]
+
+ ret
+SYM_FUNC_END(sm4_ce_cbc_cts_dec)
+
.align 3
SYM_FUNC_START(sm4_ce_cfb_enc)
/* input:
@@ -576,3 +670,11 @@ SYM_FUNC_END(sm4_ce_ctr_enc)
.Lbswap128_mask:
.byte 0x0c, 0x0d, 0x0e, 0x0f, 0x08, 0x09, 0x0a, 0x0b
.byte 0x04, 0x05, 0x06, 0x07, 0x00, 0x01, 0x02, 0x03
+
+.Lcts_permute_table:
+ .byte 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff
+ .byte 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff
+ .byte 0x0, 0x1, 0x2, 0x3, 0x4, 0x5, 0x6, 0x7
+ .byte 0x8, 0x9, 0xa, 0xb, 0xc, 0xd, 0xe, 0xf
+ .byte 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff
+ .byte 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff
diff --git a/arch/arm64/crypto/sm4-ce-glue.c b/arch/arm64/crypto/sm4-ce-glue.c
index 63abcadc684b..4d4072c7bfa2 100644
--- a/arch/arm64/crypto/sm4-ce-glue.c
+++ b/arch/arm64/crypto/sm4-ce-glue.c
@@ -16,6 +16,7 @@
#include <asm/simd.h>
#include <crypto/internal/simd.h>
#include <crypto/internal/skcipher.h>
+#include <crypto/scatterwalk.h>
#include <crypto/sm4.h>

#define BYTES2BLKS(nbytes) ((nbytes) >> 4)
@@ -29,6 +30,10 @@ asmlinkage void sm4_ce_cbc_enc(const u32 *rkey, u8 *dst, const u8 *src,
u8 *iv, unsigned int nblocks);
asmlinkage void sm4_ce_cbc_dec(const u32 *rkey, u8 *dst, const u8 *src,
u8 *iv, unsigned int nblocks);
+asmlinkage void sm4_ce_cbc_cts_enc(const u32 *rkey, u8 *dst, const u8 *src,
+ u8 *iv, unsigned int nbytes);
+asmlinkage void sm4_ce_cbc_cts_dec(const u32 *rkey, u8 *dst, const u8 *src,
+ u8 *iv, unsigned int nbytes);
asmlinkage void sm4_ce_cfb_enc(const u32 *rkey, u8 *dst, const u8 *src,
u8 *iv, unsigned int nblks);
asmlinkage void sm4_ce_cfb_dec(const u32 *rkey, u8 *dst, const u8 *src,
@@ -153,6 +158,78 @@ static int sm4_cbc_decrypt(struct skcipher_request *req)
return sm4_cbc_crypt(req, ctx, false);
}

+static int sm4_cbc_cts_crypt(struct skcipher_request *req, bool encrypt)
+{
+ struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
+ struct sm4_ctx *ctx = crypto_skcipher_ctx(tfm);
+ struct scatterlist *src = req->src;
+ struct scatterlist *dst = req->dst;
+ struct scatterlist sg_src[2], sg_dst[2];
+ struct skcipher_request subreq;
+ struct skcipher_walk walk;
+ int cbc_blocks;
+ int err;
+
+ if (req->cryptlen < SM4_BLOCK_SIZE)
+ return -EINVAL;
+
+ if (req->cryptlen == SM4_BLOCK_SIZE)
+ return sm4_cbc_crypt(req, ctx, encrypt);
+
+ skcipher_request_set_tfm(&subreq, tfm);
+ skcipher_request_set_callback(&subreq, skcipher_request_flags(req),
+ NULL, NULL);
+
+ /* handle the CBC cryption part */
+ cbc_blocks = DIV_ROUND_UP(req->cryptlen, SM4_BLOCK_SIZE) - 2;
+ if (cbc_blocks) {
+ skcipher_request_set_crypt(&subreq, src, dst,
+ cbc_blocks * SM4_BLOCK_SIZE,
+ req->iv);
+
+ err = sm4_cbc_crypt(&subreq, ctx, encrypt);
+ if (err)
+ return err;
+
+ dst = src = scatterwalk_ffwd(sg_src, src, subreq.cryptlen);
+ if (req->dst != req->src)
+ dst = scatterwalk_ffwd(sg_dst, req->dst,
+ subreq.cryptlen);
+ }
+
+ /* handle ciphertext stealing */
+ skcipher_request_set_crypt(&subreq, src, dst,
+ req->cryptlen - cbc_blocks * SM4_BLOCK_SIZE,
+ req->iv);
+
+ err = skcipher_walk_virt(&walk, &subreq, false);
+ if (err)
+ return err;
+
+ kernel_neon_begin();
+
+ if (encrypt)
+ sm4_ce_cbc_cts_enc(ctx->rkey_enc, walk.dst.virt.addr,
+ walk.src.virt.addr, walk.iv, walk.nbytes);
+ else
+ sm4_ce_cbc_cts_dec(ctx->rkey_dec, walk.dst.virt.addr,
+ walk.src.virt.addr, walk.iv, walk.nbytes);
+
+ kernel_neon_end();
+
+ return skcipher_walk_done(&walk, 0);
+}
+
+static int sm4_cbc_cts_encrypt(struct skcipher_request *req)
+{
+ return sm4_cbc_cts_crypt(req, true);
+}
+
+static int sm4_cbc_cts_decrypt(struct skcipher_request *req)
+{
+ return sm4_cbc_cts_crypt(req, false);
+}
+
static int sm4_cfb_encrypt(struct skcipher_request *req)
{
struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
@@ -342,6 +419,22 @@ static struct skcipher_alg sm4_algs[] = {
.setkey = sm4_setkey,
.encrypt = sm4_ctr_crypt,
.decrypt = sm4_ctr_crypt,
+ }, {
+ .base = {
+ .cra_name = "cts(cbc(sm4))",
+ .cra_driver_name = "cts-cbc-sm4-ce",
+ .cra_priority = 400,
+ .cra_blocksize = SM4_BLOCK_SIZE,
+ .cra_ctxsize = sizeof(struct sm4_ctx),
+ .cra_module = THIS_MODULE,
+ },
+ .min_keysize = SM4_KEY_SIZE,
+ .max_keysize = SM4_KEY_SIZE,
+ .ivsize = SM4_BLOCK_SIZE,
+ .walksize = SM4_BLOCK_SIZE * 2,
+ .setkey = sm4_setkey,
+ .encrypt = sm4_cbc_cts_encrypt,
+ .decrypt = sm4_cbc_cts_decrypt,
}
};

@@ -365,5 +458,6 @@ MODULE_ALIAS_CRYPTO("ecb(sm4)");
MODULE_ALIAS_CRYPTO("cbc(sm4)");
MODULE_ALIAS_CRYPTO("cfb(sm4)");
MODULE_ALIAS_CRYPTO("ctr(sm4)");
+MODULE_ALIAS_CRYPTO("cts(cbc(sm4))");
MODULE_AUTHOR("Tianjia Zhang <[email protected]>");
MODULE_LICENSE("GPL v2");
--
2.24.3 (Apple Git-128)

2022-09-26 09:48:11

by Tianjia Zhang

Subject: [PATCH 14/16] crypto: arm64/sm4 - add CE implementation for CCM mode

This patch adds a CE-optimized assembly implementation of CCM mode.

Benchmark on T-Head Yitian-710 2.75 GHz. The data comes from tcrypt
modes 223 and 225 and compares performance before and after this patch
(the driver used before this patch is
ccm_base(ctr-sm4-ce,cbcmac-sm4-ce)). The columns are block lengths in
bytes; the values are throughput in Mb/s:

Before (rfc4309(ccm_base(ctr-sm4-ce,cbcmac-sm4-ce))):

ccm(sm4) | 16 64 256 512 1024 1420 4096 8192
-------------+---------------------------------------------------------------
CCM enc | 35.07 125.40 336.47 468.17 581.97 619.18 712.56 736.01
CCM dec | 34.87 124.40 335.08 466.75 581.04 618.81 712.25 735.89
CCM mb enc | 34.71 123.96 333.92 465.39 579.91 617.49 711.45 734.92
CCM mb dec | 34.42 122.80 331.02 462.81 578.28 616.42 709.88 734.19

After (rfc4309(ccm-sm4-ce)):

ccm-sm4-ce | 16 64 256 512 1024 1420 4096 8192
-------------+---------------------------------------------------------------
CCM enc | 77.12 249.82 569.94 725.17 839.27 867.71 952.87 969.89
CCM dec | 75.90 247.26 566.29 722.12 836.90 865.95 951.74 968.57
CCM mb enc | 75.98 245.25 562.91 718.99 834.76 864.70 950.17 967.90
CCM mb dec | 75.06 243.78 560.58 717.13 833.68 862.70 949.35 967.11
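
For reference, the B0 block that ccm_format_input() builds in the glue
code follows RFC 3610 / NIST SP 800-38C. A standalone C sketch of the
same layout (illustrative only, not part of this patch):

#include <stdint.h>
#include <string.h>

/*
 * Build the CCM B0 block.  'l' is the length-field size L in bytes
 * (2..8), 'm' the tag length in bytes; the nonce is 15 - l bytes.
 */
static int ccm_build_b0(uint8_t b0[16], const uint8_t *nonce,
			unsigned int l, unsigned int m,
			int have_aad, uint32_t msglen)
{
	unsigned int i;

	if (l < 2 || l > 8)
		return -1;

	b0[0] = (uint8_t)((l - 1) |			/* L' = L - 1 */
			  (((m - 2) / 2) << 3) |	/* M' = (M - 2) / 2 */
			  (have_aad ? 0x40 : 0));	/* Adata flag */
	memcpy(&b0[1], nonce, 15 - l);

	/* message length, big endian, in the last L bytes */
	memset(&b0[16 - l], 0, l);
	for (i = 0; i < 4 && i < l; i++)
		b0[15 - i] = (uint8_t)(msglen >> (8 * i));

	return 0;
}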

Signed-off-by: Tianjia Zhang <[email protected]>
---
arch/arm64/crypto/Kconfig | 16 ++
arch/arm64/crypto/Makefile | 3 +
arch/arm64/crypto/sm4-ce-ccm-core.S | 328 ++++++++++++++++++++++++++++
arch/arm64/crypto/sm4-ce-ccm-glue.c | 303 +++++++++++++++++++++++++
4 files changed, 650 insertions(+)
create mode 100644 arch/arm64/crypto/sm4-ce-ccm-core.S
create mode 100644 arch/arm64/crypto/sm4-ce-ccm-glue.c

diff --git a/arch/arm64/crypto/Kconfig b/arch/arm64/crypto/Kconfig
index 8939f5ae9214..2611036a3e3f 100644
--- a/arch/arm64/crypto/Kconfig
+++ b/arch/arm64/crypto/Kconfig
@@ -281,6 +281,22 @@ config CRYPTO_AES_ARM64_CE_CCM
- ARMv8 Crypto Extensions
- NEON (Advanced SIMD) extensions

+config CRYPTO_SM4_ARM64_CE_CCM
+ tristate "AEAD cipher: SM4 in CCM mode (ARMv8 Crypto Extensions)"
+ depends on KERNEL_MODE_NEON
+ select CRYPTO_ALGAPI
+ select CRYPTO_AEAD
+ select CRYPTO_SM4
+ select CRYPTO_SM4_ARM64_CE_BLK
+ help
+ AEAD cipher: SM4 cipher algorithms (OSCCA GB/T 32907-2016) with
+ CCM (Counter with Cipher Block Chaining-Message Authentication Code)
+ authenticated encryption mode (NIST SP800-38C)
+
+ Architecture: arm64 using:
+ - ARMv8 Crypto Extensions
+ - NEON (Advanced SIMD) extensions
+
config CRYPTO_CRCT10DIF_ARM64_CE
tristate "CRCT10DIF (PMULL)"
depends on KERNEL_MODE_NEON && CRC_T10DIF
diff --git a/arch/arm64/crypto/Makefile b/arch/arm64/crypto/Makefile
index 087f1625e775..843ea5266965 100644
--- a/arch/arm64/crypto/Makefile
+++ b/arch/arm64/crypto/Makefile
@@ -29,6 +29,9 @@ sm4-ce-cipher-y := sm4-ce-cipher-glue.o sm4-ce-cipher-core.o
obj-$(CONFIG_CRYPTO_SM4_ARM64_CE_BLK) += sm4-ce.o
sm4-ce-y := sm4-ce-glue.o sm4-ce-core.o

+obj-$(CONFIG_CRYPTO_SM4_ARM64_CE_CCM) += sm4-ce-ccm.o
+sm4-ce-ccm-y := sm4-ce-ccm-glue.o sm4-ce-ccm-core.o
+
obj-$(CONFIG_CRYPTO_SM4_ARM64_NEON_BLK) += sm4-neon.o
sm4-neon-y := sm4-neon-glue.o sm4-neon-core.o

diff --git a/arch/arm64/crypto/sm4-ce-ccm-core.S b/arch/arm64/crypto/sm4-ce-ccm-core.S
new file mode 100644
index 000000000000..028207c4afd0
--- /dev/null
+++ b/arch/arm64/crypto/sm4-ce-ccm-core.S
@@ -0,0 +1,328 @@
+/* SPDX-License-Identifier: GPL-2.0-or-later */
+/*
+ * SM4-CCM AEAD Algorithm using ARMv8 Crypto Extensions
+ * as specified in rfc8998
+ * https://datatracker.ietf.org/doc/html/rfc8998
+ *
+ * Copyright (C) 2022 Tianjia Zhang <[email protected]>
+ */
+
+#include <linux/linkage.h>
+#include <asm/assembler.h>
+#include "sm4-ce-asm.h"
+
+.arch armv8-a+crypto
+
+.irp b, 0, 1, 8, 9, 10, 11, 12, 13, 14, 15, 16, 24, 25, 26, 27, 28, 29, 30, 31
+ .set .Lv\b\().4s, \b
+.endr
+
+.macro sm4e, vd, vn
+ .inst 0xcec08400 | (.L\vn << 5) | .L\vd
+.endm
+
+/* Register macros */
+
+#define RMAC v16
+
+/* Helper macros. */
+
+#define inc_le128(vctr) \
+ mov vctr.d[1], x8; \
+ mov vctr.d[0], x7; \
+ adds x8, x8, #1; \
+ rev64 vctr.16b, vctr.16b; \
+ adc x7, x7, xzr;
+
+
+.align 3
+SYM_FUNC_START(sm4_ce_cbcmac_update)
+ /* input:
+ * x0: round key array, CTX
+ * x1: mac
+ * x2: src
+ * w3: nblocks
+ */
+ SM4_PREPARE(x0)
+
+ ld1 {RMAC.16b}, [x1]
+
+.Lcbcmac_loop_4x:
+ cmp w3, #4
+ blt .Lcbcmac_loop_1x
+
+ sub w3, w3, #4
+
+ ld1 {v0.16b-v3.16b}, [x2], #64
+
+ SM4_CRYPT_BLK(RMAC)
+ eor RMAC.16b, RMAC.16b, v0.16b
+ SM4_CRYPT_BLK(RMAC)
+ eor RMAC.16b, RMAC.16b, v1.16b
+ SM4_CRYPT_BLK(RMAC)
+ eor RMAC.16b, RMAC.16b, v2.16b
+ SM4_CRYPT_BLK(RMAC)
+ eor RMAC.16b, RMAC.16b, v3.16b
+
+ cbz w3, .Lcbcmac_end
+ b .Lcbcmac_loop_4x
+
+.Lcbcmac_loop_1x:
+ sub w3, w3, #1
+
+ ld1 {v0.16b}, [x2], #16
+
+ SM4_CRYPT_BLK(RMAC)
+ eor RMAC.16b, RMAC.16b, v0.16b
+
+ cbnz w3, .Lcbcmac_loop_1x
+
+.Lcbcmac_end:
+ st1 {RMAC.16b}, [x1]
+ ret
+SYM_FUNC_END(sm4_ce_cbcmac_update)
+
+.align 3
+SYM_FUNC_START(sm4_ce_ccm_final)
+ /* input:
+ * x0: round key array, CTX
+ * x1: ctr0 (big endian, 128 bit)
+ * x2: mac
+ */
+ SM4_PREPARE(x0)
+
+ ld1 {RMAC.16b}, [x2]
+ ld1 {v0.16b}, [x1]
+
+ SM4_CRYPT_BLK2(RMAC, v0)
+
+ /* en-/decrypt the mac with ctr0 */
+ eor RMAC.16b, RMAC.16b, v0.16b
+ st1 {RMAC.16b}, [x2]
+
+ ret
+SYM_FUNC_END(sm4_ce_ccm_final)
+
+.align 3
+SYM_FUNC_START(sm4_ce_ccm_enc)
+ /* input:
+ * x0: round key array, CTX
+ * x1: dst
+ * x2: src
+ * x3: ctr (big endian, 128 bit)
+ * w4: nbytes
+ * x5: mac
+ */
+ SM4_PREPARE(x0)
+
+ ldp x7, x8, [x3]
+ rev x7, x7
+ rev x8, x8
+
+ ld1 {RMAC.16b}, [x5]
+
+.Lccm_enc_loop_4x:
+ cmp w4, #(4 * 16)
+ blt .Lccm_enc_loop_1x
+
+ sub w4, w4, #(4 * 16)
+
+ /* construct CTRs */
+ inc_le128(v8) /* +0 */
+ inc_le128(v9) /* +1 */
+ inc_le128(v10) /* +2 */
+ inc_le128(v11) /* +3 */
+
+ ld1 {v0.16b-v3.16b}, [x2], #64
+
+ SM4_CRYPT_BLK2(v8, RMAC)
+ eor v8.16b, v8.16b, v0.16b
+ eor RMAC.16b, RMAC.16b, v0.16b
+ SM4_CRYPT_BLK2(v9, RMAC)
+ eor v9.16b, v9.16b, v1.16b
+ eor RMAC.16b, RMAC.16b, v1.16b
+ SM4_CRYPT_BLK2(v10, RMAC)
+ eor v10.16b, v10.16b, v2.16b
+ eor RMAC.16b, RMAC.16b, v2.16b
+ SM4_CRYPT_BLK2(v11, RMAC)
+ eor v11.16b, v11.16b, v3.16b
+ eor RMAC.16b, RMAC.16b, v3.16b
+
+ st1 {v8.16b-v11.16b}, [x1], #64
+
+ cbz w4, .Lccm_enc_end
+ b .Lccm_enc_loop_4x
+
+.Lccm_enc_loop_1x:
+ cmp w4, #16
+ blt .Lccm_enc_tail
+
+ sub w4, w4, #16
+
+ /* construct CTRs */
+ inc_le128(v8)
+
+ ld1 {v0.16b}, [x2], #16
+
+ SM4_CRYPT_BLK2(v8, RMAC)
+ eor v8.16b, v8.16b, v0.16b
+ eor RMAC.16b, RMAC.16b, v0.16b
+
+ st1 {v8.16b}, [x1], #16
+
+ cbz w4, .Lccm_enc_end
+ b .Lccm_enc_loop_1x
+
+.Lccm_enc_tail:
+ /* construct CTRs */
+ inc_le128(v8)
+
+ SM4_CRYPT_BLK2(RMAC, v8)
+
+ /* store new MAC */
+ st1 {RMAC.16b}, [x5]
+
+.Lccm_enc_tail_loop:
+ ldrb w0, [x2], #1 /* get 1 byte from input */
+ umov w9, v8.b[0] /* get top crypted CTR byte */
+ umov w6, RMAC.b[0] /* get top MAC byte */
+
+ eor w9, w9, w0 /* w9 = CTR ^ input */
+ eor w6, w6, w0 /* w6 = MAC ^ input */
+
+ strb w9, [x1], #1 /* store out byte */
+ strb w6, [x5], #1 /* store MAC byte */
+
+ subs w4, w4, #1
+ beq .Lccm_enc_ret
+
+ /* shift out one byte */
+ ext RMAC.16b, RMAC.16b, RMAC.16b, #1
+ ext v8.16b, v8.16b, v8.16b, #1
+
+ b .Lccm_enc_tail_loop
+
+.Lccm_enc_end:
+ /* store new MAC */
+ st1 {RMAC.16b}, [x5]
+
+ /* store new CTR */
+ rev x7, x7
+ rev x8, x8
+ stp x7, x8, [x3]
+
+.Lccm_enc_ret:
+ ret
+SYM_FUNC_END(sm4_ce_ccm_enc)
+
+.align 3
+SYM_FUNC_START(sm4_ce_ccm_dec)
+ /* input:
+ * x0: round key array, CTX
+ * x1: dst
+ * x2: src
+ * x3: ctr (big endian, 128 bit)
+ * w4: nbytes
+ * x5: mac
+ */
+ SM4_PREPARE(x0)
+
+ ldp x7, x8, [x3]
+ rev x7, x7
+ rev x8, x8
+
+ ld1 {RMAC.16b}, [x5]
+
+.Lccm_dec_loop_4x:
+ cmp w4, #(4 * 16)
+ blt .Lccm_dec_loop_1x
+
+ sub w4, w4, #(4 * 16)
+
+ /* construct CTRs */
+ inc_le128(v8) /* +0 */
+ inc_le128(v9) /* +1 */
+ inc_le128(v10) /* +2 */
+ inc_le128(v11) /* +3 */
+
+ ld1 {v0.16b-v3.16b}, [x2], #64
+
+ SM4_CRYPT_BLK2(v8, RMAC)
+ eor v8.16b, v8.16b, v0.16b
+ eor RMAC.16b, RMAC.16b, v8.16b
+ SM4_CRYPT_BLK2(v9, RMAC)
+ eor v9.16b, v9.16b, v1.16b
+ eor RMAC.16b, RMAC.16b, v9.16b
+ SM4_CRYPT_BLK2(v10, RMAC)
+ eor v10.16b, v10.16b, v2.16b
+ eor RMAC.16b, RMAC.16b, v10.16b
+ SM4_CRYPT_BLK2(v11, RMAC)
+ eor v11.16b, v11.16b, v3.16b
+ eor RMAC.16b, RMAC.16b, v11.16b
+
+ st1 {v8.16b-v11.16b}, [x1], #64
+
+ cbz w4, .Lccm_dec_end
+ b .Lccm_dec_loop_4x
+
+.Lccm_dec_loop_1x:
+ cmp w4, #16
+ blt .Lccm_dec_tail
+
+ sub w4, w4, #16
+
+ /* construct CTRs */
+ inc_le128(v8)
+
+ ld1 {v0.16b}, [x2], #16
+
+ SM4_CRYPT_BLK2(v8, RMAC)
+ eor v8.16b, v8.16b, v0.16b
+ eor RMAC.16b, RMAC.16b, v8.16b
+
+ st1 {v8.16b}, [x1], #16
+
+ cbz w4, .Lccm_dec_end
+ b .Lccm_dec_loop_1x
+
+.Lccm_dec_tail:
+ /* construct CTRs */
+ inc_le128(v8)
+
+ SM4_CRYPT_BLK2(RMAC, v8)
+
+ /* store new MAC */
+ st1 {RMAC.16b}, [x5]
+
+.Lccm_dec_tail_loop:
+ ldrb w0, [x2], #1 /* get 1 byte from input */
+ umov w9, v8.b[0] /* get top crypted CTR byte */
+ umov w6, RMAC.b[0] /* get top MAC byte */
+
+ eor w9, w9, w0 /* w9 = CTR ^ input */
+ eor w6, w6, w9 /* w6 = MAC ^ output */
+
+ strb w9, [x1], #1 /* store out byte */
+ strb w6, [x5], #1 /* store MAC byte */
+
+ subs w4, w4, #1
+ beq .Lccm_dec_ret
+
+ /* shift out one byte */
+ ext RMAC.16b, RMAC.16b, RMAC.16b, #1
+ ext v8.16b, v8.16b, v8.16b, #1
+
+ b .Lccm_dec_tail_loop
+
+.Lccm_dec_end:
+ /* store new MAC */
+ st1 {RMAC.16b}, [x5]
+
+ /* store new CTR */
+ rev x7, x7
+ rev x8, x8
+ stp x7, x8, [x3]
+
+.Lccm_dec_ret:
+ ret
+SYM_FUNC_END(sm4_ce_ccm_dec)
diff --git a/arch/arm64/crypto/sm4-ce-ccm-glue.c b/arch/arm64/crypto/sm4-ce-ccm-glue.c
new file mode 100644
index 000000000000..f2cec7b52efc
--- /dev/null
+++ b/arch/arm64/crypto/sm4-ce-ccm-glue.c
@@ -0,0 +1,303 @@
+/* SPDX-License-Identifier: GPL-2.0-or-later */
+/*
+ * SM4-CCM AEAD Algorithm using ARMv8 Crypto Extensions
+ * as specified in rfc8998
+ * https://datatracker.ietf.org/doc/html/rfc8998
+ *
+ * Copyright (C) 2022 Tianjia Zhang <[email protected]>
+ */
+
+#include <linux/module.h>
+#include <linux/crypto.h>
+#include <linux/kernel.h>
+#include <linux/cpufeature.h>
+#include <asm/neon.h>
+#include <crypto/scatterwalk.h>
+#include <crypto/internal/aead.h>
+#include <crypto/internal/skcipher.h>
+#include <crypto/sm4.h>
+#include "sm4-ce.h"
+
+asmlinkage void sm4_ce_cbcmac_update(const u32 *rkey_enc, u8 *mac,
+ const u8 *src, unsigned int nblocks);
+asmlinkage void sm4_ce_ccm_enc(const u32 *rkey_enc, u8 *dst, const u8 *src,
+ u8 *iv, unsigned int nbytes, u8 *mac);
+asmlinkage void sm4_ce_ccm_dec(const u32 *rkey_enc, u8 *dst, const u8 *src,
+ u8 *iv, unsigned int nbytes, u8 *mac);
+asmlinkage void sm4_ce_ccm_final(const u32 *rkey_enc, u8 *iv, u8 *mac);
+
+
+static int ccm_setkey(struct crypto_aead *tfm, const u8 *key,
+ unsigned int key_len)
+{
+ struct sm4_ctx *ctx = crypto_aead_ctx(tfm);
+
+ if (key_len != SM4_KEY_SIZE)
+ return -EINVAL;
+
+ kernel_neon_begin();
+ sm4_ce_expand_key(key, ctx->rkey_enc, ctx->rkey_dec,
+ crypto_sm4_fk, crypto_sm4_ck);
+ kernel_neon_end();
+
+ return 0;
+}
+
+static int ccm_setauthsize(struct crypto_aead *tfm, unsigned int authsize)
+{
+ if ((authsize & 1) || authsize < 4)
+ return -EINVAL;
+ return 0;
+}
+
+static int ccm_format_input(u8 info[], struct aead_request *req,
+ unsigned int msglen)
+{
+ struct crypto_aead *aead = crypto_aead_reqtfm(req);
+ unsigned int l = req->iv[0] + 1;
+ unsigned int m;
+ __be32 len;
+
+ /* verify that CCM dimension 'L': 2 <= L <= 8 */
+ if (l < 2 || l > 8)
+ return -EINVAL;
+ if (l < 4 && msglen >> (8 * l))
+ return -EOVERFLOW;
+
+ memset(&req->iv[SM4_BLOCK_SIZE - l], 0, l);
+
+ memcpy(info, req->iv, SM4_BLOCK_SIZE);
+
+ m = crypto_aead_authsize(aead);
+
+ /* format flags field per RFC 3610/NIST 800-38C */
+ *info |= ((m - 2) / 2) << 3;
+ if (req->assoclen)
+ *info |= (1 << 6);
+
+ /*
+ * format message length field,
+ * Linux uses a u32 type to represent msglen
+ */
+ if (l >= 4)
+ l = 4;
+
+ len = cpu_to_be32(msglen);
+ memcpy(&info[SM4_BLOCK_SIZE - l], (u8 *)&len + 4 - l, l);
+
+ return 0;
+}
+
+static void ccm_calculate_auth_mac(struct aead_request *req, u8 mac[])
+{
+ struct crypto_aead *aead = crypto_aead_reqtfm(req);
+ struct sm4_ctx *ctx = crypto_aead_ctx(aead);
+ struct __packed { __be16 l; __be32 h; } aadlen;
+ u32 assoclen = req->assoclen;
+ struct scatter_walk walk;
+ unsigned int len;
+
+ if (assoclen < 0xff00) {
+ aadlen.l = cpu_to_be16(assoclen);
+ len = 2;
+ } else {
+ aadlen.l = cpu_to_be16(0xfffe);
+ put_unaligned_be32(assoclen, &aadlen.h);
+ len = 6;
+ }
+
+ sm4_ce_crypt_block(ctx->rkey_enc, mac, mac);
+ crypto_xor(mac, (const u8 *)&aadlen, len);
+
+ scatterwalk_start(&walk, req->src);
+
+ do {
+ u32 n = scatterwalk_clamp(&walk, assoclen);
+ u8 *p, *ptr;
+
+ if (!n) {
+ scatterwalk_start(&walk, sg_next(walk.sg));
+ n = scatterwalk_clamp(&walk, assoclen);
+ }
+
+ p = ptr = scatterwalk_map(&walk);
+ assoclen -= n;
+ scatterwalk_advance(&walk, n);
+
+ while (n > 0) {
+ unsigned int l, nblocks;
+
+ if (len == SM4_BLOCK_SIZE) {
+ if (n < SM4_BLOCK_SIZE) {
+ sm4_ce_crypt_block(ctx->rkey_enc,
+ mac, mac);
+
+ len = 0;
+ } else {
+ nblocks = n / SM4_BLOCK_SIZE;
+ sm4_ce_cbcmac_update(ctx->rkey_enc,
+ mac, ptr, nblocks);
+
+ ptr += nblocks * SM4_BLOCK_SIZE;
+ n %= SM4_BLOCK_SIZE;
+
+ continue;
+ }
+ }
+
+ l = min(n, SM4_BLOCK_SIZE - len);
+ if (l) {
+ crypto_xor(mac + len, ptr, l);
+ len += l;
+ ptr += l;
+ n -= l;
+ }
+ }
+
+ scatterwalk_unmap(p);
+ scatterwalk_done(&walk, 0, assoclen);
+ } while (assoclen);
+}
+
+static int ccm_crypt(struct aead_request *req, struct skcipher_walk *walk,
+ u32 *rkey_enc, u8 mac[],
+ void (*sm4_ce_ccm_crypt)(const u32 *rkey_enc, u8 *dst,
+ const u8 *src, u8 *iv,
+ unsigned int nbytes, u8 *mac))
+{
+ u8 __aligned(8) ctr0[SM4_BLOCK_SIZE];
+ int err;
+
+ /* preserve the initial ctr0 for the TAG */
+ memcpy(ctr0, walk->iv, SM4_BLOCK_SIZE);
+ crypto_inc(walk->iv, SM4_BLOCK_SIZE);
+
+ kernel_neon_begin();
+
+ if (req->assoclen)
+ ccm_calculate_auth_mac(req, mac);
+
+ do {
+ unsigned int tail = walk->nbytes % SM4_BLOCK_SIZE;
+ const u8 *src = walk->src.virt.addr;
+ u8 *dst = walk->dst.virt.addr;
+
+ if (walk->nbytes == walk->total)
+ tail = 0;
+
+ if (walk->nbytes - tail)
+ sm4_ce_ccm_crypt(rkey_enc, dst, src, walk->iv,
+ walk->nbytes - tail, mac);
+
+ if (walk->nbytes == walk->total)
+ sm4_ce_ccm_final(rkey_enc, ctr0, mac);
+
+ kernel_neon_end();
+
+ if (walk->nbytes) {
+ err = skcipher_walk_done(walk, tail);
+ if (err)
+ return err;
+ if (walk->nbytes)
+ kernel_neon_begin();
+ }
+ } while (walk->nbytes > 0);
+
+ return 0;
+}
+
+static int ccm_encrypt(struct aead_request *req)
+{
+ struct crypto_aead *aead = crypto_aead_reqtfm(req);
+ struct sm4_ctx *ctx = crypto_aead_ctx(aead);
+ u8 __aligned(8) mac[SM4_BLOCK_SIZE];
+ struct skcipher_walk walk;
+ int err;
+
+ err = ccm_format_input(mac, req, req->cryptlen);
+ if (err)
+ return err;
+
+ err = skcipher_walk_aead_encrypt(&walk, req, false);
+ if (err)
+ return err;
+
+ err = ccm_crypt(req, &walk, ctx->rkey_enc, mac, sm4_ce_ccm_enc);
+ if (err)
+ return err;
+
+ /* copy authtag to end of dst */
+ scatterwalk_map_and_copy(mac, req->dst, req->assoclen + req->cryptlen,
+ crypto_aead_authsize(aead), 1);
+
+ return 0;
+}
+
+static int ccm_decrypt(struct aead_request *req)
+{
+ struct crypto_aead *aead = crypto_aead_reqtfm(req);
+ unsigned int authsize = crypto_aead_authsize(aead);
+ struct sm4_ctx *ctx = crypto_aead_ctx(aead);
+ u8 __aligned(8) mac[SM4_BLOCK_SIZE];
+ u8 authtag[SM4_BLOCK_SIZE];
+ struct skcipher_walk walk;
+ int err;
+
+ err = ccm_format_input(mac, req, req->cryptlen - authsize);
+ if (err)
+ return err;
+
+ err = skcipher_walk_aead_decrypt(&walk, req, false);
+ if (err)
+ return err;
+
+ err = ccm_crypt(req, &walk, ctx->rkey_enc, mac, sm4_ce_ccm_dec);
+ if (err)
+ return err;
+
+ /* compare calculated auth tag with the stored one */
+ scatterwalk_map_and_copy(authtag, req->src,
+ req->assoclen + req->cryptlen - authsize,
+ authsize, 0);
+
+ if (crypto_memneq(authtag, mac, authsize))
+ return -EBADMSG;
+
+ return 0;
+}
+
+static struct aead_alg sm4_ccm_alg = {
+ .base = {
+ .cra_name = "ccm(sm4)",
+ .cra_driver_name = "ccm-sm4-ce",
+ .cra_priority = 400,
+ .cra_blocksize = 1,
+ .cra_ctxsize = sizeof(struct sm4_ctx),
+ .cra_module = THIS_MODULE,
+ },
+ .ivsize = SM4_BLOCK_SIZE,
+ .chunksize = SM4_BLOCK_SIZE,
+ .maxauthsize = SM4_BLOCK_SIZE,
+ .setkey = ccm_setkey,
+ .setauthsize = ccm_setauthsize,
+ .encrypt = ccm_encrypt,
+ .decrypt = ccm_decrypt,
+};
+
+static int __init sm4_ce_ccm_init(void)
+{
+ return crypto_register_aead(&sm4_ccm_alg);
+}
+
+static void __exit sm4_ce_ccm_exit(void)
+{
+ crypto_unregister_aead(&sm4_ccm_alg);
+}
+
+module_cpu_feature_match(SM4, sm4_ce_ccm_init);
+module_exit(sm4_ce_ccm_exit);
+
+MODULE_DESCRIPTION("Synchronous SM4 in CCM mode using ARMv8 Crypto Extensions");
+MODULE_ALIAS_CRYPTO("ccm(sm4)");
+MODULE_AUTHOR("Tianjia Zhang <[email protected]>");
+MODULE_LICENSE("GPL v2");
--
2.24.3 (Apple Git-128)

2022-09-26 09:48:19

by Tianjia Zhang

Subject: [PATCH 13/16] crypto: arm64/sm4 - add CE implementation for cmac/xcbc/cbcmac

This patch adds a CE-optimized assembly implementation of the
cmac/xcbc/cbcmac MACs.

Benchmark on T-Head Yitian-710 2.75 GHz. The data comes from tcrypt
mode 300 and compares performance before and after this patch (the
drivers used before this patch are XXXmac(sm4-ce)). The columns are
update sizes in bytes; the values are throughput in Mb/s:

Before:

update-size | 16 64 256 1024 2048 4096 8192
---------------+--------------------------------------------------------
cmac(sm4-ce) | 293.33 403.69 503.76 527.78 531.10 535.46 535.81
xcbc(sm4-ce) | 292.83 402.50 504.02 529.08 529.87 536.55 538.24
cbcmac(sm4-ce) | 318.42 415.79 497.12 515.05 523.15 521.19 523.01

After:

update-size | 16 64 256 1024 2048 4096 8192
---------------+--------------------------------------------------------
cmac-sm4-ce | 371.99 675.28 903.56 971.65 980.57 990.40 991.04
xcbc-sm4-ce | 372.11 674.55 903.47 971.61 980.96 990.42 991.10
cbcmac-sm4-ce | 371.63 675.33 903.23 972.07 981.42 990.93 991.45
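
For reference, the GF(2^128) doubling that sm4_cmac_setkey() applies to
the encrypted zero block to derive the two CMAC subkeys (K1 =
double(E_K(0)), K2 = double(K1)) can be written byte-wise as in the
sketch below (illustrative only):

#include <stdint.h>

/*
 * Double a 128-bit value in GF(2^128), big-endian bit order, with
 * reduction constant 0x87 (RFC 4493 subkey generation).
 */
static void cmac_double(uint8_t out[16], const uint8_t in[16])
{
	uint8_t reduce = (in[0] & 0x80) ? 0x87 : 0x00;
	int i;

	for (i = 0; i < 15; i++)
		out[i] = (uint8_t)((in[i] << 1) | (in[i + 1] >> 7));
	out[15] = (uint8_t)((in[15] << 1) ^ reduce);
}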

Signed-off-by: Tianjia Zhang <[email protected]>
---
arch/arm64/crypto/sm4-ce-core.S | 70 +++++++++
arch/arm64/crypto/sm4-ce-glue.c | 267 +++++++++++++++++++++++++++++++-
2 files changed, 336 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/crypto/sm4-ce-core.S b/arch/arm64/crypto/sm4-ce-core.S
index 6b923c3209a0..69fe3b90b7ad 100644
--- a/arch/arm64/crypto/sm4-ce-core.S
+++ b/arch/arm64/crypto/sm4-ce-core.S
@@ -35,6 +35,7 @@
#define RTMP3 v19

#define RIV v20
+#define RMAC v20
#define RMASK v21


@@ -1049,6 +1050,75 @@ SYM_FUNC_START(sm4_ce_xts_dec)
ret
SYM_FUNC_END(sm4_ce_xts_dec)

+.align 3
+SYM_FUNC_START(sm4_ce_mac_update)
+ /* input:
+ * x0: round key array, CTX
+ * x1: digest
+ * x2: src
+ * w3: nblocks
+ * w4: enc_before
+ * w5: enc_after
+ */
+ SM4_PREPARE(x0)
+
+ ld1 {RMAC.16b}, [x1]
+
+ cbz w4, .Lmac_update
+
+ SM4_CRYPT_BLK(RMAC)
+
+.Lmac_update:
+ cbz w3, .Lmac_ret
+
+ sub w6, w3, #1
+ cmp w5, wzr
+ csel w3, w3, w6, ne
+
+ cbz w3, .Lmac_end
+
+.Lmac_loop_4x:
+ cmp w3, #4
+ blt .Lmac_loop_1x
+
+ sub w3, w3, #4
+
+ ld1 {v0.16b-v3.16b}, [x2], #64
+
+ eor RMAC.16b, RMAC.16b, v0.16b
+ SM4_CRYPT_BLK(RMAC)
+ eor RMAC.16b, RMAC.16b, v1.16b
+ SM4_CRYPT_BLK(RMAC)
+ eor RMAC.16b, RMAC.16b, v2.16b
+ SM4_CRYPT_BLK(RMAC)
+ eor RMAC.16b, RMAC.16b, v3.16b
+ SM4_CRYPT_BLK(RMAC)
+
+ cbz w3, .Lmac_end
+ b .Lmac_loop_4x
+
+.Lmac_loop_1x:
+ sub w3, w3, #1
+
+ ld1 {v0.16b}, [x2], #16
+
+ eor RMAC.16b, RMAC.16b, v0.16b
+ SM4_CRYPT_BLK(RMAC)
+
+ cbnz w3, .Lmac_loop_1x
+
+
+.Lmac_end:
+ cbnz w5, .Lmac_ret
+
+ ld1 {v0.16b}, [x2], #16
+ eor RMAC.16b, RMAC.16b, v0.16b
+
+.Lmac_ret:
+ st1 {RMAC.16b}, [x1]
+ ret
+SYM_FUNC_END(sm4_ce_mac_update)
+

.section ".rodata", "a"
.align 4
diff --git a/arch/arm64/crypto/sm4-ce-glue.c b/arch/arm64/crypto/sm4-ce-glue.c
index 6267ec1cfac0..c2d10b8e92b2 100644
--- a/arch/arm64/crypto/sm4-ce-glue.c
+++ b/arch/arm64/crypto/sm4-ce-glue.c
@@ -14,8 +14,10 @@
#include <linux/cpufeature.h>
#include <asm/neon.h>
#include <asm/simd.h>
+#include <crypto/b128ops.h>
#include <crypto/internal/simd.h>
#include <crypto/internal/skcipher.h>
+#include <crypto/internal/hash.h>
#include <crypto/scatterwalk.h>
#include <crypto/xts.h>
#include <crypto/sm4.h>
@@ -55,6 +57,9 @@ asmlinkage void sm4_ce_xts_enc(const u32 *rkey1, u8 *dst, const u8 *src,
asmlinkage void sm4_ce_xts_dec(const u32 *rkey1, u8 *dst, const u8 *src,
u8 *tweak, unsigned int nbytes,
const u32 *rkey2_enc);
+asmlinkage void sm4_ce_mac_update(const u32 *rkey_enc, u8 *digest,
+ const u8 *src, unsigned int nblocks,
+ bool enc_before, bool enc_after);

EXPORT_SYMBOL(sm4_ce_expand_key);
EXPORT_SYMBOL(sm4_ce_crypt_block);
@@ -72,6 +77,16 @@ struct sm4_essiv_cbc_ctx {
struct crypto_shash *hash;
};

+struct sm4_mac_tfm_ctx {
+ struct sm4_ctx key;
+ u8 __aligned(8) consts[];
+};
+
+struct sm4_mac_desc_ctx {
+ unsigned int len;
+ u8 digest[SM4_BLOCK_SIZE];
+};
+
static int sm4_setkey(struct crypto_skcipher *tfm, const u8 *key,
unsigned int key_len)
{
@@ -721,13 +736,260 @@ static struct skcipher_alg sm4_algs[] = {
}
};

+static int sm4_cbcmac_setkey(struct crypto_shash *tfm, const u8 *key,
+ unsigned int key_len)
+{
+ struct sm4_mac_tfm_ctx *ctx = crypto_shash_ctx(tfm);
+
+ if (key_len != SM4_KEY_SIZE)
+ return -EINVAL;
+
+ kernel_neon_begin();
+ sm4_ce_expand_key(key, ctx->key.rkey_enc, ctx->key.rkey_dec,
+ crypto_sm4_fk, crypto_sm4_ck);
+ kernel_neon_end();
+
+ return 0;
+}
+
+static int sm4_cmac_setkey(struct crypto_shash *tfm, const u8 *key,
+ unsigned int key_len)
+{
+ struct sm4_mac_tfm_ctx *ctx = crypto_shash_ctx(tfm);
+ be128 *consts = (be128 *)ctx->consts;
+ u64 a, b;
+
+ if (key_len != SM4_KEY_SIZE)
+ return -EINVAL;
+
+ memset(consts, 0, SM4_BLOCK_SIZE);
+
+ kernel_neon_begin();
+
+ sm4_ce_expand_key(key, ctx->key.rkey_enc, ctx->key.rkey_dec,
+ crypto_sm4_fk, crypto_sm4_ck);
+
+ /* encrypt the zero block */
+ sm4_ce_crypt_block(ctx->key.rkey_enc, (u8 *)consts, (const u8 *)consts);
+
+ kernel_neon_end();
+
+ /* gf(2^128) multiply zero-ciphertext with u and u^2 */
+ a = be64_to_cpu(consts[0].a);
+ b = be64_to_cpu(consts[0].b);
+ consts[0].a = cpu_to_be64((a << 1) | (b >> 63));
+ consts[0].b = cpu_to_be64((b << 1) ^ ((a >> 63) ? 0x87 : 0));
+
+ a = be64_to_cpu(consts[0].a);
+ b = be64_to_cpu(consts[0].b);
+ consts[1].a = cpu_to_be64((a << 1) | (b >> 63));
+ consts[1].b = cpu_to_be64((b << 1) ^ ((a >> 63) ? 0x87 : 0));
+
+ return 0;
+}
+
+static int sm4_xcbc_setkey(struct crypto_shash *tfm, const u8 *key,
+ unsigned int key_len)
+{
+ struct sm4_mac_tfm_ctx *ctx = crypto_shash_ctx(tfm);
+ u8 __aligned(8) key2[SM4_BLOCK_SIZE];
+ static u8 const ks[3][SM4_BLOCK_SIZE] = {
+ { [0 ... SM4_BLOCK_SIZE - 1] = 0x1},
+ { [0 ... SM4_BLOCK_SIZE - 1] = 0x2},
+ { [0 ... SM4_BLOCK_SIZE - 1] = 0x3},
+ };
+
+ if (key_len != SM4_KEY_SIZE)
+ return -EINVAL;
+
+ kernel_neon_begin();
+
+ sm4_ce_expand_key(key, ctx->key.rkey_enc, ctx->key.rkey_dec,
+ crypto_sm4_fk, crypto_sm4_ck);
+
+ sm4_ce_crypt_block(ctx->key.rkey_enc, key2, ks[0]);
+ sm4_ce_crypt(ctx->key.rkey_enc, ctx->consts, ks[1], 2);
+
+ sm4_ce_expand_key(key2, ctx->key.rkey_enc, ctx->key.rkey_dec,
+ crypto_sm4_fk, crypto_sm4_ck);
+
+ kernel_neon_end();
+
+ return 0;
+}
+
+static int sm4_mac_init(struct shash_desc *desc)
+{
+ struct sm4_mac_desc_ctx *ctx = shash_desc_ctx(desc);
+
+ memset(ctx->digest, 0, SM4_BLOCK_SIZE);
+ ctx->len = 0;
+
+ return 0;
+}
+
+static int sm4_mac_update(struct shash_desc *desc, const u8 *p,
+ unsigned int len)
+{
+ struct sm4_mac_tfm_ctx *tctx = crypto_shash_ctx(desc->tfm);
+ struct sm4_mac_desc_ctx *ctx = shash_desc_ctx(desc);
+ unsigned int l, nblocks;
+
+ if (len == 0)
+ return 0;
+
+ if (ctx->len || ctx->len + len < SM4_BLOCK_SIZE) {
+ l = min(len, SM4_BLOCK_SIZE - ctx->len);
+
+ crypto_xor(ctx->digest + ctx->len, p, l);
+ ctx->len += l;
+ len -= l;
+ p += l;
+ }
+
+ if (len && (ctx->len % SM4_BLOCK_SIZE) == 0) {
+ kernel_neon_begin();
+
+ if (len < SM4_BLOCK_SIZE && ctx->len == SM4_BLOCK_SIZE) {
+ sm4_ce_crypt_block(tctx->key.rkey_enc,
+ ctx->digest, ctx->digest);
+ ctx->len = 0;
+ } else {
+ nblocks = len / SM4_BLOCK_SIZE;
+ len %= SM4_BLOCK_SIZE;
+
+ sm4_ce_mac_update(tctx->key.rkey_enc, ctx->digest, p,
+ nblocks, (ctx->len == SM4_BLOCK_SIZE),
+ (len != 0));
+
+ p += nblocks * SM4_BLOCK_SIZE;
+
+ if (len == 0)
+ ctx->len = SM4_BLOCK_SIZE;
+ }
+
+ kernel_neon_end();
+
+ if (len) {
+ crypto_xor(ctx->digest, p, len);
+ ctx->len = len;
+ }
+ }
+
+ return 0;
+}
+
+static int sm4_cmac_final(struct shash_desc *desc, u8 *out)
+{
+ struct sm4_mac_tfm_ctx *tctx = crypto_shash_ctx(desc->tfm);
+ struct sm4_mac_desc_ctx *ctx = shash_desc_ctx(desc);
+ const u8 *consts = tctx->consts;
+
+ if (ctx->len != SM4_BLOCK_SIZE) {
+ ctx->digest[ctx->len] ^= 0x80;
+ consts += SM4_BLOCK_SIZE;
+ }
+
+ kernel_neon_begin();
+ sm4_ce_mac_update(tctx->key.rkey_enc, ctx->digest, consts, 1,
+ false, true);
+ kernel_neon_end();
+
+ memcpy(out, ctx->digest, SM4_BLOCK_SIZE);
+
+ return 0;
+}
+
+static int sm4_cbcmac_final(struct shash_desc *desc, u8 *out)
+{
+ struct sm4_mac_tfm_ctx *tctx = crypto_shash_ctx(desc->tfm);
+ struct sm4_mac_desc_ctx *ctx = shash_desc_ctx(desc);
+
+ if (ctx->len) {
+ kernel_neon_begin();
+ sm4_ce_crypt_block(tctx->key.rkey_enc, ctx->digest,
+ ctx->digest);
+ kernel_neon_end();
+ }
+
+ memcpy(out, ctx->digest, SM4_BLOCK_SIZE);
+
+ return 0;
+}
+
+static struct shash_alg sm4_mac_algs[] = {
+ {
+ .base = {
+ .cra_name = "cmac(sm4)",
+ .cra_driver_name = "cmac-sm4-ce",
+ .cra_priority = 400,
+ .cra_blocksize = SM4_BLOCK_SIZE,
+ .cra_ctxsize = sizeof(struct sm4_mac_tfm_ctx)
+ + SM4_BLOCK_SIZE * 2,
+ .cra_module = THIS_MODULE,
+ },
+ .digestsize = SM4_BLOCK_SIZE,
+ .init = sm4_mac_init,
+ .update = sm4_mac_update,
+ .final = sm4_cmac_final,
+ .setkey = sm4_cmac_setkey,
+ .descsize = sizeof(struct sm4_mac_desc_ctx),
+ }, {
+ .base = {
+ .cra_name = "xcbc(sm4)",
+ .cra_driver_name = "xcbc-sm4-ce",
+ .cra_priority = 400,
+ .cra_blocksize = SM4_BLOCK_SIZE,
+ .cra_ctxsize = sizeof(struct sm4_mac_tfm_ctx)
+ + SM4_BLOCK_SIZE * 2,
+ .cra_module = THIS_MODULE,
+ },
+ .digestsize = SM4_BLOCK_SIZE,
+ .init = sm4_mac_init,
+ .update = sm4_mac_update,
+ .final = sm4_cmac_final,
+ .setkey = sm4_xcbc_setkey,
+ .descsize = sizeof(struct sm4_mac_desc_ctx),
+ }, {
+ .base = {
+ .cra_name = "cbcmac(sm4)",
+ .cra_driver_name = "cbcmac-sm4-ce",
+ .cra_priority = 400,
+ .cra_blocksize = 1,
+ .cra_ctxsize = sizeof(struct sm4_mac_tfm_ctx),
+ .cra_module = THIS_MODULE,
+ },
+ .digestsize = SM4_BLOCK_SIZE,
+ .init = sm4_mac_init,
+ .update = sm4_mac_update,
+ .final = sm4_cbcmac_final,
+ .setkey = sm4_cbcmac_setkey,
+ .descsize = sizeof(struct sm4_mac_desc_ctx),
+ }
+};
+
static int __init sm4_init(void)
{
- return crypto_register_skciphers(sm4_algs, ARRAY_SIZE(sm4_algs));
+ int err;
+
+ err = crypto_register_skciphers(sm4_algs, ARRAY_SIZE(sm4_algs));
+ if (err)
+ return err;
+
+ err = crypto_register_shashes(sm4_mac_algs, ARRAY_SIZE(sm4_mac_algs));
+ if (err)
+ goto out_err;
+
+ return 0;
+
+out_err:
+ crypto_unregister_skciphers(sm4_algs, ARRAY_SIZE(sm4_algs));
+ return err;
}

static void __exit sm4_exit(void)
{
+ crypto_unregister_shashes(sm4_mac_algs, ARRAY_SIZE(sm4_mac_algs));
crypto_unregister_skciphers(sm4_algs, ARRAY_SIZE(sm4_algs));
}

@@ -744,5 +1006,8 @@ MODULE_ALIAS_CRYPTO("ctr(sm4)");
MODULE_ALIAS_CRYPTO("cts(cbc(sm4))");
MODULE_ALIAS_CRYPTO("xts(sm4)");
MODULE_ALIAS_CRYPTO("essiv(cbc(sm4),sm3)");
+MODULE_ALIAS_CRYPTO("cmac(sm4)");
+MODULE_ALIAS_CRYPTO("xcbc(sm4)");
+MODULE_ALIAS_CRYPTO("cbcmac(sm4)");
MODULE_AUTHOR("Tianjia Zhang <[email protected]>");
MODULE_LICENSE("GPL v2");
--
2.24.3 (Apple Git-128)

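For completeness, the MAC transforms registered by the patch above are reached
through the regular shash API; a minimal sketch of standard kernel crypto API
usage (not part of this series, error handling trimmed):

#include <crypto/hash.h>
#include <linux/err.h>

static int sm4_cmac_example(const u8 *key, unsigned int keylen,
			    const u8 *data, unsigned int len, u8 mac[16])
{
	struct crypto_shash *tfm;
	int err;

	/* "cmac(sm4)" should resolve to cmac-sm4-ce when the CE driver is
	 * loaded, since its cra_priority (400) is higher than the generic
	 * template's. */
	tfm = crypto_alloc_shash("cmac(sm4)", 0, 0);
	if (IS_ERR(tfm))
		return PTR_ERR(tfm);

	err = crypto_shash_setkey(tfm, key, keylen);
	if (!err)
		err = crypto_shash_tfm_digest(tfm, data, len, mac);

	crypto_free_shash(tfm);
	return err;
}
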
2022-09-26 09:48:28

by Tianjia Zhang

[permalink] [raw]
Subject: [PATCH 15/16] crypto: arm64/sm4 - add CE implementation for GCM mode

This patch is a CE-optimized assembly implementation for GCM mode.

Benchmark on T-Head Yitian-710 2.75 GHz. The data comes from the 224 and 224
modes of tcrypt, comparing the performance before and after this patch (the
driver used before this patch is gcm_base(ctr-sm4-ce,ghash-generic)).
The abscissas are blocks of different lengths. The data is tabulated and the
unit is Mb/s:

Before (gcm_base(ctr-sm4-ce,ghash-generic)):

gcm(sm4) | 16 64 256 512 1024 1420 4096 8192
-------------+---------------------------------------------------------------------
GCM enc | 25.24 64.65 104.66 116.69 123.81 125.12 129.67 130.62
GCM dec | 25.40 64.80 104.74 116.70 123.81 125.21 129.68 130.59
GCM mb enc | 24.95 64.06 104.20 116.38 123.55 124.97 129.63 130.61
GCM mb dec | 24.92 64.00 104.13 116.34 123.55 124.98 129.56 130.48

After:

gcm-sm4-ce | 16 64 256 512 1024 1420 4096 8192
-------------+---------------------------------------------------------------------
GCM enc | 108.62 397.18 971.60 1283.92 1522.77 1513.39 1777.00 1806.96
GCM dec | 116.36 398.14 1004.27 1319.11 1624.21 1635.43 1932.54 1974.20
GCM mb enc | 107.13 391.79 962.05 1274.94 1514.76 1508.57 1769.07 1801.58
GCM mb dec | 113.40 389.36 988.51 1307.68 1619.10 1631.55 1931.70 1970.86

Signed-off-by: Tianjia Zhang <[email protected]>
---
arch/arm64/crypto/Kconfig | 16 +
arch/arm64/crypto/Makefile | 3 +
arch/arm64/crypto/sm4-ce-gcm-core.S | 741 ++++++++++++++++++++++++++++
arch/arm64/crypto/sm4-ce-gcm-glue.c | 286 +++++++++++
4 files changed, 1046 insertions(+)
create mode 100644 arch/arm64/crypto/sm4-ce-gcm-core.S
create mode 100644 arch/arm64/crypto/sm4-ce-gcm-glue.c
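
For reviewers less familiar with GHASH: the PMULL code below computes the usual
chain Y = (Y ^ X_i) * H in GF(2^128), and precomputes the powers H^1..H^4 so
that up to four blocks can be folded per iteration (three on the decrypt path,
where the multiplies are interleaved with the SM4 rounds). A slow, bit-by-bit C
model of a single block update, following NIST SP 800-38D (illustrative only,
not part of the patch):

#include <stdint.h>

/*
 * y = (y ^ x) * h in GF(2^128) with the GCM polynomial
 * x^128 + x^7 + x^2 + x + 1, MSB-first bit order.
 */
static void ghash_update_model(uint8_t y[16], const uint8_t x[16],
			       const uint8_t h[16])
{
	uint64_t zh = 0, zl = 0;	/* accumulator Z */
	uint64_t vh = 0, vl = 0;	/* V, starts as H */
	uint8_t t[16];
	int i, lsb;

	for (i = 0; i < 16; i++)
		t[i] = y[i] ^ x[i];	/* the block actually multiplied by H */

	for (i = 0; i < 8; i++) {
		vh = (vh << 8) | h[i];
		vl = (vl << 8) | h[i + 8];
	}

	for (i = 0; i < 128; i++) {
		if ((t[i / 8] >> (7 - (i % 8))) & 1) {	/* bit i of (y ^ x) */
			zh ^= vh;
			zl ^= vl;
		}
		lsb = vl & 1;
		vl = (vl >> 1) | (vh << 63);		/* V >>= 1 */
		vh >>= 1;
		if (lsb)
			vh ^= 0xe100000000000000ULL;	/* modular reduction */
	}

	for (i = 0; i < 8; i++) {
		y[i] = (uint8_t)(zh >> (56 - 8 * i));
		y[i + 8] = (uint8_t)(zl >> (56 - 8 * i));
	}
}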

diff --git a/arch/arm64/crypto/Kconfig b/arch/arm64/crypto/Kconfig
index 2611036a3e3f..6793d5bc3ee5 100644
--- a/arch/arm64/crypto/Kconfig
+++ b/arch/arm64/crypto/Kconfig
@@ -297,6 +297,22 @@ config CRYPTO_SM4_ARM64_CE_CCM
- ARMv8 Crypto Extensions
- NEON (Advanced SIMD) extensions

+config CRYPTO_SM4_ARM64_CE_GCM
+ tristate "AEAD cipher: SM4 in GCM mode (ARMv8 Crypto Extensions)"
+ depends on KERNEL_MODE_NEON
+ select CRYPTO_ALGAPI
+ select CRYPTO_AEAD
+ select CRYPTO_SM4
+ select CRYPTO_SM4_ARM64_CE_BLK
+ help
+ AEAD cipher: SM4 cipher algorithms (OSCCA GB/T 32907-2016) with
+ GCM (Galois/Counter Mode) authenticated encryption mode (NIST SP800-38D)
+
+ Architecture: arm64 using:
+ - ARMv8 Crypto Extensions
+ - PMULL (Polynomial Multiply Long) instructions
+ - NEON (Advanced SIMD) extensions
+
config CRYPTO_CRCT10DIF_ARM64_CE
tristate "CRCT10DIF (PMULL)"
depends on KERNEL_MODE_NEON && CRC_T10DIF
diff --git a/arch/arm64/crypto/Makefile b/arch/arm64/crypto/Makefile
index 843ea5266965..4818e204c2ac 100644
--- a/arch/arm64/crypto/Makefile
+++ b/arch/arm64/crypto/Makefile
@@ -32,6 +32,9 @@ sm4-ce-y := sm4-ce-glue.o sm4-ce-core.o
obj-$(CONFIG_CRYPTO_SM4_ARM64_CE_CCM) += sm4-ce-ccm.o
sm4-ce-ccm-y := sm4-ce-ccm-glue.o sm4-ce-ccm-core.o

+obj-$(CONFIG_CRYPTO_SM4_ARM64_CE_GCM) += sm4-ce-gcm.o
+sm4-ce-gcm-y := sm4-ce-gcm-glue.o sm4-ce-gcm-core.o
+
obj-$(CONFIG_CRYPTO_SM4_ARM64_NEON_BLK) += sm4-neon.o
sm4-neon-y := sm4-neon-glue.o sm4-neon-core.o

diff --git a/arch/arm64/crypto/sm4-ce-gcm-core.S b/arch/arm64/crypto/sm4-ce-gcm-core.S
new file mode 100644
index 000000000000..7aa3ec18a289
--- /dev/null
+++ b/arch/arm64/crypto/sm4-ce-gcm-core.S
@@ -0,0 +1,741 @@
+/* SPDX-License-Identifier: GPL-2.0-or-later */
+/*
+ * SM4-GCM AEAD Algorithm using ARMv8 Crypto Extensions
+ * as specified in rfc8998
+ * https://datatracker.ietf.org/doc/html/rfc8998
+ *
+ * Copyright (C) 2016 Jussi Kivilinna <[email protected]>
+ * Copyright (C) 2022 Tianjia Zhang <[email protected]>
+ */
+
+#include <linux/linkage.h>
+#include <asm/assembler.h>
+#include "sm4-ce-asm.h"
+
+.arch armv8-a+crypto
+
+.irp b, 0, 1, 2, 3, 24, 25, 26, 27, 28, 29, 30, 31
+ .set .Lv\b\().4s, \b
+.endr
+
+.macro sm4e, vd, vn
+ .inst 0xcec08400 | (.L\vn << 5) | .L\vd
+.endm
+
+/* Register macros */
+
+/* Used for both encryption and decryption */
+#define RHASH v21
+#define RRCONST v22
+#define RZERO v23
+
+/* Helper macros. */
+
+/*
+ * input: m0, m1
+ * output: r0:r1 (low 128-bits in r0, high in r1)
+ */
+#define PMUL_128x128(r0, r1, m0, m1, T0, T1) \
+ ext T0.16b, m1.16b, m1.16b, #8; \
+ pmull r0.1q, m0.1d, m1.1d; \
+ pmull T1.1q, m0.1d, T0.1d; \
+ pmull2 T0.1q, m0.2d, T0.2d; \
+ pmull2 r1.1q, m0.2d, m1.2d; \
+ eor T0.16b, T0.16b, T1.16b; \
+ ext T1.16b, RZERO.16b, T0.16b, #8; \
+ ext T0.16b, T0.16b, RZERO.16b, #8; \
+ eor r0.16b, r0.16b, T1.16b; \
+ eor r1.16b, r1.16b, T0.16b;
+
+#define PMUL_128x128_4x(r0, r1, m0, m1, T0, T1, \
+ r2, r3, m2, m3, T2, T3, \
+ r4, r5, m4, m5, T4, T5, \
+ r6, r7, m6, m7, T6, T7) \
+ ext T0.16b, m1.16b, m1.16b, #8; \
+ ext T2.16b, m3.16b, m3.16b, #8; \
+ ext T4.16b, m5.16b, m5.16b, #8; \
+ ext T6.16b, m7.16b, m7.16b, #8; \
+ pmull r0.1q, m0.1d, m1.1d; \
+ pmull r2.1q, m2.1d, m3.1d; \
+ pmull r4.1q, m4.1d, m5.1d; \
+ pmull r6.1q, m6.1d, m7.1d; \
+ pmull T1.1q, m0.1d, T0.1d; \
+ pmull T3.1q, m2.1d, T2.1d; \
+ pmull T5.1q, m4.1d, T4.1d; \
+ pmull T7.1q, m6.1d, T6.1d; \
+ pmull2 T0.1q, m0.2d, T0.2d; \
+ pmull2 T2.1q, m2.2d, T2.2d; \
+ pmull2 T4.1q, m4.2d, T4.2d; \
+ pmull2 T6.1q, m6.2d, T6.2d; \
+ pmull2 r1.1q, m0.2d, m1.2d; \
+ pmull2 r3.1q, m2.2d, m3.2d; \
+ pmull2 r5.1q, m4.2d, m5.2d; \
+ pmull2 r7.1q, m6.2d, m7.2d; \
+ eor T0.16b, T0.16b, T1.16b; \
+ eor T2.16b, T2.16b, T3.16b; \
+ eor T4.16b, T4.16b, T5.16b; \
+ eor T6.16b, T6.16b, T7.16b; \
+ ext T1.16b, RZERO.16b, T0.16b, #8; \
+ ext T3.16b, RZERO.16b, T2.16b, #8; \
+ ext T5.16b, RZERO.16b, T4.16b, #8; \
+ ext T7.16b, RZERO.16b, T6.16b, #8; \
+ ext T0.16b, T0.16b, RZERO.16b, #8; \
+ ext T2.16b, T2.16b, RZERO.16b, #8; \
+ ext T4.16b, T4.16b, RZERO.16b, #8; \
+ ext T6.16b, T6.16b, RZERO.16b, #8; \
+ eor r0.16b, r0.16b, T1.16b; \
+ eor r2.16b, r2.16b, T3.16b; \
+ eor r4.16b, r4.16b, T5.16b; \
+ eor r6.16b, r6.16b, T7.16b; \
+ eor r1.16b, r1.16b, T0.16b; \
+ eor r3.16b, r3.16b, T2.16b; \
+ eor r5.16b, r5.16b, T4.16b; \
+ eor r7.16b, r7.16b, T6.16b;
+
+/*
+ * input: r0:r1 (low 128-bits in r0, high in r1)
+ * output: a
+ */
+#define REDUCTION(a, r0, r1, rconst, T0, T1) \
+ pmull2 T0.1q, r1.2d, rconst.2d; \
+ ext T1.16b, T0.16b, RZERO.16b, #8; \
+ ext T0.16b, RZERO.16b, T0.16b, #8; \
+ eor r1.16b, r1.16b, T1.16b; \
+ eor r0.16b, r0.16b, T0.16b; \
+ pmull T0.1q, r1.1d, rconst.1d; \
+ eor a.16b, r0.16b, T0.16b;
+
+#define SM4_CRYPT_PMUL_128x128_BLK(b0, r0, r1, m0, m1, T0, T1) \
+ rev32 b0.16b, b0.16b; \
+ ext T0.16b, m1.16b, m1.16b, #8; \
+ sm4e b0.4s, v24.4s; \
+ pmull r0.1q, m0.1d, m1.1d; \
+ sm4e b0.4s, v25.4s; \
+ pmull T1.1q, m0.1d, T0.1d; \
+ sm4e b0.4s, v26.4s; \
+ pmull2 T0.1q, m0.2d, T0.2d; \
+ sm4e b0.4s, v27.4s; \
+ pmull2 r1.1q, m0.2d, m1.2d; \
+ sm4e b0.4s, v28.4s; \
+ eor T0.16b, T0.16b, T1.16b; \
+ sm4e b0.4s, v29.4s; \
+ ext T1.16b, RZERO.16b, T0.16b, #8; \
+ sm4e b0.4s, v30.4s; \
+ ext T0.16b, T0.16b, RZERO.16b, #8; \
+ sm4e b0.4s, v31.4s; \
+ eor r0.16b, r0.16b, T1.16b; \
+ rev64 b0.4s, b0.4s; \
+ eor r1.16b, r1.16b, T0.16b; \
+ ext b0.16b, b0.16b, b0.16b, #8; \
+ rev32 b0.16b, b0.16b;
+
+#define SM4_CRYPT_PMUL_128x128_BLK3(b0, b1, b2, \
+ r0, r1, m0, m1, T0, T1, \
+ r2, r3, m2, m3, T2, T3, \
+ r4, r5, m4, m5, T4, T5) \
+ rev32 b0.16b, b0.16b; \
+ rev32 b1.16b, b1.16b; \
+ rev32 b2.16b, b2.16b; \
+ ext T0.16b, m1.16b, m1.16b, #8; \
+ ext T2.16b, m3.16b, m3.16b, #8; \
+ ext T4.16b, m5.16b, m5.16b, #8; \
+ sm4e b0.4s, v24.4s; \
+ sm4e b1.4s, v24.4s; \
+ sm4e b2.4s, v24.4s; \
+ pmull r0.1q, m0.1d, m1.1d; \
+ pmull r2.1q, m2.1d, m3.1d; \
+ pmull r4.1q, m4.1d, m5.1d; \
+ sm4e b0.4s, v25.4s; \
+ sm4e b1.4s, v25.4s; \
+ sm4e b2.4s, v25.4s; \
+ pmull T1.1q, m0.1d, T0.1d; \
+ pmull T3.1q, m2.1d, T2.1d; \
+ pmull T5.1q, m4.1d, T4.1d; \
+ sm4e b0.4s, v26.4s; \
+ sm4e b1.4s, v26.4s; \
+ sm4e b2.4s, v26.4s; \
+ pmull2 T0.1q, m0.2d, T0.2d; \
+ pmull2 T2.1q, m2.2d, T2.2d; \
+ pmull2 T4.1q, m4.2d, T4.2d; \
+ sm4e b0.4s, v27.4s; \
+ sm4e b1.4s, v27.4s; \
+ sm4e b2.4s, v27.4s; \
+ pmull2 r1.1q, m0.2d, m1.2d; \
+ pmull2 r3.1q, m2.2d, m3.2d; \
+ pmull2 r5.1q, m4.2d, m5.2d; \
+ sm4e b0.4s, v28.4s; \
+ sm4e b1.4s, v28.4s; \
+ sm4e b2.4s, v28.4s; \
+ eor T0.16b, T0.16b, T1.16b; \
+ eor T2.16b, T2.16b, T3.16b; \
+ eor T4.16b, T4.16b, T5.16b; \
+ sm4e b0.4s, v29.4s; \
+ sm4e b1.4s, v29.4s; \
+ sm4e b2.4s, v29.4s; \
+ ext T1.16b, RZERO.16b, T0.16b, #8; \
+ ext T3.16b, RZERO.16b, T2.16b, #8; \
+ ext T5.16b, RZERO.16b, T4.16b, #8; \
+ sm4e b0.4s, v30.4s; \
+ sm4e b1.4s, v30.4s; \
+ sm4e b2.4s, v30.4s; \
+ ext T0.16b, T0.16b, RZERO.16b, #8; \
+ ext T2.16b, T2.16b, RZERO.16b, #8; \
+ ext T4.16b, T4.16b, RZERO.16b, #8; \
+ sm4e b0.4s, v31.4s; \
+ sm4e b1.4s, v31.4s; \
+ sm4e b2.4s, v31.4s; \
+ eor r0.16b, r0.16b, T1.16b; \
+ eor r2.16b, r2.16b, T3.16b; \
+ eor r4.16b, r4.16b, T5.16b; \
+ rev64 b0.4s, b0.4s; \
+ rev64 b1.4s, b1.4s; \
+ rev64 b2.4s, b2.4s; \
+ eor r1.16b, r1.16b, T0.16b; \
+ eor r3.16b, r3.16b, T2.16b; \
+ eor r5.16b, r5.16b, T4.16b; \
+ ext b0.16b, b0.16b, b0.16b, #8; \
+ ext b1.16b, b1.16b, b1.16b, #8; \
+ ext b2.16b, b2.16b, b2.16b, #8; \
+ eor r0.16b, r0.16b, r2.16b; \
+ eor r1.16b, r1.16b, r3.16b; \
+ rev32 b0.16b, b0.16b; \
+ rev32 b1.16b, b1.16b; \
+ rev32 b2.16b, b2.16b; \
+ eor r0.16b, r0.16b, r4.16b; \
+ eor r1.16b, r1.16b, r5.16b;
+
+#define inc32_le128(vctr) \
+ mov vctr.d[1], x9; \
+ add w6, w9, #1; \
+ mov vctr.d[0], x8; \
+ bfi x9, x6, #0, #32; \
+ rev64 vctr.16b, vctr.16b;
+
+#define GTAG_HASH_LENGTHS(vctr0, vlen) \
+ ld1 {vlen.16b}, [x7]; \
+ /* construct CTR0 */ \
+ /* the lower 32-bits of initial IV is always be32(1) */ \
+ mov x6, #0x1; \
+ bfi x9, x6, #0, #32; \
+ mov vctr0.d[0], x8; \
+ mov vctr0.d[1], x9; \
+ rbit vlen.16b, vlen.16b; \
+ rev64 vctr0.16b, vctr0.16b; \
+ /* authtag = GCTR(CTR0, GHASH) */ \
+ eor RHASH.16b, RHASH.16b, vlen.16b; \
+ SM4_CRYPT_PMUL_128x128_BLK(vctr0, RR0, RR1, RHASH, RH1, \
+ RTMP0, RTMP1); \
+ REDUCTION(RHASH, RR0, RR1, RRCONST, RTMP2, RTMP3); \
+ rbit RHASH.16b, RHASH.16b; \
+ eor RHASH.16b, RHASH.16b, vctr0.16b;
+
+
+/* Register macros for encrypt and ghash */
+
+/* can be the same as input v0-v3 */
+#define RR1 v0
+#define RR3 v1
+#define RR5 v2
+#define RR7 v3
+
+#define RR0 v4
+#define RR2 v5
+#define RR4 v6
+#define RR6 v7
+
+#define RTMP0 v8
+#define RTMP1 v9
+#define RTMP2 v10
+#define RTMP3 v11
+#define RTMP4 v12
+#define RTMP5 v13
+#define RTMP6 v14
+#define RTMP7 v15
+
+#define RH1 v16
+#define RH2 v17
+#define RH3 v18
+#define RH4 v19
+
+.align 3
+SYM_FUNC_START(sm4_ce_pmull_ghash_setup)
+ /* input:
+ * x0: round key array, CTX
+ * x1: ghash table
+ */
+ SM4_PREPARE(x0)
+
+ adr_l x2, .Lghash_rconst
+ ld1r {RRCONST.2d}, [x2]
+
+ eor RZERO.16b, RZERO.16b, RZERO.16b
+
+ /* H = E(K, 0^128) */
+ rev32 v0.16b, RZERO.16b
+ SM4_CRYPT_BLK_BE(v0)
+
+ /* H ^ 1 */
+ rbit RH1.16b, v0.16b
+
+ /* H ^ 2 */
+ PMUL_128x128(RR0, RR1, RH1, RH1, RTMP0, RTMP1)
+ REDUCTION(RH2, RR0, RR1, RRCONST, RTMP2, RTMP3)
+
+ /* H ^ 3 */
+ PMUL_128x128(RR0, RR1, RH2, RH1, RTMP0, RTMP1)
+ REDUCTION(RH3, RR0, RR1, RRCONST, RTMP2, RTMP3)
+
+ /* H ^ 4 */
+ PMUL_128x128(RR0, RR1, RH2, RH2, RTMP0, RTMP1)
+ REDUCTION(RH4, RR0, RR1, RRCONST, RTMP2, RTMP3)
+
+ st1 {RH1.16b-RH4.16b}, [x1]
+
+ ret
+SYM_FUNC_END(sm4_ce_pmull_ghash_setup)
+
+.align 3
+SYM_FUNC_START(pmull_ghash_update)
+ /* input:
+ * x0: ghash table
+ * x1: ghash result
+ * x2: src
+ * w3: nblocks
+ */
+ ld1 {RH1.16b-RH4.16b}, [x0]
+
+ ld1 {RHASH.16b}, [x1]
+ rbit RHASH.16b, RHASH.16b
+
+ adr_l x4, .Lghash_rconst
+ ld1r {RRCONST.2d}, [x4]
+
+ eor RZERO.16b, RZERO.16b, RZERO.16b
+
+.Lghash_loop_4x:
+ cmp w3, #4
+ blt .Lghash_loop_1x
+
+ sub w3, w3, #4
+
+ ld1 {v0.16b-v3.16b}, [x2], #64
+
+ rbit v0.16b, v0.16b
+ rbit v1.16b, v1.16b
+ rbit v2.16b, v2.16b
+ rbit v3.16b, v3.16b
+
+ /*
+ * (in0 ^ HASH) * H^4 => rr0:rr1
+ * (in1) * H^3 => rr2:rr3
+ * (in2) * H^2 => rr4:rr5
+ * (in3) * H^1 => rr6:rr7
+ */
+ eor RHASH.16b, RHASH.16b, v0.16b
+
+ PMUL_128x128_4x(RR0, RR1, RHASH, RH4, RTMP0, RTMP1,
+ RR2, RR3, v1, RH3, RTMP2, RTMP3,
+ RR4, RR5, v2, RH2, RTMP4, RTMP5,
+ RR6, RR7, v3, RH1, RTMP6, RTMP7)
+
+ eor RR0.16b, RR0.16b, RR2.16b
+ eor RR1.16b, RR1.16b, RR3.16b
+ eor RR0.16b, RR0.16b, RR4.16b
+ eor RR1.16b, RR1.16b, RR5.16b
+ eor RR0.16b, RR0.16b, RR6.16b
+ eor RR1.16b, RR1.16b, RR7.16b
+
+ REDUCTION(RHASH, RR0, RR1, RRCONST, RTMP0, RTMP1)
+
+ cbz w3, .Lghash_end
+ b .Lghash_loop_4x
+
+.Lghash_loop_1x:
+ sub w3, w3, #1
+
+ ld1 {v0.16b}, [x2], #16
+ rbit v0.16b, v0.16b
+ eor RHASH.16b, RHASH.16b, v0.16b
+
+ PMUL_128x128(RR0, RR1, RHASH, RH1, RTMP0, RTMP1)
+ REDUCTION(RHASH, RR0, RR1, RRCONST, RTMP2, RTMP3)
+
+ cbnz w3, .Lghash_loop_1x
+
+.Lghash_end:
+ rbit RHASH.16b, RHASH.16b
+ st1 {RHASH.2d}, [x1]
+
+ ret
+SYM_FUNC_END(pmull_ghash_update)
+
+.align 3
+SYM_FUNC_START(sm4_ce_pmull_gcm_enc)
+ /* input:
+ * x0: round key array, CTX
+ * x1: dst
+ * x2: src
+ * x3: ctr (big endian, 128 bit)
+ * w4: nbytes
+ * x5: ghash result
+ * x6: ghash table
+ * x7: lengths (only for last block)
+ */
+ SM4_PREPARE(x0)
+
+ ldp x8, x9, [x3]
+ rev x8, x8
+ rev x9, x9
+
+ ld1 {RH1.16b-RH4.16b}, [x6]
+
+ ld1 {RHASH.16b}, [x5]
+ rbit RHASH.16b, RHASH.16b
+
+ adr_l x6, .Lghash_rconst
+ ld1r {RRCONST.2d}, [x6]
+
+ eor RZERO.16b, RZERO.16b, RZERO.16b
+
+ cbz w4, .Lgcm_enc_hash_len
+
+.Lgcm_enc_loop_4x:
+ cmp w4, #(4 * 16)
+ blt .Lgcm_enc_loop_1x
+
+ sub w4, w4, #(4 * 16)
+
+ /* construct CTRs */
+ inc32_le128(v0) /* +0 */
+ inc32_le128(v1) /* +1 */
+ inc32_le128(v2) /* +2 */
+ inc32_le128(v3) /* +3 */
+
+ ld1 {RTMP0.16b-RTMP3.16b}, [x2], #64
+
+ SM4_CRYPT_BLK4(v0, v1, v2, v3)
+
+ eor v0.16b, v0.16b, RTMP0.16b
+ eor v1.16b, v1.16b, RTMP1.16b
+ eor v2.16b, v2.16b, RTMP2.16b
+ eor v3.16b, v3.16b, RTMP3.16b
+ st1 {v0.16b-v3.16b}, [x1], #64
+
+ /* ghash update */
+
+ rbit v0.16b, v0.16b
+ rbit v1.16b, v1.16b
+ rbit v2.16b, v2.16b
+ rbit v3.16b, v3.16b
+
+ /*
+ * (in0 ^ HASH) * H^4 => rr0:rr1
+ * (in1) * H^3 => rr2:rr3
+ * (in2) * H^2 => rr4:rr5
+ * (in3) * H^1 => rr6:rr7
+ */
+ eor RHASH.16b, RHASH.16b, v0.16b
+
+ PMUL_128x128_4x(RR0, RR1, RHASH, RH4, RTMP0, RTMP1,
+ RR2, RR3, v1, RH3, RTMP2, RTMP3,
+ RR4, RR5, v2, RH2, RTMP4, RTMP5,
+ RR6, RR7, v3, RH1, RTMP6, RTMP7)
+
+ eor RR0.16b, RR0.16b, RR2.16b
+ eor RR1.16b, RR1.16b, RR3.16b
+ eor RR0.16b, RR0.16b, RR4.16b
+ eor RR1.16b, RR1.16b, RR5.16b
+ eor RR0.16b, RR0.16b, RR6.16b
+ eor RR1.16b, RR1.16b, RR7.16b
+
+ REDUCTION(RHASH, RR0, RR1, RRCONST, RTMP0, RTMP1)
+
+ cbz w4, .Lgcm_enc_hash_len
+ b .Lgcm_enc_loop_4x
+
+.Lgcm_enc_loop_1x:
+ cmp w4, #16
+ blt .Lgcm_enc_tail
+
+ sub w4, w4, #16
+
+ /* construct CTRs */
+ inc32_le128(v0)
+
+ ld1 {RTMP0.16b}, [x2], #16
+
+ SM4_CRYPT_BLK(v0)
+
+ eor v0.16b, v0.16b, RTMP0.16b
+ st1 {v0.16b}, [x1], #16
+
+ /* ghash update */
+ rbit v0.16b, v0.16b
+ eor RHASH.16b, RHASH.16b, v0.16b
+ PMUL_128x128(RR0, RR1, RHASH, RH1, RTMP0, RTMP1)
+ REDUCTION(RHASH, RR0, RR1, RRCONST, RTMP2, RTMP3)
+
+ cbz w4, .Lgcm_enc_hash_len
+ b .Lgcm_enc_loop_1x
+
+.Lgcm_enc_tail:
+ /* construct CTRs */
+ inc32_le128(v0)
+ SM4_CRYPT_BLK(v0)
+
+ /* load permute table */
+ adr_l x0, .Lcts_permute_table
+ add x0, x0, #32
+ sub x0, x0, w4, uxtw
+ ld1 {v3.16b}, [x0]
+
+.Lgcm_enc_tail_loop:
+ /* do encrypt */
+ ldrb w0, [x2], #1 /* get 1 byte from input */
+ umov w6, v0.b[0] /* get top crypted byte */
+ eor w6, w6, w0 /* w6 = CTR ^ input */
+ strb w6, [x1], #1 /* store out byte */
+
+ /* shift right out one byte */
+ ext v0.16b, v0.16b, v0.16b, #1
+ /* the last ciphertext is placed in high bytes */
+ ins v0.b[15], w6
+
+ subs w4, w4, #1
+ bne .Lgcm_enc_tail_loop
+
+ /* padding last block with zeros */
+ tbl v0.16b, {v0.16b}, v3.16b
+
+ /* ghash update */
+ rbit v0.16b, v0.16b
+ eor RHASH.16b, RHASH.16b, v0.16b
+ PMUL_128x128(RR0, RR1, RHASH, RH1, RTMP0, RTMP1)
+ REDUCTION(RHASH, RR0, RR1, RRCONST, RTMP2, RTMP3)
+
+.Lgcm_enc_hash_len:
+ cbz x7, .Lgcm_enc_end
+
+ GTAG_HASH_LENGTHS(v1, v3)
+
+ b .Lgcm_enc_ret
+
+.Lgcm_enc_end:
+ /* store new CTR */
+ rev x8, x8
+ rev x9, x9
+ stp x8, x9, [x3]
+
+ rbit RHASH.16b, RHASH.16b
+
+.Lgcm_enc_ret:
+ /* store new MAC */
+ st1 {RHASH.2d}, [x5]
+
+ ret
+SYM_FUNC_END(sm4_ce_pmull_gcm_enc)
+
+#undef RR1
+#undef RR3
+#undef RR5
+#undef RR7
+#undef RR0
+#undef RR2
+#undef RR4
+#undef RR6
+#undef RTMP0
+#undef RTMP1
+#undef RTMP2
+#undef RTMP3
+#undef RTMP4
+#undef RTMP5
+#undef RTMP6
+#undef RTMP7
+#undef RH1
+#undef RH2
+#undef RH3
+#undef RH4
+
+
+/* Register macros for decrypt */
+
+/* v0-v2 for building CTRs, v3-v5 for saving inputs */
+
+#define RR1 v6
+#define RR3 v7
+#define RR5 v8
+
+#define RR0 v9
+#define RR2 v10
+#define RR4 v11
+
+#define RTMP0 v12
+#define RTMP1 v13
+#define RTMP2 v14
+#define RTMP3 v15
+#define RTMP4 v16
+#define RTMP5 v17
+
+#define RH1 v18
+#define RH2 v19
+#define RH3 v20
+
+.align 3
+SYM_FUNC_START(sm4_ce_pmull_gcm_dec)
+ /* input:
+ * x0: round key array, CTX
+ * x1: dst
+ * x2: src
+ * x3: ctr (big endian, 128 bit)
+ * w4: nbytes
+ * x5: ghash result
+ * x6: ghash table
+ * x7: lengths (only for last block)
+ */
+ SM4_PREPARE(x0)
+
+ ldp x8, x9, [x3]
+ rev x8, x8
+ rev x9, x9
+
+ ld1 {RH1.16b-RH3.16b}, [x6]
+
+ ld1 {RHASH.16b}, [x5]
+ rbit RHASH.16b, RHASH.16b
+
+ adr_l x6, .Lghash_rconst
+ ld1r {RRCONST.2d}, [x6]
+
+ eor RZERO.16b, RZERO.16b, RZERO.16b
+
+ cbz w4, .Lgcm_dec_hash_len
+
+.Lgcm_dec_loop_3x:
+ cmp w4, #(3 * 16)
+ blt .Lgcm_dec_loop_1x
+
+ sub w4, w4, #(3 * 16)
+
+ ld1 {v3.16b-v5.16b}, [x2], #(3 * 16)
+
+ /* construct CTRs */
+ inc32_le128(v0) /* +0 */
+ rbit v6.16b, v3.16b
+ inc32_le128(v1) /* +1 */
+ rbit v7.16b, v4.16b
+ inc32_le128(v2) /* +2 */
+ rbit v8.16b, v5.16b
+
+ eor RHASH.16b, RHASH.16b, v6.16b
+
+ /* decrypt & ghash update */
+ SM4_CRYPT_PMUL_128x128_BLK3(v0, v1, v2,
+ RR0, RR1, RHASH, RH3, RTMP0, RTMP1,
+ RR2, RR3, v7, RH2, RTMP2, RTMP3,
+ RR4, RR5, v8, RH1, RTMP4, RTMP5)
+
+ eor v0.16b, v0.16b, v3.16b
+ eor v1.16b, v1.16b, v4.16b
+ eor v2.16b, v2.16b, v5.16b
+
+ REDUCTION(RHASH, RR0, RR1, RRCONST, RTMP0, RTMP1)
+
+ st1 {v0.16b-v2.16b}, [x1], #(3 * 16)
+
+ cbz w4, .Lgcm_dec_hash_len
+ b .Lgcm_dec_loop_3x
+
+.Lgcm_dec_loop_1x:
+ cmp w4, #16
+ blt .Lgcm_dec_tail
+
+ sub w4, w4, #16
+
+ ld1 {v3.16b}, [x2], #16
+
+ /* construct CTRs */
+ inc32_le128(v0)
+ rbit v6.16b, v3.16b
+
+ eor RHASH.16b, RHASH.16b, v6.16b
+
+ SM4_CRYPT_PMUL_128x128_BLK(v0, RR0, RR1, RHASH, RH1, RTMP0, RTMP1)
+
+ eor v0.16b, v0.16b, v3.16b
+
+ REDUCTION(RHASH, RR0, RR1, RRCONST, RTMP2, RTMP3)
+
+ st1 {v0.16b}, [x1], #16
+
+ cbz w4, .Lgcm_dec_hash_len
+ b .Lgcm_dec_loop_1x
+
+.Lgcm_dec_tail:
+ /* construct CTRs */
+ inc32_le128(v0)
+ SM4_CRYPT_BLK(v0)
+
+ /* load permute table */
+ adr_l x0, .Lcts_permute_table
+ add x0, x0, #32
+ sub x0, x0, w4, uxtw
+ ld1 {v3.16b}, [x0]
+
+.Lgcm_dec_tail_loop:
+ /* do decrypt */
+ ldrb w0, [x2], #1 /* get 1 byte from input */
+ umov w6, v0.b[0] /* get top crypted byte */
+ eor w6, w6, w0 /* w6 = CTR ^ input */
+ strb w6, [x1], #1 /* store out byte */
+
+ /* shift right out one byte */
+ ext v0.16b, v0.16b, v0.16b, #1
+ /* the last ciphertext is placed in high bytes */
+ ins v0.b[15], w0
+
+ subs w4, w4, #1
+ bne .Lgcm_dec_tail_loop
+
+ /* padding last block with zeros */
+ tbl v0.16b, {v0.16b}, v3.16b
+
+ /* ghash update */
+ rbit v0.16b, v0.16b
+ eor RHASH.16b, RHASH.16b, v0.16b
+ PMUL_128x128(RR0, RR1, RHASH, RH1, RTMP0, RTMP1)
+ REDUCTION(RHASH, RR0, RR1, RRCONST, RTMP2, RTMP3)
+
+.Lgcm_dec_hash_len:
+ cbz x7, .Lgcm_dec_end
+
+ GTAG_HASH_LENGTHS(v1, v3)
+
+ b .Lgcm_dec_ret
+
+.Lgcm_dec_end:
+ /* store new CTR */
+ rev x8, x8
+ rev x9, x9
+ stp x8, x9, [x3]
+
+ rbit RHASH.16b, RHASH.16b
+
+.Lgcm_dec_ret:
+ /* store new MAC */
+ st1 {RHASH.2d}, [x5]
+
+ ret
+SYM_FUNC_END(sm4_ce_pmull_gcm_dec)
+
+ .section ".rodata", "a"
+ .align 4
+.Lcts_permute_table:
+ .byte 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff
+ .byte 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff
+ .byte 0x0, 0x1, 0x2, 0x3, 0x4, 0x5, 0x6, 0x7
+ .byte 0x8, 0x9, 0xa, 0xb, 0xc, 0xd, 0xe, 0xf
+ .byte 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff
+ .byte 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff
+
+.Lghash_rconst:
+ .quad 0x87
diff --git a/arch/arm64/crypto/sm4-ce-gcm-glue.c b/arch/arm64/crypto/sm4-ce-gcm-glue.c
new file mode 100644
index 000000000000..e90ea0f17beb
--- /dev/null
+++ b/arch/arm64/crypto/sm4-ce-gcm-glue.c
@@ -0,0 +1,286 @@
+/* SPDX-License-Identifier: GPL-2.0-or-later */
+/*
+ * SM4-GCM AEAD Algorithm using ARMv8 Crypto Extensions
+ * as specified in rfc8998
+ * https://datatracker.ietf.org/doc/html/rfc8998
+ *
+ * Copyright (C) 2022 Tianjia Zhang <[email protected]>
+ */
+
+#include <linux/module.h>
+#include <linux/crypto.h>
+#include <linux/kernel.h>
+#include <linux/cpufeature.h>
+#include <asm/neon.h>
+#include <crypto/b128ops.h>
+#include <crypto/scatterwalk.h>
+#include <crypto/internal/aead.h>
+#include <crypto/internal/skcipher.h>
+#include <crypto/sm4.h>
+#include "sm4-ce.h"
+
+asmlinkage void sm4_ce_pmull_ghash_setup(const u32 *rkey_enc, u8 *ghash_table);
+asmlinkage void pmull_ghash_update(const u8 *ghash_table, u8 *ghash,
+ const u8 *src, unsigned int nblocks);
+asmlinkage void sm4_ce_pmull_gcm_enc(const u32 *rkey_enc, u8 *dst,
+ const u8 *src, u8 *iv,
+ unsigned int nbytes, u8 *ghash,
+ const u8 *ghash_table, const u8 *lengths);
+asmlinkage void sm4_ce_pmull_gcm_dec(const u32 *rkey_enc, u8 *dst,
+ const u8 *src, u8 *iv,
+ unsigned int nbytes, u8 *ghash,
+ const u8 *ghash_table, const u8 *lengths);
+
+#define GHASH_BLOCK_SIZE 16
+#define GCM_IV_SIZE 12
+
+struct sm4_gcm_ctx {
+ struct sm4_ctx key;
+ u8 ghash_table[16 * 4];
+};
+
+
+static int gcm_setkey(struct crypto_aead *tfm, const u8 *key,
+ unsigned int key_len)
+{
+ struct sm4_gcm_ctx *ctx = crypto_aead_ctx(tfm);
+
+ if (key_len != SM4_KEY_SIZE)
+ return -EINVAL;
+
+ kernel_neon_begin();
+
+ sm4_ce_expand_key(key, ctx->key.rkey_enc, ctx->key.rkey_dec,
+ crypto_sm4_fk, crypto_sm4_ck);
+ sm4_ce_pmull_ghash_setup(ctx->key.rkey_enc, ctx->ghash_table);
+
+ kernel_neon_end();
+ return 0;
+}
+
+static int gcm_setauthsize(struct crypto_aead *tfm, unsigned int authsize)
+{
+ switch (authsize) {
+ case 4:
+ case 8:
+ case 12 ... 16:
+ return 0;
+ default:
+ return -EINVAL;
+ }
+}
+
+static void gcm_calculate_auth_mac(struct aead_request *req, u8 ghash[])
+{
+ struct crypto_aead *aead = crypto_aead_reqtfm(req);
+ struct sm4_gcm_ctx *ctx = crypto_aead_ctx(aead);
+ u8 __aligned(8) buffer[GHASH_BLOCK_SIZE];
+ u32 assoclen = req->assoclen;
+ struct scatter_walk walk;
+ unsigned int buflen = 0;
+
+ scatterwalk_start(&walk, req->src);
+
+ do {
+ u32 n = scatterwalk_clamp(&walk, assoclen);
+ u8 *p, *ptr;
+
+ if (!n) {
+ scatterwalk_start(&walk, sg_next(walk.sg));
+ n = scatterwalk_clamp(&walk, assoclen);
+ }
+
+ p = ptr = scatterwalk_map(&walk);
+ assoclen -= n;
+ scatterwalk_advance(&walk, n);
+
+ if (n + buflen < GHASH_BLOCK_SIZE) {
+ memcpy(&buffer[buflen], ptr, n);
+ buflen += n;
+ } else {
+ unsigned int nblocks;
+
+ if (buflen) {
+ unsigned int l = GHASH_BLOCK_SIZE - buflen;
+
+ memcpy(&buffer[buflen], ptr, l);
+ ptr += l;
+ n -= l;
+
+ pmull_ghash_update(ctx->ghash_table, ghash,
+ buffer, 1);
+ }
+
+ nblocks = n / GHASH_BLOCK_SIZE;
+ if (nblocks) {
+ pmull_ghash_update(ctx->ghash_table, ghash,
+ ptr, nblocks);
+ ptr += nblocks * GHASH_BLOCK_SIZE;
+ }
+
+ buflen = n % GHASH_BLOCK_SIZE;
+ if (buflen)
+ memcpy(&buffer[0], ptr, buflen);
+ }
+
+ scatterwalk_unmap(p);
+ scatterwalk_done(&walk, 0, assoclen);
+ } while (assoclen);
+
+ /* padding with '0' */
+ if (buflen) {
+ memset(&buffer[buflen], 0, GHASH_BLOCK_SIZE - buflen);
+ pmull_ghash_update(ctx->ghash_table, ghash, buffer, 1);
+ }
+}
+
+static int gcm_crypt(struct aead_request *req, struct skcipher_walk *walk,
+ struct sm4_gcm_ctx *ctx, u8 ghash[],
+ void (*sm4_ce_pmull_gcm_crypt)(const u32 *rkey_enc,
+ u8 *dst, const u8 *src, u8 *iv,
+ unsigned int nbytes, u8 *ghash,
+ const u8 *ghash_table, const u8 *lengths))
+{
+ u8 __aligned(8) iv[SM4_BLOCK_SIZE];
+ be128 __aligned(8) lengths;
+ int err;
+
+ memset(ghash, 0, SM4_BLOCK_SIZE);
+
+ lengths.a = cpu_to_be64(req->assoclen * 8);
+ lengths.b = cpu_to_be64(walk->total * 8);
+
+ memcpy(iv, walk->iv, GCM_IV_SIZE);
+ put_unaligned_be32(2, iv + GCM_IV_SIZE);
+
+ kernel_neon_begin();
+
+ if (req->assoclen)
+ gcm_calculate_auth_mac(req, ghash);
+
+ do {
+ unsigned int tail = walk->nbytes % SM4_BLOCK_SIZE;
+ const u8 *src = walk->src.virt.addr;
+ u8 *dst = walk->dst.virt.addr;
+
+ if (walk->nbytes == walk->total) {
+ tail = 0;
+
+ sm4_ce_pmull_gcm_crypt(ctx->key.rkey_enc, dst, src, iv,
+ walk->nbytes, ghash,
+ ctx->ghash_table,
+ (const u8 *)&lengths);
+ } else if (walk->nbytes - tail) {
+ sm4_ce_pmull_gcm_crypt(ctx->key.rkey_enc, dst, src, iv,
+ walk->nbytes - tail, ghash,
+ ctx->ghash_table, NULL);
+ }
+
+ kernel_neon_end();
+
+ err = skcipher_walk_done(walk, tail);
+ if (err)
+ return err;
+ if (walk->nbytes)
+ kernel_neon_begin();
+ } while (walk->nbytes > 0);
+
+ return 0;
+}
+
+static int gcm_encrypt(struct aead_request *req)
+{
+ struct crypto_aead *aead = crypto_aead_reqtfm(req);
+ struct sm4_gcm_ctx *ctx = crypto_aead_ctx(aead);
+ u8 __aligned(8) ghash[SM4_BLOCK_SIZE];
+ struct skcipher_walk walk;
+ int err;
+
+ err = skcipher_walk_aead_encrypt(&walk, req, false);
+ if (err)
+ return err;
+
+ err = gcm_crypt(req, &walk, ctx, ghash, sm4_ce_pmull_gcm_enc);
+ if (err)
+ return err;
+
+ /* copy authtag to end of dst */
+ scatterwalk_map_and_copy(ghash, req->dst, req->assoclen + req->cryptlen,
+ crypto_aead_authsize(aead), 1);
+
+ return 0;
+}
+
+static int gcm_decrypt(struct aead_request *req)
+{
+ struct crypto_aead *aead = crypto_aead_reqtfm(req);
+ unsigned int authsize = crypto_aead_authsize(aead);
+ struct sm4_gcm_ctx *ctx = crypto_aead_ctx(aead);
+ u8 __aligned(8) ghash[SM4_BLOCK_SIZE];
+ u8 authtag[SM4_BLOCK_SIZE];
+ struct skcipher_walk walk;
+ int err;
+
+ err = skcipher_walk_aead_decrypt(&walk, req, false);
+ if (err)
+ return err;
+
+ err = gcm_crypt(req, &walk, ctx, ghash, sm4_ce_pmull_gcm_dec);
+ if (err)
+ return err;
+
+ /* compare calculated auth tag with the stored one */
+ scatterwalk_map_and_copy(authtag, req->src,
+ req->assoclen + req->cryptlen - authsize,
+ authsize, 0);
+
+ if (crypto_memneq(authtag, ghash, authsize))
+ return -EBADMSG;
+
+ return 0;
+}
+
+static struct aead_alg sm4_gcm_alg = {
+ .base = {
+ .cra_name = "gcm(sm4)",
+ .cra_driver_name = "gcm-sm4-ce",
+ .cra_priority = 400,
+ .cra_blocksize = 1,
+ .cra_ctxsize = sizeof(struct sm4_gcm_ctx),
+ .cra_module = THIS_MODULE,
+ },
+ .ivsize = GCM_IV_SIZE,
+ .chunksize = SM4_BLOCK_SIZE,
+ .maxauthsize = SM4_BLOCK_SIZE,
+ .setkey = gcm_setkey,
+ .setauthsize = gcm_setauthsize,
+ .encrypt = gcm_encrypt,
+ .decrypt = gcm_decrypt,
+};
+
+static int __init sm4_ce_gcm_init(void)
+{
+ if (!cpu_have_named_feature(PMULL))
+ return -ENODEV;
+
+ return crypto_register_aead(&sm4_gcm_alg);
+}
+
+static void __exit sm4_ce_gcm_exit(void)
+{
+ crypto_unregister_aead(&sm4_gcm_alg);
+}
+
+static const struct cpu_feature sm4_ce_gcm_cpu_feature[] = {
+ { cpu_feature(PMULL) },
+ {}
+};
+MODULE_DEVICE_TABLE(cpu, sm4_ce_gcm_cpu_feature);
+
+module_cpu_feature_match(SM4, sm4_ce_gcm_init);
+module_exit(sm4_ce_gcm_exit);
+
+MODULE_DESCRIPTION("Synchronous SM4 in GCM mode using ARMv8 Crypto Extensions");
+MODULE_ALIAS_CRYPTO("gcm(sm4)");
+MODULE_AUTHOR("Tianjia Zhang <[email protected]>");
+MODULE_LICENSE("GPL v2");
--
2.24.3 (Apple Git-128)

2022-09-26 09:48:41

by Tianjia Zhang

[permalink] [raw]
Subject: [PATCH 16/16] crypto: arm64/sm4 - add ARMv9 SVE cryptography acceleration implementation

Scalable Vector Extension (SVE) is the next-generation SIMD extension for
arm64. SVE allows flexible vector length implementations with a range of
possible values in CPU implementations. The vector length can vary from a
minimum of 128 bits up to a maximum of 2048 bits, at 128-bit increments.
The SVE design guarantees that the same application can run on different
implementations that support SVE, without the need to recompile the code.

SVE was originally introduced in ARMv8, and ARMv9 introduced SVE2 to
extend and improve it. Similar to the Crypto Extension instructions that
the NEON instruction set provides for this algorithm, SVE also offers
comparable instructions, called cryptography acceleration instructions,
but these are likewise an optional instruction set.

This patch uses the SM4 cryptography acceleration instructions and SVE2
instructions to optimize the SM4 algorithm for the ECB/CBC/CFB/CTR modes.
Since CBC/CFB encryption cannot be parallelized, the Crypto Extension
instructions are used for those operations, as in the sketch below.
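
The serial dependency is easy to see in a C model of CBC encryption, where each
ciphertext block feeds the next (CFB encryption has the same structure). This
is only an illustration, with the single-block cipher passed as a callback:

#include <stdint.h>
#include <stddef.h>

#define SM4_BLOCK_SIZE	16

static void cbc_encrypt_model(uint8_t *dst, const uint8_t *src, size_t nblocks,
			      uint8_t iv[SM4_BLOCK_SIZE],
			      void (*sm4_enc_block)(uint8_t blk[SM4_BLOCK_SIZE]))
{
	uint8_t *prev = iv;
	size_t i, j;

	for (i = 0; i < nblocks; i++) {
		for (j = 0; j < SM4_BLOCK_SIZE; j++)
			dst[j] = src[j] ^ prev[j];	/* P[i] ^ C[i-1] (or IV) */
		sm4_enc_block(dst);			/* C[i] = SM4_enc(...) */
		prev = dst;				/* next block needs C[i] */
		src += SM4_BLOCK_SIZE;
		dst += SM4_BLOCK_SIZE;
	}
}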

Since no test environment with a Vector Length (VL) greater than 128 bits
was available, the performance data was obtained on a machine whose VL is
128 bits. Because this driver is enabled when the VL is greater than 128
bits, this performance data is for reference only. It can be seen from the
data that there is little difference between the Crypto Extension optimized
code and SVE at VL = 128 bits, and the optimization effect should be more
obvious at VL = 256 bits or longer.

Benchmark on T-Head Yitian-710 2.75 GHz. The data comes from the 218 mode
of tcrypt and is compared with the Crypto Extension optimized implementation.
The abscissas are blocks of different lengths. The data is tabulated and the
unit is Mb/s:

sm4-ce | 16 64 128 256 1024 1420 4096
------------+--------------------------------------------------------------
ECB enc | 315.18 1162.65 1815.66 2553.50 3692.91 3727.20 4001.93
ECB dec | 316.06 1172.97 1817.81 2554.66 3692.18 3786.54 4001.93
CBC enc | 304.82 629.54 768.65 864.72 953.90 963.32 974.06
CBC dec | 306.05 1142.53 1805.11 2481.67 3522.06 3587.87 3790.99
CFB enc | 309.48 635.70 774.44 865.85 950.62 952.68 968.24
CFB dec | 315.98 1170.38 1828.75 2509.72 3543.63 3539.40 3793.25
CTR enc | 285.83 1036.59 1583.50 2147.26 2933.54 2954.66 3041.14
CTR dec | 285.29 1037.47 1584.67 2145.51 2934.10 2950.89 3041.62

sm4-sve-ce (VL = 128 bits)
ECB enc | 310.00 1154.70 1813.26 2579.74 3766.90 3869.45 4100.26
ECB dec | 315.60 1176.22 1838.06 2593.69 3774.95 3878.42 4098.83
CBC enc | 303.44 622.65 764.67 861.40 953.18 963.05 973.77
CBC dec | 302.13 1091.15 1689.10 2267.79 3182.84 3242.68 3408.92
CFB enc | 296.62 620.41 762.94 858.96 948.18 956.04 967.67
CFB dec | 291.23 1065.50 1637.33 2228.12 3158.52 3213.35 3403.83
CTR enc | 272.27 959.35 1466.34 1934.24 2562.80 2595.87 2695.15
CTR dec | 273.40 963.65 1471.83 1938.97 2563.12 2597.25 2694.54

Signed-off-by: Tianjia Zhang <[email protected]>
---
arch/arm64/crypto/Kconfig | 19 +
arch/arm64/crypto/Makefile | 3 +
arch/arm64/crypto/sm4-sve-ce-core.S | 1028 +++++++++++++++++++++++++++
arch/arm64/crypto/sm4-sve-ce-glue.c | 332 +++++++++
4 files changed, 1382 insertions(+)
create mode 100644 arch/arm64/crypto/sm4-sve-ce-core.S
create mode 100644 arch/arm64/crypto/sm4-sve-ce-glue.c
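
A note on the block scheduling in the SVE routines below: with a vector length
of VL bytes, one Z register holds VL/16 SM4 blocks, so the main loop of
sm4_sve_ce_crypt() consumes 8 vectors' worth of blocks per iteration, then falls
back to a 4-vector pass, single-vector passes, and finally single-block Crypto
Extension processing for the tail. A rough C model of that dispatch
(illustrative only, with the actual processing behind callbacks):

#include <stddef.h>

#define SM4_BLOCK_SIZE	16

static void sve_dispatch_model(size_t nblocks, size_t vl_bytes,
			       void (*sve_vectors)(size_t nvecs),
			       void (*ce_single_block)(void))
{
	size_t per_vec = vl_bytes / SM4_BLOCK_SIZE;	/* blocks per Z register */

	while (nblocks >= 8 * per_vec) {		/* .Lcrypt_loop_8x */
		sve_vectors(8);
		nblocks -= 8 * per_vec;
	}
	if (nblocks >= 4 * per_vec) {			/* .Lcrypt_4x */
		sve_vectors(4);
		nblocks -= 4 * per_vec;
	}
	while (nblocks >= per_vec) {			/* .Lcrypt_loop_1x */
		sve_vectors(1);
		nblocks -= per_vec;
	}
	while (nblocks--)				/* .Lcrypt_ce_loop_1x */
		ce_single_block();
}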

diff --git a/arch/arm64/crypto/Kconfig b/arch/arm64/crypto/Kconfig
index 6793d5bc3ee5..bbb5a7a08af5 100644
--- a/arch/arm64/crypto/Kconfig
+++ b/arch/arm64/crypto/Kconfig
@@ -249,6 +249,25 @@ config CRYPTO_SM4_ARM64_CE_BLK
- ARMv8 Crypto Extensions
- NEON (Advanced SIMD) extensions

+config CRYPTO_SM4_ARM64_SVE_CE_BLK
+ tristate "Ciphers: SM4, modes: ECB/CBC/CFB/CTR (ARMv9 cryptography acceleration with SVE2)"
+ depends on KERNEL_MODE_NEON
+ select CRYPTO_SKCIPHER
+ select CRYPTO_SM4
+ select CRYPTO_SM4_ARM64_CE_BLK
+ help
+ Length-preserving ciphers: SM4 cipher algorithms (OSCCA GB/T 32907-2016)
+ with block cipher modes:
+ - ECB (Electronic Codebook) mode (NIST SP800-38A)
+ - CBC (Cipher Block Chaining) mode (NIST SP800-38A)
+ - CFB (Cipher Feedback) mode (NIST SP800-38A)
+ - CTR (Counter) mode (NIST SP800-38A)
+
+ Architecture: arm64 using:
+ - ARMv8 Crypto Extensions
+ - ARMv9 cryptography acceleration with SVE2
+ - NEON (Advanced SIMD) extensions
+
config CRYPTO_SM4_ARM64_NEON_BLK
tristate "Ciphers: SM4, modes: ECB/CBC/CFB/CTR (NEON)"
depends on KERNEL_MODE_NEON
diff --git a/arch/arm64/crypto/Makefile b/arch/arm64/crypto/Makefile
index 4818e204c2ac..355dd9053434 100644
--- a/arch/arm64/crypto/Makefile
+++ b/arch/arm64/crypto/Makefile
@@ -38,6 +38,9 @@ sm4-ce-gcm-y := sm4-ce-gcm-glue.o sm4-ce-gcm-core.o
obj-$(CONFIG_CRYPTO_SM4_ARM64_NEON_BLK) += sm4-neon.o
sm4-neon-y := sm4-neon-glue.o sm4-neon-core.o

+obj-$(CONFIG_CRYPTO_SM4_ARM64_SVE_CE_BLK) += sm4-sve-ce.o
+sm4-sve-ce-y := sm4-sve-ce-glue.o sm4-sve-ce-core.o
+
obj-$(CONFIG_CRYPTO_GHASH_ARM64_CE) += ghash-ce.o
ghash-ce-y := ghash-ce-glue.o ghash-ce-core.o

diff --git a/arch/arm64/crypto/sm4-sve-ce-core.S b/arch/arm64/crypto/sm4-sve-ce-core.S
new file mode 100644
index 000000000000..caecbdf2536c
--- /dev/null
+++ b/arch/arm64/crypto/sm4-sve-ce-core.S
@@ -0,0 +1,1028 @@
+/* SPDX-License-Identifier: GPL-2.0-or-later */
+/*
+ * SM4 Cipher Algorithm for ARMv9 Crypto Extensions with SVE2
+ * as specified in
+ * https://tools.ietf.org/id/draft-ribose-cfrg-sm4-10.html
+ *
+ * Copyright (C) 2022, Alibaba Group.
+ * Copyright (C) 2022 Tianjia Zhang <[email protected]>
+ */
+
+#include <linux/linkage.h>
+#include <asm/assembler.h>
+
+.arch armv8-a+crypto+sve+sve2
+
+.irp b, 0, 15, 24, 25, 26, 27, 28, 29, 30, 31
+ .set .Lv\b\().4s, \b
+.endr
+
+.irp b, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, \
+ 16, 24, 25, 26, 27, 28, 29, 30, 31
+ .set .Lz\b\().s, \b
+.endr
+
+.macro sm4e, vd, vn
+ .inst 0xcec08400 | (.L\vn << 5) | .L\vd
+.endm
+
+.macro sm4e_sve, zd, zn
+ .inst 0x4523e000 | (.L\zn << 5) | .L\zd
+.endm
+
+
+/* Register macros */
+
+#define RCTR z16
+#define RCTRv v16
+#define RIV z16
+#define RIVv v16
+#define RSWAP128 z17
+#define RZERO z18
+#define RLE128_INC z19
+
+#define RTMP0 z20
+#define RTMP0v v20
+#define RTMP1 z21
+#define RTMP2 z22
+#define RTMP3 z23
+
+
+/* Helper macros. */
+
+#define SM4_PREPARE(ptr) \
+ adr_l x7, .Lbswap128_mask; \
+ ptrue p0.b, ALL; \
+ rdvl x5, #1; \
+ ld1b {RSWAP128.b}, p0/z, [x7]; \
+ \
+ ld1 {v24.16b-v27.16b}, [ptr], #64; \
+ ld1 {v28.16b-v31.16b}, [ptr]; \
+ dup z24.q, z24.q[0]; \
+ dup z25.q, z25.q[0]; \
+ dup z26.q, z26.q[0]; \
+ dup z27.q, z27.q[0]; \
+ dup z28.q, z28.q[0]; \
+ dup z29.q, z29.q[0]; \
+ dup z30.q, z30.q[0]; \
+ dup z31.q, z31.q[0];
+
+#define SM4_SVE_CE_CRYPT_BLK(b0) \
+ revb b0.s, p0/m, b0.s; \
+ sm4e_sve b0.s, z24.s; \
+ sm4e_sve b0.s, z25.s; \
+ sm4e_sve b0.s, z26.s; \
+ sm4e_sve b0.s, z27.s; \
+ sm4e_sve b0.s, z28.s; \
+ sm4e_sve b0.s, z29.s; \
+ sm4e_sve b0.s, z30.s; \
+ sm4e_sve b0.s, z31.s; \
+ tbl b0.b, {b0.b}, RSWAP128.b; \
+ revb b0.s, p0/m, b0.s;
+
+#define SM4_SVE_CE_CRYPT_BLK4(b0, b1, b2, b3) \
+ revb b0.s, p0/m, b0.s; \
+ revb b1.s, p0/m, b1.s; \
+ revb b2.s, p0/m, b2.s; \
+ revb b3.s, p0/m, b3.s; \
+ sm4e_sve b0.s, z24.s; \
+ sm4e_sve b1.s, z24.s; \
+ sm4e_sve b2.s, z24.s; \
+ sm4e_sve b3.s, z24.s; \
+ sm4e_sve b0.s, z25.s; \
+ sm4e_sve b1.s, z25.s; \
+ sm4e_sve b2.s, z25.s; \
+ sm4e_sve b3.s, z25.s; \
+ sm4e_sve b0.s, z26.s; \
+ sm4e_sve b1.s, z26.s; \
+ sm4e_sve b2.s, z26.s; \
+ sm4e_sve b3.s, z26.s; \
+ sm4e_sve b0.s, z27.s; \
+ sm4e_sve b1.s, z27.s; \
+ sm4e_sve b2.s, z27.s; \
+ sm4e_sve b3.s, z27.s; \
+ sm4e_sve b0.s, z28.s; \
+ sm4e_sve b1.s, z28.s; \
+ sm4e_sve b2.s, z28.s; \
+ sm4e_sve b3.s, z28.s; \
+ sm4e_sve b0.s, z29.s; \
+ sm4e_sve b1.s, z29.s; \
+ sm4e_sve b2.s, z29.s; \
+ sm4e_sve b3.s, z29.s; \
+ sm4e_sve b0.s, z30.s; \
+ sm4e_sve b1.s, z30.s; \
+ sm4e_sve b2.s, z30.s; \
+ sm4e_sve b3.s, z30.s; \
+ sm4e_sve b0.s, z31.s; \
+ sm4e_sve b1.s, z31.s; \
+ sm4e_sve b2.s, z31.s; \
+ sm4e_sve b3.s, z31.s; \
+ tbl b0.b, {b0.b}, RSWAP128.b; \
+ tbl b1.b, {b1.b}, RSWAP128.b; \
+ tbl b2.b, {b2.b}, RSWAP128.b; \
+ tbl b3.b, {b3.b}, RSWAP128.b; \
+ revb b0.s, p0/m, b0.s; \
+ revb b1.s, p0/m, b1.s; \
+ revb b2.s, p0/m, b2.s; \
+ revb b3.s, p0/m, b3.s;
+
+#define SM4_SVE_CE_CRYPT_BLK8(b0, b1, b2, b3, b4, b5, b6, b7) \
+ revb b0.s, p0/m, b0.s; \
+ revb b1.s, p0/m, b1.s; \
+ revb b2.s, p0/m, b2.s; \
+ revb b3.s, p0/m, b3.s; \
+ revb b4.s, p0/m, b4.s; \
+ revb b5.s, p0/m, b5.s; \
+ revb b6.s, p0/m, b6.s; \
+ revb b7.s, p0/m, b7.s; \
+ sm4e_sve b0.s, z24.s; \
+ sm4e_sve b1.s, z24.s; \
+ sm4e_sve b2.s, z24.s; \
+ sm4e_sve b3.s, z24.s; \
+ sm4e_sve b4.s, z24.s; \
+ sm4e_sve b5.s, z24.s; \
+ sm4e_sve b6.s, z24.s; \
+ sm4e_sve b7.s, z24.s; \
+ sm4e_sve b0.s, z25.s; \
+ sm4e_sve b1.s, z25.s; \
+ sm4e_sve b2.s, z25.s; \
+ sm4e_sve b3.s, z25.s; \
+ sm4e_sve b4.s, z25.s; \
+ sm4e_sve b5.s, z25.s; \
+ sm4e_sve b6.s, z25.s; \
+ sm4e_sve b7.s, z25.s; \
+ sm4e_sve b0.s, z26.s; \
+ sm4e_sve b1.s, z26.s; \
+ sm4e_sve b2.s, z26.s; \
+ sm4e_sve b3.s, z26.s; \
+ sm4e_sve b4.s, z26.s; \
+ sm4e_sve b5.s, z26.s; \
+ sm4e_sve b6.s, z26.s; \
+ sm4e_sve b7.s, z26.s; \
+ sm4e_sve b0.s, z27.s; \
+ sm4e_sve b1.s, z27.s; \
+ sm4e_sve b2.s, z27.s; \
+ sm4e_sve b3.s, z27.s; \
+ sm4e_sve b4.s, z27.s; \
+ sm4e_sve b5.s, z27.s; \
+ sm4e_sve b6.s, z27.s; \
+ sm4e_sve b7.s, z27.s; \
+ sm4e_sve b0.s, z28.s; \
+ sm4e_sve b1.s, z28.s; \
+ sm4e_sve b2.s, z28.s; \
+ sm4e_sve b3.s, z28.s; \
+ sm4e_sve b4.s, z28.s; \
+ sm4e_sve b5.s, z28.s; \
+ sm4e_sve b6.s, z28.s; \
+ sm4e_sve b7.s, z28.s; \
+ sm4e_sve b0.s, z29.s; \
+ sm4e_sve b1.s, z29.s; \
+ sm4e_sve b2.s, z29.s; \
+ sm4e_sve b3.s, z29.s; \
+ sm4e_sve b4.s, z29.s; \
+ sm4e_sve b5.s, z29.s; \
+ sm4e_sve b6.s, z29.s; \
+ sm4e_sve b7.s, z29.s; \
+ sm4e_sve b0.s, z30.s; \
+ sm4e_sve b1.s, z30.s; \
+ sm4e_sve b2.s, z30.s; \
+ sm4e_sve b3.s, z30.s; \
+ sm4e_sve b4.s, z30.s; \
+ sm4e_sve b5.s, z30.s; \
+ sm4e_sve b6.s, z30.s; \
+ sm4e_sve b7.s, z30.s; \
+ sm4e_sve b0.s, z31.s; \
+ sm4e_sve b1.s, z31.s; \
+ sm4e_sve b2.s, z31.s; \
+ sm4e_sve b3.s, z31.s; \
+ sm4e_sve b4.s, z31.s; \
+ sm4e_sve b5.s, z31.s; \
+ sm4e_sve b6.s, z31.s; \
+ sm4e_sve b7.s, z31.s; \
+ tbl b0.b, {b0.b}, RSWAP128.b; \
+ tbl b1.b, {b1.b}, RSWAP128.b; \
+ tbl b2.b, {b2.b}, RSWAP128.b; \
+ tbl b3.b, {b3.b}, RSWAP128.b; \
+ tbl b4.b, {b4.b}, RSWAP128.b; \
+ tbl b5.b, {b5.b}, RSWAP128.b; \
+ tbl b6.b, {b6.b}, RSWAP128.b; \
+ tbl b7.b, {b7.b}, RSWAP128.b; \
+ revb b0.s, p0/m, b0.s; \
+ revb b1.s, p0/m, b1.s; \
+ revb b2.s, p0/m, b2.s; \
+ revb b3.s, p0/m, b3.s; \
+ revb b4.s, p0/m, b4.s; \
+ revb b5.s, p0/m, b5.s; \
+ revb b6.s, p0/m, b6.s; \
+ revb b7.s, p0/m, b7.s;
+
+#define SM4_CE_CRYPT_BLK(b0) \
+ rev32 b0.16b, b0.16b; \
+ sm4e b0.4s, v24.4s; \
+ sm4e b0.4s, v25.4s; \
+ sm4e b0.4s, v26.4s; \
+ sm4e b0.4s, v27.4s; \
+ sm4e b0.4s, v28.4s; \
+ sm4e b0.4s, v29.4s; \
+ sm4e b0.4s, v30.4s; \
+ sm4e b0.4s, v31.4s; \
+ rev64 b0.4s, b0.4s; \
+ ext b0.16b, b0.16b, b0.16b, #8; \
+ rev32 b0.16b, b0.16b;
+
+#define inc_le128(zctr) \
+ mov RCTRv.d[1], x8; \
+ mov RCTRv.d[0], x7; \
+ mov zctr.d, RLE128_INC.d; \
+ dup RCTR.q, RCTR.q[0]; \
+ adds x8, x8, x5, LSR #4; \
+ adclt zctr.d, RCTR.d, RZERO.d; \
+ adclt RCTR.d, zctr.d, RZERO.d; \
+ adc x7, x7, xzr; \
+ trn1 zctr.d, RCTR.d, zctr.d; \
+ revb zctr.d, p0/m, zctr.d;
+
+#define inc_le128_4x(zctr0, zctr1, zctr2, zctr3) \
+ mov v8.d[1], x8; \
+ mov v8.d[0], x7; \
+ adds x8, x8, x5, LSR #4; \
+ mov zctr0.d, RLE128_INC.d; \
+ adc x7, x7, xzr; \
+ mov v9.d[1], x8; \
+ mov v9.d[0], x7; \
+ adds x8, x8, x5, LSR #4; \
+ mov zctr1.d, RLE128_INC.d; \
+ adc x7, x7, xzr; \
+ mov v10.d[1], x8; \
+ mov v10.d[0], x7; \
+ adds x8, x8, x5, LSR #4; \
+ mov zctr2.d, RLE128_INC.d; \
+ adc x7, x7, xzr; \
+ mov v11.d[1], x8; \
+ mov v11.d[0], x7; \
+ adds x8, x8, x5, LSR #4; \
+ mov zctr3.d, RLE128_INC.d; \
+ adc x7, x7, xzr; \
+ dup z8.q, z8.q[0]; \
+ dup z9.q, z9.q[0]; \
+ dup z10.q, z10.q[0]; \
+ dup z11.q, z11.q[0]; \
+ adclt zctr0.d, z8.d, RZERO.d; \
+ adclt zctr1.d, z9.d, RZERO.d; \
+ adclt zctr2.d, z10.d, RZERO.d; \
+ adclt zctr3.d, z11.d, RZERO.d; \
+ adclt z8.d, zctr0.d, RZERO.d; \
+ adclt z9.d, zctr1.d, RZERO.d; \
+ adclt z10.d, zctr2.d, RZERO.d; \
+ adclt z11.d, zctr3.d, RZERO.d; \
+ trn1 zctr0.d, z8.d, zctr0.d; \
+ trn1 zctr1.d, z9.d, zctr1.d; \
+ trn1 zctr2.d, z10.d, zctr2.d; \
+ trn1 zctr3.d, z11.d, zctr3.d; \
+ revb zctr0.d, p0/m, zctr0.d; \
+ revb zctr1.d, p0/m, zctr1.d; \
+ revb zctr2.d, p0/m, zctr2.d; \
+ revb zctr3.d, p0/m, zctr3.d;
+
+#define inc_le128_8x(zctr0, zctr1, zctr2, zctr3, \
+ zctr4, zctr5, zctr6, zctr7) \
+ mov v8.d[1], x8; \
+ mov v8.d[0], x7; \
+ adds x8, x8, x5, LSR #4; \
+ mov zctr0.d, RLE128_INC.d; \
+ adc x7, x7, xzr; \
+ mov v9.d[1], x8; \
+ mov v9.d[0], x7; \
+ adds x8, x8, x5, LSR #4; \
+ mov zctr1.d, RLE128_INC.d; \
+ adc x7, x7, xzr; \
+ mov v10.d[1], x8; \
+ mov v10.d[0], x7; \
+ adds x8, x8, x5, LSR #4; \
+ mov zctr2.d, RLE128_INC.d; \
+ adc x7, x7, xzr; \
+ mov v11.d[1], x8; \
+ mov v11.d[0], x7; \
+ adds x8, x8, x5, LSR #4; \
+ mov zctr3.d, RLE128_INC.d; \
+ adc x7, x7, xzr; \
+ mov v12.d[1], x8; \
+ mov v12.d[0], x7; \
+ adds x8, x8, x5, LSR #4; \
+ mov zctr4.d, RLE128_INC.d; \
+ adc x7, x7, xzr; \
+ mov v13.d[1], x8; \
+ mov v13.d[0], x7; \
+ adds x8, x8, x5, LSR #4; \
+ mov zctr5.d, RLE128_INC.d; \
+ adc x7, x7, xzr; \
+ mov v14.d[1], x8; \
+ mov v14.d[0], x7; \
+ adds x8, x8, x5, LSR #4; \
+ mov zctr6.d, RLE128_INC.d; \
+ adc x7, x7, xzr; \
+ mov v15.d[1], x8; \
+ mov v15.d[0], x7; \
+ adds x8, x8, x5, LSR #4; \
+ mov zctr7.d, RLE128_INC.d; \
+ adc x7, x7, xzr; \
+ dup z8.q, z8.q[0]; \
+ dup z9.q, z9.q[0]; \
+ dup z10.q, z10.q[0]; \
+ dup z11.q, z11.q[0]; \
+ dup z12.q, z12.q[0]; \
+ dup z13.q, z13.q[0]; \
+ dup z14.q, z14.q[0]; \
+ dup z15.q, z15.q[0]; \
+ adclt zctr0.d, z8.d, RZERO.d; \
+ adclt zctr1.d, z9.d, RZERO.d; \
+ adclt zctr2.d, z10.d, RZERO.d; \
+ adclt zctr3.d, z11.d, RZERO.d; \
+ adclt zctr4.d, z12.d, RZERO.d; \
+ adclt zctr5.d, z13.d, RZERO.d; \
+ adclt zctr6.d, z14.d, RZERO.d; \
+ adclt zctr7.d, z15.d, RZERO.d; \
+ adclt z8.d, zctr0.d, RZERO.d; \
+ adclt z9.d, zctr1.d, RZERO.d; \
+ adclt z10.d, zctr2.d, RZERO.d; \
+ adclt z11.d, zctr3.d, RZERO.d; \
+ adclt z12.d, zctr4.d, RZERO.d; \
+ adclt z13.d, zctr5.d, RZERO.d; \
+ adclt z14.d, zctr6.d, RZERO.d; \
+ adclt z15.d, zctr7.d, RZERO.d; \
+ trn1 zctr0.d, z8.d, zctr0.d; \
+ trn1 zctr1.d, z9.d, zctr1.d; \
+ trn1 zctr2.d, z10.d, zctr2.d; \
+ trn1 zctr3.d, z11.d, zctr3.d; \
+ trn1 zctr4.d, z12.d, zctr4.d; \
+ trn1 zctr5.d, z13.d, zctr5.d; \
+ trn1 zctr6.d, z14.d, zctr6.d; \
+ trn1 zctr7.d, z15.d, zctr7.d; \
+ revb zctr0.d, p0/m, zctr0.d; \
+ revb zctr1.d, p0/m, zctr1.d; \
+ revb zctr2.d, p0/m, zctr2.d; \
+ revb zctr3.d, p0/m, zctr3.d; \
+ revb zctr4.d, p0/m, zctr4.d; \
+ revb zctr5.d, p0/m, zctr5.d; \
+ revb zctr6.d, p0/m, zctr6.d; \
+ revb zctr7.d, p0/m, zctr7.d;
+
+
+.align 3
+SYM_FUNC_START(sm4_sve_ce_crypt)
+ /* input:
+ * x0: round key array, CTX
+ * x1: dst
+ * x2: src
+ * w3: nblocks
+ */
+ uxtw x3, w3
+ SM4_PREPARE(x0)
+
+.Lcrypt_loop_8x:
+ sub x3, x3, x5, LSR #1 /* x3 - (8 * VL) */
+ tbnz x3, #63, .Lcrypt_4x
+
+ ld1b {z0.b}, p0/z, [x2]
+ ld1b {z1.b}, p0/z, [x2, #1, MUL VL]
+ ld1b {z2.b}, p0/z, [x2, #2, MUL VL]
+ ld1b {z3.b}, p0/z, [x2, #3, MUL VL]
+ ld1b {z4.b}, p0/z, [x2, #4, MUL VL]
+ ld1b {z5.b}, p0/z, [x2, #5, MUL VL]
+ ld1b {z6.b}, p0/z, [x2, #6, MUL VL]
+ ld1b {z7.b}, p0/z, [x2, #7, MUL VL]
+
+ SM4_SVE_CE_CRYPT_BLK8(z0, z1, z2, z3, z4, z5, z6, z7)
+
+ st1b {z0.b}, p0, [x1]
+ st1b {z1.b}, p0, [x1, #1, MUL VL]
+ st1b {z2.b}, p0, [x1, #2, MUL VL]
+ st1b {z3.b}, p0, [x1, #3, MUL VL]
+ st1b {z4.b}, p0, [x1, #4, MUL VL]
+ st1b {z5.b}, p0, [x1, #5, MUL VL]
+ st1b {z6.b}, p0, [x1, #6, MUL VL]
+ st1b {z7.b}, p0, [x1, #7, MUL VL]
+
+ addvl x2, x2, #8
+ addvl x1, x1, #8
+
+ cbz x3, .Lcrypt_end
+ b .Lcrypt_loop_8x
+
+.Lcrypt_4x:
+ add x3, x3, x5, LSR #1
+ cmp x3, x5, LSR #2
+ blt .Lcrypt_loop_1x
+
+ sub x3, x3, x5, LSR #2 /* x3 - (4 * VL) */
+
+ ld1b {z0.b}, p0/z, [x2]
+ ld1b {z1.b}, p0/z, [x2, #1, MUL VL]
+ ld1b {z2.b}, p0/z, [x2, #2, MUL VL]
+ ld1b {z3.b}, p0/z, [x2, #3, MUL VL]
+
+ SM4_SVE_CE_CRYPT_BLK4(z0, z1, z2, z3)
+
+ st1b {z0.b}, p0, [x1]
+ st1b {z1.b}, p0, [x1, #1, MUL VL]
+ st1b {z2.b}, p0, [x1, #2, MUL VL]
+ st1b {z3.b}, p0, [x1, #3, MUL VL]
+
+ addvl x2, x2, #4
+ addvl x1, x1, #4
+
+ cbz x3, .Lcrypt_end
+
+.Lcrypt_loop_1x:
+ cmp x3, x5, LSR #4
+ blt .Lcrypt_ce_loop_1x
+
+ sub x3, x3, x5, LSR #4 /* x3 - VL */
+
+ ld1b {z0.b}, p0/z, [x2]
+
+ SM4_SVE_CE_CRYPT_BLK(z0)
+
+ st1b {z0.b}, p0, [x1]
+
+ addvl x2, x2, #1
+ addvl x1, x1, #1
+
+ cbz x3, .Lcrypt_end
+ b .Lcrypt_loop_1x
+
+.Lcrypt_ce_loop_1x:
+ sub x3, x3, #1
+
+ ld1 {v0.16b}, [x2], #16
+ SM4_CE_CRYPT_BLK(v0)
+ st1 {v0.16b}, [x1], #16
+
+ cbnz x3, .Lcrypt_ce_loop_1x
+
+.Lcrypt_end:
+ ret
+SYM_FUNC_END(sm4_sve_ce_crypt)
+
+.align 3
+SYM_FUNC_START(sm4_sve_ce_cbc_dec)
+ /* input:
+ * x0: round key array, CTX
+ * x1: dst
+ * x2: src
+ * x3: iv (big endian, 128 bit)
+ * w4: nblocks
+ */
+ uxtw x4, w4
+ SM4_PREPARE(x0)
+
+ ld1 {RIVv.16b}, [x3]
+ ext RIV.b, RIV.b, RIV.b, #16
+
+.Lcbc_dec_loop_8x:
+ sub x4, x4, x5, LSR #1 /* x4 - (8 * VL) */
+ tbnz x4, #63, .Lcbc_dec_4x
+
+ ld1b {z15.b}, p0/z, [x2]
+ ld1b {z14.b}, p0/z, [x2, #1, MUL VL]
+ ld1b {z13.b}, p0/z, [x2, #2, MUL VL]
+ ld1b {z12.b}, p0/z, [x2, #3, MUL VL]
+ ld1b {z11.b}, p0/z, [x2, #4, MUL VL]
+ ld1b {z10.b}, p0/z, [x2, #5, MUL VL]
+ ld1b {z9.b}, p0/z, [x2, #6, MUL VL]
+ ld1b {z8.b}, p0/z, [x2, #7, MUL VL]
+ rev z0.b, z15.b
+ rev z1.b, z14.b
+ rev z2.b, z13.b
+ rev z3.b, z12.b
+ rev z4.b, z11.b
+ rev z5.b, z10.b
+ rev z6.b, z9.b
+ rev z7.b, z8.b
+ rev RTMP0.b, RIV.b
+ ext z7.b, z7.b, z6.b, #16
+ ext z6.b, z6.b, z5.b, #16
+ ext z5.b, z5.b, z4.b, #16
+ ext z4.b, z4.b, z3.b, #16
+ ext z3.b, z3.b, z2.b, #16
+ ext z2.b, z2.b, z1.b, #16
+ ext z1.b, z1.b, z0.b, #16
+ ext z0.b, z0.b, RTMP0.b, #16
+ rev z7.b, z7.b
+ rev z6.b, z6.b
+ rev z5.b, z5.b
+ rev z4.b, z4.b
+ rev z3.b, z3.b
+ rev z2.b, z2.b
+ rev z1.b, z1.b
+ rev z0.b, z0.b
+ mov RIV.d, z8.d
+
+ SM4_SVE_CE_CRYPT_BLK8(z15, z14, z13, z12, z11, z10, z9, z8)
+
+ eor z0.d, z0.d, z15.d
+ eor z1.d, z1.d, z14.d
+ eor z2.d, z2.d, z13.d
+ eor z3.d, z3.d, z12.d
+ eor z4.d, z4.d, z11.d
+ eor z5.d, z5.d, z10.d
+ eor z6.d, z6.d, z9.d
+ eor z7.d, z7.d, z8.d
+ st1b {z0.b}, p0, [x1]
+ st1b {z1.b}, p0, [x1, #1, MUL VL]
+ st1b {z2.b}, p0, [x1, #2, MUL VL]
+ st1b {z3.b}, p0, [x1, #3, MUL VL]
+ st1b {z4.b}, p0, [x1, #4, MUL VL]
+ st1b {z5.b}, p0, [x1, #5, MUL VL]
+ st1b {z6.b}, p0, [x1, #6, MUL VL]
+ st1b {z7.b}, p0, [x1, #7, MUL VL]
+
+ addvl x2, x2, #8
+ addvl x1, x1, #8
+
+ cbz x4, .Lcbc_dec_end
+ b .Lcbc_dec_loop_8x
+
+.Lcbc_dec_4x:
+ add x4, x4, x5, LSR #1
+ cmp x4, x5, LSR #2
+ blt .Lcbc_dec_loop_1x
+
+ sub x4, x4, x5, LSR #2 /* x4 - (4 * VL) */
+
+ ld1b {z15.b}, p0/z, [x2]
+ ld1b {z14.b}, p0/z, [x2, #1, MUL VL]
+ ld1b {z13.b}, p0/z, [x2, #2, MUL VL]
+ ld1b {z12.b}, p0/z, [x2, #3, MUL VL]
+ rev z0.b, z15.b
+ rev z1.b, z14.b
+ rev z2.b, z13.b
+ rev z3.b, z12.b
+ rev RTMP0.b, RIV.b
+ ext z3.b, z3.b, z2.b, #16
+ ext z2.b, z2.b, z1.b, #16
+ ext z1.b, z1.b, z0.b, #16
+ ext z0.b, z0.b, RTMP0.b, #16
+ rev z3.b, z3.b
+ rev z2.b, z2.b
+ rev z1.b, z1.b
+ rev z0.b, z0.b
+ mov RIV.d, z12.d
+
+ SM4_SVE_CE_CRYPT_BLK4(z15, z14, z13, z12)
+
+ eor z0.d, z0.d, z15.d
+ eor z1.d, z1.d, z14.d
+ eor z2.d, z2.d, z13.d
+ eor z3.d, z3.d, z12.d
+ st1b {z0.b}, p0, [x1]
+ st1b {z1.b}, p0, [x1, #1, MUL VL]
+ st1b {z2.b}, p0, [x1, #2, MUL VL]
+ st1b {z3.b}, p0, [x1, #3, MUL VL]
+
+ addvl x2, x2, #4
+ addvl x1, x1, #4
+
+ cbz x4, .Lcbc_dec_end
+
+.Lcbc_dec_loop_1x:
+ cmp x4, x5, LSR #4
+ blt .Lcbc_dec_ce
+
+ sub x4, x4, x5, LSR #4 /* x4 - VL */
+
+ ld1b {z15.b}, p0/z, [x2]
+ rev RTMP0.b, RIV.b
+ rev z0.b, z15.b
+ ext z0.b, z0.b, RTMP0.b, #16
+ rev z0.b, z0.b
+ mov RIV.d, z15.d
+
+ SM4_SVE_CE_CRYPT_BLK(z15)
+
+ eor z0.d, z0.d, z15.d
+ st1b {z0.b}, p0, [x1]
+
+ addvl x2, x2, #1
+ addvl x1, x1, #1
+
+ cbz x4, .Lcbc_dec_end
+ b .Lcbc_dec_loop_1x
+
+.Lcbc_dec_ce:
+ rev RIV.s, RIV.s
+ tbl RIV.b, {RIV.b}, RSWAP128.b
+
+.Lcbc_dec_ce_loop_1x:
+ sub x4, x4, #1
+
+ ld1 {v15.16b}, [x2], #16
+ mov v0.16b, RIVv.16b
+ mov RIVv.16b, v15.16b
+ SM4_CE_CRYPT_BLK(v15)
+ eor v0.16b, v0.16b, v15.16b
+ st1 {v0.16b}, [x1], #16
+
+ cbnz x4, .Lcbc_dec_ce_loop_1x
+
+ ext RIV.b, RIV.b, RIV.b, #16
+
+.Lcbc_dec_end:
+ /* store new IV */
+ rev RIV.s, RIV.s
+ tbl RIV.b, {RIV.b}, RSWAP128.b
+ st1 {RIVv.16b}, [x3]
+
+ ret
+SYM_FUNC_END(sm4_sve_ce_cbc_dec)
+
+.align 3
+SYM_FUNC_START(sm4_sve_ce_cfb_dec)
+ /* input:
+ * x0: round key array, CTX
+ * x1: dst
+ * x2: src
+ * x3: iv (big endian, 128 bit)
+ * w4: nblocks
+ */
+ uxtw x4, w4
+ SM4_PREPARE(x0)
+
+ ld1 {RIVv.16b}, [x3]
+ ext RIV.b, RIV.b, RIV.b, #16
+
+.Lcfb_dec_loop_8x:
+ sub x4, x4, x5, LSR #1 /* x4 - (8 * VL) */
+ tbnz x4, #63, .Lcfb_dec_4x
+
+ ld1b {z15.b}, p0/z, [x2]
+ ld1b {z14.b}, p0/z, [x2, #1, MUL VL]
+ ld1b {z13.b}, p0/z, [x2, #2, MUL VL]
+ ld1b {z12.b}, p0/z, [x2, #3, MUL VL]
+ ld1b {z11.b}, p0/z, [x2, #4, MUL VL]
+ ld1b {z10.b}, p0/z, [x2, #5, MUL VL]
+ ld1b {z9.b}, p0/z, [x2, #6, MUL VL]
+ ld1b {z8.b}, p0/z, [x2, #7, MUL VL]
+ rev z0.b, z15.b
+ rev z1.b, z14.b
+ rev z2.b, z13.b
+ rev z3.b, z12.b
+ rev z4.b, z11.b
+ rev z5.b, z10.b
+ rev z6.b, z9.b
+ rev z7.b, z8.b
+ rev RTMP0.b, RIV.b
+ ext z7.b, z7.b, z6.b, #16
+ ext z6.b, z6.b, z5.b, #16
+ ext z5.b, z5.b, z4.b, #16
+ ext z4.b, z4.b, z3.b, #16
+ ext z3.b, z3.b, z2.b, #16
+ ext z2.b, z2.b, z1.b, #16
+ ext z1.b, z1.b, z0.b, #16
+ ext z0.b, z0.b, RTMP0.b, #16
+ rev z7.b, z7.b
+ rev z6.b, z6.b
+ rev z5.b, z5.b
+ rev z4.b, z4.b
+ rev z3.b, z3.b
+ rev z2.b, z2.b
+ rev z1.b, z1.b
+ rev z0.b, z0.b
+ mov RIV.d, z8.d
+
+ SM4_SVE_CE_CRYPT_BLK8(z0, z1, z2, z3, z4, z5, z6, z7)
+
+ eor z0.d, z0.d, z15.d
+ eor z1.d, z1.d, z14.d
+ eor z2.d, z2.d, z13.d
+ eor z3.d, z3.d, z12.d
+ eor z4.d, z4.d, z11.d
+ eor z5.d, z5.d, z10.d
+ eor z6.d, z6.d, z9.d
+ eor z7.d, z7.d, z8.d
+ st1b {z0.b}, p0, [x1]
+ st1b {z1.b}, p0, [x1, #1, MUL VL]
+ st1b {z2.b}, p0, [x1, #2, MUL VL]
+ st1b {z3.b}, p0, [x1, #3, MUL VL]
+ st1b {z4.b}, p0, [x1, #4, MUL VL]
+ st1b {z5.b}, p0, [x1, #5, MUL VL]
+ st1b {z6.b}, p0, [x1, #6, MUL VL]
+ st1b {z7.b}, p0, [x1, #7, MUL VL]
+
+ addvl x2, x2, #8
+ addvl x1, x1, #8
+
+ cbz x4, .Lcfb_dec_end
+ b .Lcfb_dec_loop_8x
+
+.Lcfb_dec_4x:
+ add x4, x4, x5, LSR #1
+ cmp x4, x5, LSR #2
+ blt .Lcfb_dec_loop_1x
+
+ sub x4, x4, x5, LSR #2 /* x4 - (4 * VL) */
+
+ ld1b {z15.b}, p0/z, [x2]
+ ld1b {z14.b}, p0/z, [x2, #1, MUL VL]
+ ld1b {z13.b}, p0/z, [x2, #2, MUL VL]
+ ld1b {z12.b}, p0/z, [x2, #3, MUL VL]
+ rev z0.b, z15.b
+ rev z1.b, z14.b
+ rev z2.b, z13.b
+ rev z3.b, z12.b
+ rev RTMP0.b, RIV.b
+ ext z3.b, z3.b, z2.b, #16
+ ext z2.b, z2.b, z1.b, #16
+ ext z1.b, z1.b, z0.b, #16
+ ext z0.b, z0.b, RTMP0.b, #16
+ rev z3.b, z3.b
+ rev z2.b, z2.b
+ rev z1.b, z1.b
+ rev z0.b, z0.b
+ mov RIV.d, z12.d
+
+ SM4_SVE_CE_CRYPT_BLK4(z0, z1, z2, z3)
+
+ eor z0.d, z0.d, z15.d
+ eor z1.d, z1.d, z14.d
+ eor z2.d, z2.d, z13.d
+ eor z3.d, z3.d, z12.d
+ st1b {z0.b}, p0, [x1]
+ st1b {z1.b}, p0, [x1, #1, MUL VL]
+ st1b {z2.b}, p0, [x1, #2, MUL VL]
+ st1b {z3.b}, p0, [x1, #3, MUL VL]
+
+ addvl x2, x2, #4
+ addvl x1, x1, #4
+
+ cbz x4, .Lcfb_dec_end
+
+.Lcfb_dec_loop_1x:
+ cmp x4, x5, LSR #4
+ blt .Lcfb_dec_ce
+
+ sub x4, x4, x5, LSR #4 /* x4 - VL */
+
+ ld1b {z15.b}, p0/z, [x2]
+ rev RTMP0.b, RIV.b
+ rev z0.b, z15.b
+ ext z0.b, z0.b, RTMP0.b, #16
+ rev z0.b, z0.b
+ mov RIV.d, z15.d
+
+ SM4_SVE_CE_CRYPT_BLK(z0)
+
+ eor z0.d, z0.d, z15.d
+ st1b {z0.b}, p0, [x1]
+
+ addvl x2, x2, #1
+ addvl x1, x1, #1
+
+ cbz x4, .Lcfb_dec_end
+ b .Lcfb_dec_loop_1x
+
+.Lcfb_dec_ce:
+ rev RIV.s, RIV.s
+ tbl RIV.b, {RIV.b}, RSWAP128.b
+
+.Lcfb_dec_ce_loop_1x:
+ sub x4, x4, #1
+
+ ld1 {v15.16b}, [x2], #16
+ mov v0.16b, RIVv.16b
+ mov RIVv.16b, v15.16b
+ SM4_CE_CRYPT_BLK(v0)
+ eor v0.16b, v0.16b, v15.16b
+ st1 {v0.16b}, [x1], #16
+
+ cbnz x4, .Lcfb_dec_ce_loop_1x
+
+ ext RIV.b, RIV.b, RIV.b, #16
+
+.Lcfb_dec_end:
+ /* store new IV */
+ rev RIV.s, RIV.s
+ tbl RIV.b, {RIV.b}, RSWAP128.b
+ st1 {RIVv.16b}, [x3]
+
+ ret
+SYM_FUNC_END(sm4_sve_ce_cfb_dec)
+
+.align 3
+SYM_FUNC_START(sm4_sve_ce_ctr_crypt)
+ /* input:
+ * x0: round key array, CTX
+ * x1: dst
+ * x2: src
+ * x3: ctr (big endian, 128 bit)
+ * w4: nblocks
+ */
+ uxtw x4, w4
+ SM4_PREPARE(x0)
+
+ dup RZERO.d, #0
+ adr_l x6, .Lle128_inc
+ ld1b {RLE128_INC.b}, p0/z, [x6]
+
+ ldp x7, x8, [x3]
+ rev x7, x7
+ rev x8, x8
+
+.Lctr_loop_8x:
+ sub x4, x4, x5, LSR #1 /* x4 - (8 * VL) */
+ tbnz x4, #63, .Lctr_4x
+
+ inc_le128_8x(z0, z1, z2, z3, z4, z5, z6, z7)
+
+ ld1b {z8.b}, p0/z, [x2]
+ ld1b {z9.b}, p0/z, [x2, #1, MUL VL]
+ ld1b {z10.b}, p0/z, [x2, #2, MUL VL]
+ ld1b {z11.b}, p0/z, [x2, #3, MUL VL]
+ ld1b {z12.b}, p0/z, [x2, #4, MUL VL]
+ ld1b {z13.b}, p0/z, [x2, #5, MUL VL]
+ ld1b {z14.b}, p0/z, [x2, #6, MUL VL]
+ ld1b {z15.b}, p0/z, [x2, #7, MUL VL]
+
+ SM4_SVE_CE_CRYPT_BLK8(z0, z1, z2, z3, z4, z5, z6, z7)
+
+ eor z0.d, z0.d, z8.d
+ eor z1.d, z1.d, z9.d
+ eor z2.d, z2.d, z10.d
+ eor z3.d, z3.d, z11.d
+ eor z4.d, z4.d, z12.d
+ eor z5.d, z5.d, z13.d
+ eor z6.d, z6.d, z14.d
+ eor z7.d, z7.d, z15.d
+
+ st1b {z0.b}, p0, [x1]
+ st1b {z1.b}, p0, [x1, #1, MUL VL]
+ st1b {z2.b}, p0, [x1, #2, MUL VL]
+ st1b {z3.b}, p0, [x1, #3, MUL VL]
+ st1b {z4.b}, p0, [x1, #4, MUL VL]
+ st1b {z5.b}, p0, [x1, #5, MUL VL]
+ st1b {z6.b}, p0, [x1, #6, MUL VL]
+ st1b {z7.b}, p0, [x1, #7, MUL VL]
+
+ addvl x2, x2, #8
+ addvl x1, x1, #8
+
+ cbz x4, .Lctr_end
+ b .Lctr_loop_8x
+
+.Lctr_4x:
+ add x4, x4, x5, LSR #1
+ cmp x4, x5, LSR #2
+ blt .Lctr_loop_1x
+
+ sub x4, x4, x5, LSR #2 /* x4 - (4 * VL) */
+
+ inc_le128_4x(z0, z1, z2, z3)
+
+ ld1b {z8.b}, p0/z, [x2]
+ ld1b {z9.b}, p0/z, [x2, #1, MUL VL]
+ ld1b {z10.b}, p0/z, [x2, #2, MUL VL]
+ ld1b {z11.b}, p0/z, [x2, #3, MUL VL]
+
+ SM4_SVE_CE_CRYPT_BLK4(z0, z1, z2, z3)
+
+ eor z0.d, z0.d, z8.d
+ eor z1.d, z1.d, z9.d
+ eor z2.d, z2.d, z10.d
+ eor z3.d, z3.d, z11.d
+
+ st1b {z0.b}, p0, [x1]
+ st1b {z1.b}, p0, [x1, #1, MUL VL]
+ st1b {z2.b}, p0, [x1, #2, MUL VL]
+ st1b {z3.b}, p0, [x1, #3, MUL VL]
+
+ addvl x2, x2, #4
+ addvl x1, x1, #4
+
+ cbz x4, .Lctr_end
+
+.Lctr_loop_1x:
+ cmp x4, x5, LSR #4
+ blt .Lctr_ce_loop_1x
+
+ sub x4, x4, x5, LSR #4 /* x4 - VL */
+
+ inc_le128(z0)
+ ld1b {z8.b}, p0/z, [x2]
+
+ SM4_SVE_CE_CRYPT_BLK(z0)
+
+ eor z0.d, z0.d, z8.d
+ st1b {z0.b}, p0, [x1]
+
+ addvl x2, x2, #1
+ addvl x1, x1, #1
+
+ cbz x4, .Lctr_end
+ b .Lctr_loop_1x
+
+.Lctr_ce_loop_1x:
+ sub x4, x4, #1
+
+ /* inc_le128 for CE */
+ mov v0.d[1], x8
+ mov v0.d[0], x7
+ adds x8, x8, #1
+ rev64 v0.16b, v0.16b
+ adc x7, x7, xzr
+
+ ld1 {v8.16b}, [x2], #16
+
+ SM4_CE_CRYPT_BLK(v0)
+
+ eor v0.16b, v0.16b, v8.16b
+ st1 {v0.16b}, [x1], #16
+
+ cbnz x4, .Lctr_ce_loop_1x
+
+.Lctr_end:
+ /* store new CTR */
+ rev x7, x7
+ rev x8, x8
+ stp x7, x8, [x3]
+
+ ret
+SYM_FUNC_END(sm4_sve_ce_ctr_crypt)
+
+.align 3
+SYM_FUNC_START(sm4_sve_get_vl)
+ /* VL in bytes */
+ rdvl x0, #1
+
+ ret
+SYM_FUNC_END(sm4_sve_get_vl)
+
+
+ .section ".rodata", "a"
+ .align 4
+.Lbswap128_mask:
+ .byte 0x0c, 0x0d, 0x0e, 0x0f, 0x08, 0x09, 0x0a, 0x0b
+ .byte 0x04, 0x05, 0x06, 0x07, 0x00, 0x01, 0x02, 0x03
+ .byte 0x1c, 0x1d, 0x1e, 0x1f, 0x18, 0x19, 0x1a, 0x1b
+ .byte 0x14, 0x15, 0x16, 0x17, 0x10, 0x11, 0x12, 0x13
+ .byte 0x2c, 0x2d, 0x2e, 0x2f, 0x28, 0x29, 0x2a, 0x2b
+ .byte 0x24, 0x25, 0x26, 0x27, 0x20, 0x21, 0x22, 0x23
+ .byte 0x3c, 0x3d, 0x3e, 0x3f, 0x38, 0x39, 0x3a, 0x3b
+ .byte 0x34, 0x35, 0x36, 0x37, 0x30, 0x31, 0x32, 0x33
+ .byte 0x4c, 0x4d, 0x4e, 0x4f, 0x48, 0x49, 0x4a, 0x4b
+ .byte 0x44, 0x45, 0x46, 0x47, 0x40, 0x41, 0x42, 0x43
+ .byte 0x5c, 0x5d, 0x5e, 0x5f, 0x58, 0x59, 0x5a, 0x5b
+ .byte 0x54, 0x55, 0x56, 0x57, 0x50, 0x51, 0x52, 0x53
+ .byte 0x6c, 0x6d, 0x6e, 0x6f, 0x68, 0x69, 0x6a, 0x6b
+ .byte 0x64, 0x65, 0x66, 0x67, 0x60, 0x61, 0x62, 0x63
+ .byte 0x7c, 0x7d, 0x7e, 0x7f, 0x78, 0x79, 0x7a, 0x7b
+ .byte 0x74, 0x75, 0x76, 0x77, 0x70, 0x71, 0x72, 0x73
+ .byte 0x8c, 0x8d, 0x8e, 0x8f, 0x88, 0x89, 0x8a, 0x8b
+ .byte 0x84, 0x85, 0x86, 0x87, 0x80, 0x81, 0x82, 0x83
+ .byte 0x9c, 0x9d, 0x9e, 0x9f, 0x98, 0x99, 0x9a, 0x9b
+ .byte 0x94, 0x95, 0x96, 0x97, 0x90, 0x91, 0x92, 0x93
+ .byte 0xac, 0xad, 0xae, 0xaf, 0xa8, 0xa9, 0xaa, 0xab
+ .byte 0xa4, 0xa5, 0xa6, 0xa7, 0xa0, 0xa1, 0xa2, 0xa3
+ .byte 0xbc, 0xbd, 0xbe, 0xbf, 0xb8, 0xb9, 0xba, 0xbb
+ .byte 0xb4, 0xb5, 0xb6, 0xb7, 0xb0, 0xb1, 0xb2, 0xb3
+ .byte 0xcc, 0xcd, 0xce, 0xcf, 0xc8, 0xc9, 0xca, 0xcb
+ .byte 0xc4, 0xc5, 0xc6, 0xc7, 0xc0, 0xc1, 0xc2, 0xc3
+ .byte 0xdc, 0xdd, 0xde, 0xdf, 0xd8, 0xd9, 0xda, 0xdb
+ .byte 0xd4, 0xd5, 0xd6, 0xd7, 0xd0, 0xd1, 0xd2, 0xd3
+ .byte 0xec, 0xed, 0xee, 0xef, 0xe8, 0xe9, 0xea, 0xeb
+ .byte 0xe4, 0xe5, 0xe6, 0xe7, 0xe0, 0xe1, 0xe2, 0xe3
+ .byte 0xfc, 0xfd, 0xfe, 0xff, 0xf8, 0xf9, 0xfa, 0xfb
+ .byte 0xf4, 0xf5, 0xf6, 0xf7, 0xf0, 0xf1, 0xf2, 0xf3
+
+.Lle128_inc:
+ .byte 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00
+ .byte 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00
+ .byte 0x01, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00
+ .byte 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00
+ .byte 0x02, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00
+ .byte 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00
+ .byte 0x03, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00
+ .byte 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00
+ .byte 0x04, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00
+ .byte 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00
+ .byte 0x05, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00
+ .byte 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00
+ .byte 0x06, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00
+ .byte 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00
+ .byte 0x07, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00
+ .byte 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00
+ .byte 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00
+ .byte 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00
+ .byte 0x09, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00
+ .byte 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00
+ .byte 0x0a, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00
+ .byte 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00
+ .byte 0x0b, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00
+ .byte 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00
+ .byte 0x0c, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00
+ .byte 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00
+ .byte 0x0d, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00
+ .byte 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00
+ .byte 0x0e, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00
+ .byte 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00
+ .byte 0x0f, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00
+ .byte 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00
diff --git a/arch/arm64/crypto/sm4-sve-ce-glue.c b/arch/arm64/crypto/sm4-sve-ce-glue.c
new file mode 100644
index 000000000000..fc797b72b5f0
--- /dev/null
+++ b/arch/arm64/crypto/sm4-sve-ce-glue.c
@@ -0,0 +1,332 @@
+/* SPDX-License-Identifier: GPL-2.0-or-later */
+/*
+ * SM4 Cipher Algorithm, using ARMv9 Crypto Extensions with SVE2
+ * as specified in
+ * https://tools.ietf.org/id/draft-ribose-cfrg-sm4-10.html
+ *
+ * Copyright (C) 2022, Alibaba Group.
+ * Copyright (C) 2022 Tianjia Zhang <[email protected]>
+ */
+
+#include <linux/module.h>
+#include <linux/crypto.h>
+#include <linux/kernel.h>
+#include <linux/cpufeature.h>
+#include <asm/neon.h>
+#include <asm/simd.h>
+#include <crypto/internal/simd.h>
+#include <crypto/internal/skcipher.h>
+#include <crypto/sm4.h>
+#include "sm4-ce.h"
+
+asmlinkage void sm4_sve_ce_crypt(const u32 *rkey, u8 *dst,
+ const u8 *src, unsigned int nblocks);
+asmlinkage void sm4_sve_ce_cbc_dec(const u32 *rkey_dec, u8 *dst,
+ const u8 *src, u8 *iv,
+ unsigned int nblocks);
+asmlinkage void sm4_sve_ce_cfb_dec(const u32 *rkey_enc, u8 *dst,
+ const u8 *src, u8 *iv,
+ unsigned int nblocks);
+asmlinkage void sm4_sve_ce_ctr_crypt(const u32 *rkey_enc, u8 *dst,
+ const u8 *src, u8 *iv,
+ unsigned int nblocks);
+asmlinkage unsigned int sm4_sve_get_vl(void);
+
+
+static int sm4_setkey(struct crypto_skcipher *tfm, const u8 *key,
+ unsigned int key_len)
+{
+ struct sm4_ctx *ctx = crypto_skcipher_ctx(tfm);
+
+ if (key_len != SM4_KEY_SIZE)
+ return -EINVAL;
+
+ kernel_neon_begin();
+ sm4_ce_expand_key(key, ctx->rkey_enc, ctx->rkey_dec,
+ crypto_sm4_fk, crypto_sm4_ck);
+ kernel_neon_end();
+
+ return 0;
+}
+
+static int ecb_crypt(struct skcipher_request *req, const u32 *rkey)
+{
+ struct skcipher_walk walk;
+ unsigned int nbytes;
+ int err;
+
+ err = skcipher_walk_virt(&walk, req, false);
+
+ while ((nbytes = walk.nbytes) > 0) {
+ const u8 *src = walk.src.virt.addr;
+ u8 *dst = walk.dst.virt.addr;
+ unsigned int nblocks;
+
+ nblocks = nbytes / SM4_BLOCK_SIZE;
+ if (nblocks) {
+ kernel_neon_begin();
+
+ sm4_sve_ce_crypt(rkey, dst, src, nblocks);
+
+ kernel_neon_end();
+ }
+
+ err = skcipher_walk_done(&walk, nbytes % SM4_BLOCK_SIZE);
+ }
+
+ return err;
+}
+
+static int ecb_encrypt(struct skcipher_request *req)
+{
+ struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
+ struct sm4_ctx *ctx = crypto_skcipher_ctx(tfm);
+
+ return ecb_crypt(req, ctx->rkey_enc);
+}
+
+static int ecb_decrypt(struct skcipher_request *req)
+{
+ struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
+ struct sm4_ctx *ctx = crypto_skcipher_ctx(tfm);
+
+ return ecb_crypt(req, ctx->rkey_dec);
+}
+
+static int cbc_crypt(struct skcipher_request *req, const u32 *rkey,
+ void (*sm4_cbc_crypt)(const u32 *rkey, u8 *dst,
+ const u8 *src, u8 *iv, unsigned int nblocks))
+{
+ struct skcipher_walk walk;
+ unsigned int nbytes;
+ int err;
+
+ err = skcipher_walk_virt(&walk, req, false);
+
+ while ((nbytes = walk.nbytes) > 0) {
+ const u8 *src = walk.src.virt.addr;
+ u8 *dst = walk.dst.virt.addr;
+ unsigned int nblocks;
+
+ nblocks = nbytes / SM4_BLOCK_SIZE;
+ if (nblocks) {
+ kernel_neon_begin();
+
+ sm4_cbc_crypt(rkey, dst, src, walk.iv, nblocks);
+
+ kernel_neon_end();
+ }
+
+ err = skcipher_walk_done(&walk, nbytes % SM4_BLOCK_SIZE);
+ }
+
+ return err;
+}
+
+static int cbc_encrypt(struct skcipher_request *req)
+{
+ struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
+ struct sm4_ctx *ctx = crypto_skcipher_ctx(tfm);
+
+ return cbc_crypt(req, ctx->rkey_enc, sm4_ce_cbc_enc);
+}
+
+static int cbc_decrypt(struct skcipher_request *req)
+{
+ struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
+ struct sm4_ctx *ctx = crypto_skcipher_ctx(tfm);
+
+ return cbc_crypt(req, ctx->rkey_dec, sm4_sve_ce_cbc_dec);
+}
+
+static int cfb_crypt(struct skcipher_request *req,
+ void (*sm4_cfb_crypt)(const u32 *rkey, u8 *dst,
+ const u8 *src, u8 *iv, unsigned int nblocks))
+{
+ struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
+ struct sm4_ctx *ctx = crypto_skcipher_ctx(tfm);
+ struct skcipher_walk walk;
+ unsigned int nbytes;
+ int err;
+
+ err = skcipher_walk_virt(&walk, req, false);
+
+ while ((nbytes = walk.nbytes) > 0) {
+ const u8 *src = walk.src.virt.addr;
+ u8 *dst = walk.dst.virt.addr;
+ unsigned int nblocks;
+
+ nblocks = nbytes / SM4_BLOCK_SIZE;
+ if (nblocks) {
+ kernel_neon_begin();
+
+ sm4_cfb_crypt(ctx->rkey_enc, dst, src,
+ walk.iv, nblocks);
+
+ kernel_neon_end();
+
+ dst += nblocks * SM4_BLOCK_SIZE;
+ src += nblocks * SM4_BLOCK_SIZE;
+ nbytes -= nblocks * SM4_BLOCK_SIZE;
+ }
+
+ /* tail */
+ if (walk.nbytes == walk.total && nbytes > 0) {
+ u8 keystream[SM4_BLOCK_SIZE];
+
+ sm4_ce_crypt_block(ctx->rkey_enc, keystream, walk.iv);
+ crypto_xor_cpy(dst, src, keystream, nbytes);
+ nbytes = 0;
+ }
+
+ err = skcipher_walk_done(&walk, nbytes);
+ }
+
+ return err;
+}
+
+static int cfb_encrypt(struct skcipher_request *req)
+{
+ return cfb_crypt(req, sm4_ce_cfb_enc);
+}
+
+static int cfb_decrypt(struct skcipher_request *req)
+{
+ return cfb_crypt(req, sm4_sve_ce_cfb_dec);
+}
+
+static int ctr_crypt(struct skcipher_request *req)
+{
+ struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
+ struct sm4_ctx *ctx = crypto_skcipher_ctx(tfm);
+ struct skcipher_walk walk;
+ unsigned int nbytes;
+ int err;
+
+ err = skcipher_walk_virt(&walk, req, false);
+
+ while ((nbytes = walk.nbytes) > 0) {
+ const u8 *src = walk.src.virt.addr;
+ u8 *dst = walk.dst.virt.addr;
+ unsigned int nblocks;
+
+ nblocks = nbytes / SM4_BLOCK_SIZE;
+ if (nblocks) {
+ kernel_neon_begin();
+
+ sm4_sve_ce_ctr_crypt(ctx->rkey_enc, dst, src,
+ walk.iv, nblocks);
+
+ kernel_neon_end();
+
+ dst += nblocks * SM4_BLOCK_SIZE;
+ src += nblocks * SM4_BLOCK_SIZE;
+ nbytes -= nblocks * SM4_BLOCK_SIZE;
+ }
+
+ /* tail */
+ if (walk.nbytes == walk.total && nbytes > 0) {
+ u8 keystream[SM4_BLOCK_SIZE];
+
+ sm4_ce_crypt_block(ctx->rkey_enc, keystream, walk.iv);
+ crypto_inc(walk.iv, SM4_BLOCK_SIZE);
+ crypto_xor_cpy(dst, src, keystream, nbytes);
+ nbytes = 0;
+ }
+
+ err = skcipher_walk_done(&walk, nbytes);
+ }
+
+ return err;
+}
+
+static struct skcipher_alg sm4_algs[] = {
+ {
+ .base = {
+ .cra_name = "ecb(sm4)",
+ .cra_driver_name = "ecb-sm4-sve-ce",
+ .cra_priority = 500,
+ .cra_blocksize = SM4_BLOCK_SIZE,
+ .cra_ctxsize = sizeof(struct sm4_ctx),
+ .cra_module = THIS_MODULE,
+ },
+ .min_keysize = SM4_KEY_SIZE,
+ .max_keysize = SM4_KEY_SIZE,
+ .setkey = sm4_setkey,
+ .encrypt = ecb_encrypt,
+ .decrypt = ecb_decrypt,
+ }, {
+ .base = {
+ .cra_name = "cbc(sm4)",
+ .cra_driver_name = "cbc-sm4-sve-ce",
+ .cra_priority = 500,
+ .cra_blocksize = SM4_BLOCK_SIZE,
+ .cra_ctxsize = sizeof(struct sm4_ctx),
+ .cra_module = THIS_MODULE,
+ },
+ .min_keysize = SM4_KEY_SIZE,
+ .max_keysize = SM4_KEY_SIZE,
+ .ivsize = SM4_BLOCK_SIZE,
+ .setkey = sm4_setkey,
+ .encrypt = cbc_encrypt,
+ .decrypt = cbc_decrypt,
+ }, {
+ .base = {
+ .cra_name = "cfb(sm4)",
+ .cra_driver_name = "cfb-sm4-sve-ce",
+ .cra_priority = 500,
+ .cra_blocksize = 1,
+ .cra_ctxsize = sizeof(struct sm4_ctx),
+ .cra_module = THIS_MODULE,
+ },
+ .min_keysize = SM4_KEY_SIZE,
+ .max_keysize = SM4_KEY_SIZE,
+ .ivsize = SM4_BLOCK_SIZE,
+ .chunksize = SM4_BLOCK_SIZE,
+ .setkey = sm4_setkey,
+ .encrypt = cfb_encrypt,
+ .decrypt = cfb_decrypt,
+ }, {
+ .base = {
+ .cra_name = "ctr(sm4)",
+ .cra_driver_name = "ctr-sm4-sve-ce",
+ .cra_priority = 500,
+ .cra_blocksize = 1,
+ .cra_ctxsize = sizeof(struct sm4_ctx),
+ .cra_module = THIS_MODULE,
+ },
+ .min_keysize = SM4_KEY_SIZE,
+ .max_keysize = SM4_KEY_SIZE,
+ .ivsize = SM4_BLOCK_SIZE,
+ .chunksize = SM4_BLOCK_SIZE,
+ .setkey = sm4_setkey,
+ .encrypt = ctr_crypt,
+ .decrypt = ctr_crypt,
+ }
+};
+
+static int __init sm4_sve_ce_init(void)
+{
+ if (sm4_sve_get_vl() <= 16)
+ return -ENODEV;
+
+ return crypto_register_skciphers(sm4_algs, ARRAY_SIZE(sm4_algs));
+}
+
+static void __exit sm4_sve_ce_exit(void)
+{
+ crypto_unregister_skciphers(sm4_algs, ARRAY_SIZE(sm4_algs));
+}
+
+module_cpu_feature_match(SVESM4, sm4_sve_ce_init);
+module_exit(sm4_sve_ce_exit);
+
+MODULE_DESCRIPTION("SM4 ECB/CBC/CFB/CTR using ARMv9 Crypto Extensions with SVE2");
+MODULE_ALIAS_CRYPTO("sm4-sve-ce");
+MODULE_ALIAS_CRYPTO("sm4");
+MODULE_ALIAS_CRYPTO("ecb(sm4)");
+MODULE_ALIAS_CRYPTO("cbc(sm4)");
+MODULE_ALIAS_CRYPTO("cfb(sm4)");
+MODULE_ALIAS_CRYPTO("ctr(sm4)");
+MODULE_AUTHOR("Tianjia Zhang <[email protected]>");
+MODULE_LICENSE("GPL v2");
--
2.24.3 (Apple Git-128)

2022-09-26 10:15:39

by Ard Biesheuvel

[permalink] [raw]
Subject: Re: [PATCH 16/16] crypto: arm64/sm4 - add ARMv9 SVE cryptography acceleration implementation

(cc Mark Brown)

Hello Tianjia,

On Mon, 26 Sept 2022 at 11:37, Tianjia Zhang
<[email protected]> wrote:
>
> Scalable Vector Extension (SVE) is the next-generation SIMD extension for
> arm64. SVE allows flexible vector length implementations with a range of
> possible values in CPU implementations. The vector length can vary from a
> minimum of 128 bits up to a maximum of 2048 bits, at 128-bit increments.
> The SVE design guarantees that the same application can run on different
> implementations that support SVE, without the need to recompile the code.
>
> SVE was originally introduced in ARMv8, and ARMv9 introduced SVE2 to
> expand and improve on it. Similar to the Crypto Extensions supported by
> the NEON instruction set, SVE also provides similar instructions, called
> cryptography acceleration instructions, but these are likewise an
> optional part of the instruction set.
>
> This patch uses the SM4 cryptography acceleration instructions together
> with SVE2 instructions to optimize the SM4 algorithm for the
> ECB/CBC/CFB/CTR modes. Since CBC/CFB encryption cannot be parallelized,
> the Crypto Extension instructions are used for those cases.
>

Given that we currently do not support the use of SVE in kernel mode,
this patch cannot be accepted at this time (but the rest of the series
looks reasonable to me, although I have only skimmed over the patches)

In view of the disappointing benchmark results below, I don't think
this is worth the hassle at the moment. If we can find a case where
using SVE in kernel mode truly makes a [favorable] difference, we can
revisit this, but not without a thorough analysis of the impact it
will have to support SVE in the kernel. Also, the fact that SVE may
also cover cryptographic extensions does not necessarily imply that a
micro-architecture will perform those crypto transformations in
parallel and so the performance may be the same even if VL > 128.

In summary, please drop this patch for now, and once there are more
encouraging performance numbers, please resubmit it as part of a
series that explicitly enables SVE in kernel mode on arm64, and
documents the requirements and constraints.

I have cc'ed Mark, who has been working on the SVE support and might
have something to add here as well.

Thanks,
Ard.



> Since no test environment with a Vector Length (VL) greater than 128 bits
> was available, the performance data was obtained on a machine whose VL is
> 128 bits. Because this driver is only enabled when the VL is greater than
> 128 bits, these numbers are for reference only. The data shows little
> difference between the Crypto Extension results and the SVE results at
> VL = 128 bits; the optimization effect should become more obvious when
> VL is 256 bits or longer.
>
> Benchmark on T-Head Yitian-710 2.75 GHz, using the 218 mode of tcrypt,
> compared against the Crypto Extension implementation. The columns are
> block sizes in bytes and the unit is Mb/s:
>
> sm4-ce | 16 64 128 256 1024 1420 4096
> ------------+--------------------------------------------------------------
> ECB enc | 315.18 1162.65 1815.66 2553.50 3692.91 3727.20 4001.93
> ECB dec | 316.06 1172.97 1817.81 2554.66 3692.18 3786.54 4001.93
> CBC enc | 304.82 629.54 768.65 864.72 953.90 963.32 974.06
> CBC dec | 306.05 1142.53 1805.11 2481.67 3522.06 3587.87 3790.99
> CFB enc | 309.48 635.70 774.44 865.85 950.62 952.68 968.24
> CFB dec | 315.98 1170.38 1828.75 2509.72 3543.63 3539.40 3793.25
> CTR enc | 285.83 1036.59 1583.50 2147.26 2933.54 2954.66 3041.14
> CTR dec | 285.29 1037.47 1584.67 2145.51 2934.10 2950.89 3041.62
>
> sm4-sve-ce (VL = 128 bits)
> ECB enc | 310.00 1154.70 1813.26 2579.74 3766.90 3869.45 4100.26
> ECB dec | 315.60 1176.22 1838.06 2593.69 3774.95 3878.42 4098.83
> CBC enc | 303.44 622.65 764.67 861.40 953.18 963.05 973.77
> CBC dec | 302.13 1091.15 1689.10 2267.79 3182.84 3242.68 3408.92
> CFB enc | 296.62 620.41 762.94 858.96 948.18 956.04 967.67
> CFB dec | 291.23 1065.50 1637.33 2228.12 3158.52 3213.35 3403.83
> CTR enc | 272.27 959.35 1466.34 1934.24 2562.80 2595.87 2695.15
> CTR dec | 273.40 963.65 1471.83 1938.97 2563.12 2597.25 2694.54
>
> Signed-off-by: Tianjia Zhang <[email protected]>
> ---
> arch/arm64/crypto/Kconfig | 19 +
> arch/arm64/crypto/Makefile | 3 +
> arch/arm64/crypto/sm4-sve-ce-core.S | 1028 +++++++++++++++++++++++++++
> arch/arm64/crypto/sm4-sve-ce-glue.c | 332 +++++++++
> 4 files changed, 1382 insertions(+)
> create mode 100644 arch/arm64/crypto/sm4-sve-ce-core.S
> create mode 100644 arch/arm64/crypto/sm4-sve-ce-glue.c
>

2022-09-26 17:48:38

by Mark Brown

[permalink] [raw]
Subject: Re: [PATCH 16/16] crypto: arm64/sm4 - add ARMv9 SVE cryptography acceleration implementation

On Mon, Sep 26, 2022 at 12:02:04PM +0200, Ard Biesheuvel wrote:

> Given that we currently do not support the use of SVE in kernel mode,
> this patch cannot be accepted at this time (but the rest of the series
> looks reasonable to me, although I have only skimmed over the patches)

> In view of the disappointing benchmark results below, I don't think
> this is worth the hassle at the moment. If we can find a case where
> using SVE in kernel mode truly makes a [favorable] difference, we can
> revisit this, but not without a thorough analysis of the impact it
> will have to support SVE in the kernel. Also, the fact that SVE may

The kernel code doesn't really distinguish between FPSIMD and SVE in
terms of state management, and with the sharing of the V and Z registers
the architecture is very similar too, so it shouldn't be too much hassle.
The only thing we should need is some management of the VL when
starting kernel mode SVE (probably just setting the maximum VL as a
first pass).
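
For illustration, a minimal sketch of what that first pass might look
like, reusing kernel_neon_begin()/kernel_neon_end() for the state
management and programming ZCR_EL1 to request the maximum VL. The names
and the sysreg handling here are illustrative only, not a worked-out
implementation:

#include <asm/barrier.h>
#include <asm/neon.h>
#include <asm/sysreg.h>

static inline void kernel_sve_begin(void)
{
        /* Reuse the existing FPSIMD save/restore machinery. */
        kernel_neon_begin();

        /*
         * Setting all ZCR_EL1.LEN bits asks for the maximum VL; the
         * hardware constrains the result to the largest implemented VL.
         */
        write_sysreg_s(read_sysreg_s(SYS_ZCR_EL1) | ZCR_ELx_LEN_MASK,
                       SYS_ZCR_EL1);
        isb();
}

static inline void kernel_sve_end(void)
{
        kernel_neon_end();
}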

The current code should *work*, and on a system with only a single VL
supported it'd be equivalent since setting the VL is a noop. It'd just
mean that any kernel mode SVE would end up using whatever the last VL
set on the PE happened to be, which could result in inconsistent
performance.

> also cover cryptographic extensions does not necessarily imply that a
> micro-architecture will perform those crypto transformations in
> parallel and so the performance may be the same even if VL > 128.

Indeed, though so long as the performance is comparable I guess it
doesn't really hurt - if we run into situations where for some
implementations SVE performs worse then we'd need to do something more
complicated than just using SVE if it's available but...

> In summary, please drop this patch for now, and once there are more
> encouraging performance numbers, please resubmit it as part of a
> series that explicitly enables SVE in kernel mode on arm64, and
> documents the requirements and constraints.

...in any case as you say until there are cases where SVE does better
for some in-kernel use case, we probably just shouldn't merge things.

Having said that I have been tempted to put together a branch which has
a kernel_sve_begin() implementation and collects proposed algorithm
implementations so they're there for people to experiment with as new
hardware becomes available. There's clearly interest in trying to use
SVE in kernel and it makes sense to try to avoid common pitfalls and
reduce duplication of effort.

A couple of very minor comments on the patch:

> > +config CRYPTO_SM4_ARM64_SVE_CE_BLK
> > + tristate "Ciphers: SM4, modes: ECB/CBC/CFB/CTR (ARMv9 cryptography acceleration with SVE2)"
> > + depends on KERNEL_MODE_NEON
> > + select CRYPTO_SKCIPHER
> > + select CRYPTO_SM4
> > + select CRYPTO_SM4_ARM64_CE_BLK
> > + help

Our current baseline binutils version requirement predates SVE support,
so we'd either need to manually encode all SVE instructions used or add
a suitable dependency. The dependency seems a lot more reasonable here,
and we could require a new enough version to avoid the manual encoding
that is done in the patch (though I've not checked how new a version
that'd end up requiring; it might be unreasonable, so perhaps just
depending on binutils having basic SVE support and continuing with the
manual encoding might be more helpful).

> > +.macro sm4e, vd, vn
> > + .inst 0xcec08400 | (.L\vn << 5) | .L\vd
> > +.endm

For any manual encodings that do get left it'd be good to note the
binutils and LLVM versions which support the instruction so we can
hopefully at some point switch to assembling them normally.

> > +static int __init sm4_sve_ce_init(void)
> > +{
> > + if (sm4_sve_get_vl() <= 16)
> > + return -ENODEV;

I'm not clear what this check is attempting to guard against - what's
the issue with larger VLs?

If it is needed then we already have a sve_get_vl() in the core kernel
which we should probably be making available to modules rather than
having them open code something (eg, making it a static inline rather
than putting it in asm).
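
As a rough sketch of that suggestion (assuming the assembler accepts the
RDVL instruction; if not, a manual encoding would be needed, as
elsewhere in the patch), the helper could become something like:

/* Return the current SVE vector length in bytes. */
static inline unsigned int sve_get_vl(void)
{
        unsigned long vl;

        /* RDVL with multiplier #1 reads the VL in bytes. */
        asm volatile("rdvl %0, #1" : "=r" (vl));

        return vl;
}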



2022-09-27 04:33:12

by Tianjia Zhang

[permalink] [raw]
Subject: Re: [PATCH 16/16] crypto: arm64/sm4 - add ARMv9 SVE cryptography acceleration implementation

Hi Mark,

On 9/27/22 1:14 AM, Mark Brown wrote:
> On Mon, Sep 26, 2022 at 12:02:04PM +0200, Ard Biesheuvel wrote:
>
>> Given that we currently do not support the use of SVE in kernel mode,
>> this patch cannot be accepted at this time (but the rest of the series
>> looks reasonable to me, although I have only skimmed over the patches)
>
>> In view of the disappointing benchmark results below, I don't think
>> this is worth the hassle at the moment. If we can find a case where
>> using SVE in kernel mode truly makes a [favorable] difference, we can
>> revisit this, but not without a thorough analysis of the impact it
>> will have to support SVE in the kernel. Also, the fact that SVE may
>
> The kernel code doesn't really distinguish between FPSIMD and SVE in
> terms of state management, and with the sharing of the V and Z registers
> the architecture is very similar too, so it shouldn't be too much hassle.
> The only thing we should need is some management of the VL when
> starting kernel mode SVE (probably just setting the maximum VL as a
> first pass).
>
> The current code should *work*, and on a system with only a single VL
> supported it'd be equivalent since setting the VL is a noop. It'd just
> mean that any kernel mode SVE would end up using whatever the last VL
> set on the PE happened to be, which could result in inconsistent
> performance.
>
>> also cover cryptographic extensions does not necessarily imply that a
>> micro-architecture will perform those crypto transformations in
>> parallel and so the performance may be the same even if VL > 128.
>
> Indeed, though so long as the performance is comparable I guess it
> doesn't really hurt - if we run into situations where for some
> implementations SVE performs worse then we'd need to do something more
> complicated than just using SVE if it's available but...
>
>> In summary, please drop this patch for now, and once there are more
>> encouraging performance numbers, please resubmit it as part of a
>> series that explicitly enables SVE in kernel mode on arm64, and
>> documents the requirements and constraints.
>
> ...in any case as you say until there are cases where SVE does better
> for some in-kernel use case, we probably just shouldn't merge things.
>
> Having said that I have been tempted to put together a branch which has
> a kernel_sve_begin() implementation and collects proposed algorithm
> implementations so they're there for people to experiment with as new
> hardware becomes available. There's clearly interest in trying to use
> SVE in kernel and it makes sense to try to avoid common pitfalls and
> reduce duplication of effort.
>

Your reply helped me a lot. I did encounter problems in a QEMU
environment with a VL larger than 128 bits, but when I tested the same
code with the pure user-mode library libgcrypt it seemed to work
normally, so perhaps it is just a coincidence that it works fine on my
128-bit physical machine.

I am looking forward to your experimental branch, and I believe that
there will be breakthroughs in hardware in the near future.

> A couple of very minor comments on the patch:
>
>>> +config CRYPTO_SM4_ARM64_SVE_CE_BLK
>>> + tristate "Ciphers: SM4, modes: ECB/CBC/CFB/CTR (ARMv9 cryptography acceleration with SVE2)"
>>> + depends on KERNEL_MODE_NEON
>>> + select CRYPTO_SKCIPHER
>>> + select CRYPTO_SM4
>>> + select CRYPTO_SM4_ARM64_CE_BLK
>>> + help
>
> Our current baseline binutils version requirement predates SVE support,
> so we'd either need to manually encode all SVE instructions used or add
> a suitable dependency. The dependency seems a lot more reasonable here,
> and we could require a new enough version to avoid the manual encoding
> that is done in the patch (though I've not checked how new a version
> that'd end up requiring; it might be unreasonable, so perhaps just
> depending on binutils having basic SVE support and continuing with the
> manual encoding might be more helpful).
>
>>> +.macro sm4e, vd, vn
>>> + .inst 0xcec08400 | (.L\vn << 5) | .L\vd
>>> +.endm
>
> For any manual encodings that do get left it'd be good to note the
> binutils and LLVM versions which support the instruction so we can
> hopefully at some point switch to assembling them normally.
>
>>> +static int __init sm4_sve_ce_init(void)
>>> +{
>>> + if (sm4_sve_get_vl() <= 16)
>>> + return -ENODEV;
>
> I'm not clear what this check is attempting to guard against - what's
> the issue with larger VLs?

Since I have no physical environment with a larger VL, this check is
based on my naive assumption that performance with a 256-bit VL should
theoretically be twice that of 128-bit. Because SVE needs to handle more
complex data shifting operations and CTR incrementing operations, I
assumed that SVE would only bring a performance improvement when VL is
greater than or equal to 256 bits, and that falling back to CE is the
more suitable choice otherwise.

Now it seems that this assumption itself is not valid, so I will drop
this patch for now.

>
> If it is needed then we already have a sve_get_vl() in the core kernel
> which we should probably be making available to modules rather than
> having them open code something (eg, making it a static inline rather
> than putting it in asm).

Yes, I agree, exporting sve_get_vl() to modules is the more appropriate
approach.
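
For example, if such a check were still needed, the init function could
use the core helper (which returns the VL in bytes) along these lines;
this is only an illustration of the idea:

static int __init sm4_sve_ce_init(void)
{
        /* Only worthwhile when VL is larger than one 128-bit block. */
        if (sve_get_vl() <= SM4_BLOCK_SIZE)
                return -ENODEV;

        return crypto_register_skciphers(sm4_algs, ARRAY_SIZE(sm4_algs));
}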

Best regards,
Tianjia

2022-09-27 04:33:55

by Tianjia Zhang

[permalink] [raw]
Subject: Re: [PATCH 16/16] crypto: arm64/sm4 - add ARMv9 SVE cryptography acceleration implementation

Hi Ard,

On 9/26/22 6:02 PM, Ard Biesheuvel wrote:
> (cc Mark Brown)
>
> Hello Tianjia,
>
> On Mon, 26 Sept 2022 at 11:37, Tianjia Zhang
> <[email protected]> wrote:
>>
>> Scalable Vector Extension (SVE) is the next-generation SIMD extension for
>> arm64. SVE allows flexible vector length implementations with a range of
>> possible values in CPU implementations. The vector length can vary from a
>> minimum of 128 bits up to a maximum of 2048 bits, at 128-bit increments.
>> The SVE design guarantees that the same application can run on different
>> implementations that support SVE, without the need to recompile the code.
>>
>> SVE was originally introduced by ARMv8, and ARMv9 introduced SVE2 to
>> expand and improve it. Similar to the Crypto Extension supported by the
>> NEON instruction set, SVE also supports similar instructions, called
>> cryptography acceleration instructions, but these are likewise an
>> optional instruction set.
>>
>> This patch uses SM4 cryptography acceleration instructions and SVE2
>> instructions to optimize the SM4 algorithm for ECB/CBC/CFB/CTR modes.
>> Since the encryption of CBC/CFB cannot be parallelized, the Crypto
>> Extension instruction is used.
>>
>
> Given that we currently do not support the use of SVE in kernel mode,
> this patch cannot be accepted at this time (but the rest of the series
> looks reasonable to me, although I have only skimmed over the patches)
>
> In view of the disappointing benchmark results below, I don't think
> this is worth the hassle at the moment. If we can find a case where
> using SVE in kernel mode truly makes a [favorable] difference, we can
> revisit this, but not without a thorough analysis of the impact it
> will have to support SVE in the kernel. Also, the fact that SVE may
> also cover cryptographic extensions does not necessarily imply that a
> micro-architecture will perform those crypto transformations in
> parallel and so the performance may be the same even if VL > 128.
>
> In summary, please drop this patch for now, and once there are more
> encouraging performance numbers, please resubmit it as part of a
> series that explicitly enables SVE in kernel mode on arm64, and
> documents the requirements and constraints.
>
> I have cc'ed Mark who has been working on the SVE support., who might
> have something to add here as well.
>
> Thanks,
> Ard.
>
>

Thanks for your reply. The current performance of SVE is really
unsatisfactory. One reason is that the SVE implementation needs to deal
with more complex data shifting operations, not only in CBC/CFB mode but
also in CTR mode, where more instructions are needed to complete the
128-bit counter increment; the CE implementation does not have these
complications.

In addition, I naively thought that when the VL is 256-bit, the
performance would simply double compared to 128-bit; at present, this is
not the case. It is probably not worth using SVE until there are
significantly better performance numbers, so I'll follow your advice and
drop this patch.

Best regards,
Tianjia