This series provides cryptographic implementations using the vector crypto
extensions [1], including:
1. AES cipher
2. AES with CBC/CTR/ECB/XTS block modes
3. ChaCha20 stream cipher
4. GHASH for GCM
5. SHA-224/256 and SHA-384/512 hash
6. SM3 hash
7. SM4 cipher
This patch set is based on Heiko Stuebner's work at:
Link: https://lore.kernel.org/all/[email protected]/
The implementations reuse the perl-asm scripts from OpenSSL [2], with some
changes adapting them to the kernel crypto framework.
The perl-asm scripts generate the RISC-V RVV 1.0 and the vector crypto 1.0
instructions into `.S` files.
All changes pass the kernel run-time crypto self-tests and the extra tests
on vector-crypto-enabled QEMU.
Link: https://lists.gnu.org/archive/html/qemu-devel/2023-11/msg00281.html
This series depends on:
1. kernel riscv/for-next (6.7-rc1)
Link: https://github.com/linux-riscv/linux-riscv/commit/f352a28cc2fb4ee8d08c6a6362c9a861fcc84236
2. the kernel-mode vector support series
Link: https://lore.kernel.org/all/[email protected]/
Here is a branch on GitHub with all dependent patches applied:
Link: https://github.com/JerryShih/linux/tree/dev/jerrys/vector-crypto-upstream-v4
And here is the previous v3 link:
Link: https://lore.kernel.org/all/[email protected]/
[1]
Link: https://github.com/riscv/riscv-crypto/blob/56ed7952d13eb5bdff92e2b522404668952f416d/doc/vector/riscv-crypto-spec-vector.adoc
[2]
Link: https://github.com/openssl/openssl/pull/21923
Updated patches (in current order): 4, 5, 6, 7, 8, 9, 10, 11
New patch: 3
Unchanged patches: 1, 2
Deleted patches: 3, 5 in v3
Changelog v4:
- Check the assembler capability before using the vector crypto asm
  mnemonics.
- Use asm mnemonics for the instructions in the vector crypto 1.0 extension.
- Revert the use of the simd skcipher interface for AES-CBC/CTR/ECB/XTS and
  ChaCha20.
Changelog v3:
- Use asm mnemonics for the instructions in the RVV 1.0 extension.
- Use `SYM_TYPED_FUNC_START` for indirect-call asm symbols.
- Update aes xts_crypt() implementation.
- Update crypto function names with a `riscv64` prefix/suffix or the
  specific extension names to avoid collisions with functions in `crypto/`
  or `lib/crypto/`.
Changelog v2:
- Do not turn on the RISC-V accelerated crypto kconfig options by
default.
- Assume the RISC-V vector extension supports unaligned access in the
  kernel.
- Switch to the simd skcipher interface for AES-CBC/CTR/ECB/XTS and
  ChaCha20.
- Rename crypto files and driver names to put the most important
  extension first.
Heiko Stuebner (2):
RISC-V: add helper function to read the vector VLEN
RISC-V: hook new crypto subdir into build-system
Jerry Shih (9):
RISC-V: add TOOLCHAIN_HAS_VECTOR_CRYPTO in kconfig
RISC-V: crypto: add Zvkned accelerated AES implementation
RISC-V: crypto: add accelerated AES-CBC/CTR/ECB/XTS implementations
RISC-V: crypto: add Zvkg accelerated GCM GHASH implementation
RISC-V: crypto: add Zvknha/b accelerated SHA224/256 implementations
RISC-V: crypto: add Zvknhb accelerated SHA384/512 implementations
RISC-V: crypto: add Zvksed accelerated SM4 implementation
RISC-V: crypto: add Zvksh accelerated SM3 implementation
RISC-V: crypto: add Zvkb accelerated ChaCha20 implementation
arch/riscv/Kbuild | 1 +
arch/riscv/Kconfig | 8 +
arch/riscv/crypto/Kconfig | 110 ++
arch/riscv/crypto/Makefile | 68 +
.../crypto/aes-riscv64-block-mode-glue.c | 459 +++++++
arch/riscv/crypto/aes-riscv64-glue.c | 137 ++
arch/riscv/crypto/aes-riscv64-glue.h | 18 +
.../crypto/aes-riscv64-zvkned-zvbb-zvkg.pl | 949 +++++++++++++
arch/riscv/crypto/aes-riscv64-zvkned-zvkb.pl | 415 ++++++
arch/riscv/crypto/aes-riscv64-zvkned.pl | 1199 +++++++++++++++++
arch/riscv/crypto/chacha-riscv64-glue.c | 109 ++
arch/riscv/crypto/chacha-riscv64-zvkb.pl | 321 +++++
arch/riscv/crypto/ghash-riscv64-glue.c | 175 +++
arch/riscv/crypto/ghash-riscv64-zvkg.pl | 100 ++
arch/riscv/crypto/sha256-riscv64-glue.c | 145 ++
.../sha256-riscv64-zvknha_or_zvknhb-zvkb.pl | 317 +++++
arch/riscv/crypto/sha512-riscv64-glue.c | 139 ++
.../crypto/sha512-riscv64-zvknhb-zvkb.pl | 265 ++++
arch/riscv/crypto/sm3-riscv64-glue.c | 124 ++
arch/riscv/crypto/sm3-riscv64-zvksh.pl | 227 ++++
arch/riscv/crypto/sm4-riscv64-glue.c | 121 ++
arch/riscv/crypto/sm4-riscv64-zvksed.pl | 268 ++++
arch/riscv/include/asm/vector.h | 11 +
crypto/Kconfig | 3 +
24 files changed, 5689 insertions(+)
create mode 100644 arch/riscv/crypto/Kconfig
create mode 100644 arch/riscv/crypto/Makefile
create mode 100644 arch/riscv/crypto/aes-riscv64-block-mode-glue.c
create mode 100644 arch/riscv/crypto/aes-riscv64-glue.c
create mode 100644 arch/riscv/crypto/aes-riscv64-glue.h
create mode 100644 arch/riscv/crypto/aes-riscv64-zvkned-zvbb-zvkg.pl
create mode 100644 arch/riscv/crypto/aes-riscv64-zvkned-zvkb.pl
create mode 100644 arch/riscv/crypto/aes-riscv64-zvkned.pl
create mode 100644 arch/riscv/crypto/chacha-riscv64-glue.c
create mode 100644 arch/riscv/crypto/chacha-riscv64-zvkb.pl
create mode 100644 arch/riscv/crypto/ghash-riscv64-glue.c
create mode 100644 arch/riscv/crypto/ghash-riscv64-zvkg.pl
create mode 100644 arch/riscv/crypto/sha256-riscv64-glue.c
create mode 100644 arch/riscv/crypto/sha256-riscv64-zvknha_or_zvknhb-zvkb.pl
create mode 100644 arch/riscv/crypto/sha512-riscv64-glue.c
create mode 100644 arch/riscv/crypto/sha512-riscv64-zvknhb-zvkb.pl
create mode 100644 arch/riscv/crypto/sm3-riscv64-glue.c
create mode 100644 arch/riscv/crypto/sm3-riscv64-zvksh.pl
create mode 100644 arch/riscv/crypto/sm4-riscv64-glue.c
create mode 100644 arch/riscv/crypto/sm4-riscv64-zvksed.pl
--
2.28.0
LLVM main and binutils master now both fully support v1.0 of the RISC-V
vector crypto extensions. Check the assembler capability before using the
vector crypto asm mnemonics in the kernel.
Co-developed-by: Eric Biggers <[email protected]>
Signed-off-by: Eric Biggers <[email protected]>
Signed-off-by: Jerry Shih <[email protected]>
---
arch/riscv/Kconfig | 8 ++++++++
1 file changed, 8 insertions(+)
diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
index 0a03d72706b5..8647392ece0b 100644
--- a/arch/riscv/Kconfig
+++ b/arch/riscv/Kconfig
@@ -636,6 +636,14 @@ config TOOLCHAIN_NEEDS_OLD_ISA_SPEC
versions of clang and GCC to be passed to GAS, which has the same result
as passing zicsr and zifencei to -march.
+# This option indicates that the toolchain supports all v1.0 vector crypto
+# extensions, including Zvk*, Zvbb, and Zvbc. LLVM added all of these at
+# once, while binutils added all except Zvkb first and then added Zvkb. So
+# we just check for Zvkb.
+config TOOLCHAIN_HAS_VECTOR_CRYPTO
+ def_bool $(as-instr, .option arch$(comma) +zvkb)
+ depends on AS_HAS_OPTION_ARCH
+
config FPU
bool "FPU support"
default y
--
2.28.0
Port the AES implementation using the Zvkned vector crypto extension from
OpenSSL (openssl/openssl#21923).
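
For reference, the glue code exports riscv64_aes_setkey_zvkned(),
riscv64_aes_encrypt_zvkned() and riscv64_aes_decrypt_zvkned() so that other
modules (such as the AES block-mode glue later in this series) can reuse the
single-block primitives. Below is a minimal, hypothetical consumer sketch;
the demo_encrypt_one_block() helper is illustrative only and is not part of
this patch:

	#include <crypto/aes.h>
	#include <linux/types.h>

	#include "aes-riscv64-glue.h"

	static int demo_encrypt_one_block(const u8 *key, unsigned int keylen,
					  u8 dst[AES_BLOCK_SIZE],
					  const u8 src[AES_BLOCK_SIZE])
	{
		struct crypto_aes_ctx ctx;
		int err;

		/* Generic software key expansion (handles AES-128/192/256). */
		err = riscv64_aes_setkey_zvkned(&ctx, key, keylen);
		if (err)
			return err;

		/*
		 * Uses the Zvkned instructions when the vector unit is usable
		 * in the current context, otherwise falls back to the generic
		 * aes_encrypt().
		 */
		riscv64_aes_encrypt_zvkned(&ctx, dst, src);
		return 0;
	}
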
Co-developed-by: Christoph Müllner <[email protected]>
Signed-off-by: Christoph Müllner <[email protected]>
Co-developed-by: Heiko Stuebner <[email protected]>
Signed-off-by: Heiko Stuebner <[email protected]>
Co-developed-by: Phoebe Chen <[email protected]>
Signed-off-by: Phoebe Chen <[email protected]>
Signed-off-by: Jerry Shih <[email protected]>
---
Changelog v4:
- Use asm mnemonics for the instructions in the vector crypto 1.0 extension.
Changelog v3:
- Rename aes_setkey() to aes_setkey_zvkned().
- Rename riscv64_aes_setkey() to riscv64_aes_setkey_zvkned().
- Use the generic AES software key expansion everywhere.
- Remove rv64i_zvkned_set_encrypt_key().
  We still need to provide the decryption key schedule for the SW fallback
  path, which cannot be generated directly with the Zvkned extension. So, we
  use the pure generic software key expansion everywhere to simplify the
  set_key flow.
- Use asm mnemonics for the instructions in the RVV 1.0 extension.
Changelog v2:
- Do not turn on the kconfig `AES_RISCV64` option by default.
- Switch to the `crypto_aes_ctx` structure for the AES key.
- Use the `Zvkned` extension for AES-128/256 key expansion.
- Export riscv64_aes_* symbols for other modules.
- Add the `asmlinkage` qualifier for crypto asm functions.
- Reorder the riscv64_aes_alg_zvkned structure member initialization to
  match the declaration order.
---
arch/riscv/crypto/Kconfig | 11 +
arch/riscv/crypto/Makefile | 11 +
arch/riscv/crypto/aes-riscv64-glue.c | 137 +++++++
arch/riscv/crypto/aes-riscv64-glue.h | 18 +
arch/riscv/crypto/aes-riscv64-zvkned.pl | 453 ++++++++++++++++++++++++
5 files changed, 630 insertions(+)
create mode 100644 arch/riscv/crypto/aes-riscv64-glue.c
create mode 100644 arch/riscv/crypto/aes-riscv64-glue.h
create mode 100644 arch/riscv/crypto/aes-riscv64-zvkned.pl
diff --git a/arch/riscv/crypto/Kconfig b/arch/riscv/crypto/Kconfig
index 10d60edc0110..2a7c365f2a86 100644
--- a/arch/riscv/crypto/Kconfig
+++ b/arch/riscv/crypto/Kconfig
@@ -2,4 +2,15 @@
menu "Accelerated Cryptographic Algorithms for CPU (riscv)"
+config CRYPTO_AES_RISCV64
+ tristate "Ciphers: AES"
+ depends on 64BIT && RISCV_ISA_V && TOOLCHAIN_HAS_VECTOR_CRYPTO
+ select CRYPTO_ALGAPI
+ select CRYPTO_LIB_AES
+ help
+ Block ciphers: AES cipher algorithms (FIPS-197)
+
+ Architecture: riscv64 using:
+ - Zvkned vector crypto extension
+
endmenu
diff --git a/arch/riscv/crypto/Makefile b/arch/riscv/crypto/Makefile
index b3b6332c9f6d..90ca91d8df26 100644
--- a/arch/riscv/crypto/Makefile
+++ b/arch/riscv/crypto/Makefile
@@ -2,3 +2,14 @@
#
# linux/arch/riscv/crypto/Makefile
#
+
+obj-$(CONFIG_CRYPTO_AES_RISCV64) += aes-riscv64.o
+aes-riscv64-y := aes-riscv64-glue.o aes-riscv64-zvkned.o
+
+quiet_cmd_perlasm = PERLASM $@
+ cmd_perlasm = $(PERL) $(<) void $(@)
+
+$(obj)/aes-riscv64-zvkned.S: $(src)/aes-riscv64-zvkned.pl
+ $(call cmd,perlasm)
+
+clean-files += aes-riscv64-zvkned.S
diff --git a/arch/riscv/crypto/aes-riscv64-glue.c b/arch/riscv/crypto/aes-riscv64-glue.c
new file mode 100644
index 000000000000..f29898c25652
--- /dev/null
+++ b/arch/riscv/crypto/aes-riscv64-glue.c
@@ -0,0 +1,137 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Port of the OpenSSL AES implementation for RISC-V
+ *
+ * Copyright (C) 2023 VRULL GmbH
+ * Author: Heiko Stuebner <[email protected]>
+ *
+ * Copyright (C) 2023 SiFive, Inc.
+ * Author: Jerry Shih <[email protected]>
+ */
+
+#include <asm/simd.h>
+#include <asm/vector.h>
+#include <crypto/aes.h>
+#include <crypto/internal/cipher.h>
+#include <crypto/internal/simd.h>
+#include <linux/crypto.h>
+#include <linux/linkage.h>
+#include <linux/module.h>
+#include <linux/types.h>
+
+#include "aes-riscv64-glue.h"
+
+/* aes cipher using zvkned vector crypto extension */
+asmlinkage void rv64i_zvkned_encrypt(const u8 *in, u8 *out,
+ const struct crypto_aes_ctx *key);
+asmlinkage void rv64i_zvkned_decrypt(const u8 *in, u8 *out,
+ const struct crypto_aes_ctx *key);
+
+int riscv64_aes_setkey_zvkned(struct crypto_aes_ctx *ctx, const u8 *key,
+ unsigned int keylen)
+{
+ int ret;
+
+ ret = aes_check_keylen(keylen);
+ if (ret < 0)
+ return -EINVAL;
+
+ /*
+ * The RISC-V AES vector crypto key expansion doesn't support AES-192.
+ * So, we use the generic software key expansion here for all cases.
+ */
+ return aes_expandkey(ctx, key, keylen);
+}
+EXPORT_SYMBOL(riscv64_aes_setkey_zvkned);
+
+void riscv64_aes_encrypt_zvkned(const struct crypto_aes_ctx *ctx, u8 *dst,
+ const u8 *src)
+{
+ if (crypto_simd_usable()) {
+ kernel_vector_begin();
+ rv64i_zvkned_encrypt(src, dst, ctx);
+ kernel_vector_end();
+ } else {
+ aes_encrypt(ctx, dst, src);
+ }
+}
+EXPORT_SYMBOL(riscv64_aes_encrypt_zvkned);
+
+void riscv64_aes_decrypt_zvkned(const struct crypto_aes_ctx *ctx, u8 *dst,
+ const u8 *src)
+{
+ if (crypto_simd_usable()) {
+ kernel_vector_begin();
+ rv64i_zvkned_decrypt(src, dst, ctx);
+ kernel_vector_end();
+ } else {
+ aes_decrypt(ctx, dst, src);
+ }
+}
+EXPORT_SYMBOL(riscv64_aes_decrypt_zvkned);
+
+static int aes_setkey_zvkned(struct crypto_tfm *tfm, const u8 *key,
+ unsigned int keylen)
+{
+ struct crypto_aes_ctx *ctx = crypto_tfm_ctx(tfm);
+
+ return riscv64_aes_setkey_zvkned(ctx, key, keylen);
+}
+
+static void aes_encrypt_zvkned(struct crypto_tfm *tfm, u8 *dst, const u8 *src)
+{
+ const struct crypto_aes_ctx *ctx = crypto_tfm_ctx(tfm);
+
+ riscv64_aes_encrypt_zvkned(ctx, dst, src);
+}
+
+static void aes_decrypt_zvkned(struct crypto_tfm *tfm, u8 *dst, const u8 *src)
+{
+ const struct crypto_aes_ctx *ctx = crypto_tfm_ctx(tfm);
+
+ riscv64_aes_decrypt_zvkned(ctx, dst, src);
+}
+
+static struct crypto_alg riscv64_aes_alg_zvkned = {
+ .cra_flags = CRYPTO_ALG_TYPE_CIPHER,
+ .cra_blocksize = AES_BLOCK_SIZE,
+ .cra_ctxsize = sizeof(struct crypto_aes_ctx),
+ .cra_priority = 300,
+ .cra_name = "aes",
+ .cra_driver_name = "aes-riscv64-zvkned",
+ .cra_cipher = {
+ .cia_min_keysize = AES_MIN_KEY_SIZE,
+ .cia_max_keysize = AES_MAX_KEY_SIZE,
+ .cia_setkey = aes_setkey_zvkned,
+ .cia_encrypt = aes_encrypt_zvkned,
+ .cia_decrypt = aes_decrypt_zvkned,
+ },
+ .cra_module = THIS_MODULE,
+};
+
+static inline bool check_aes_ext(void)
+{
+ return riscv_isa_extension_available(NULL, ZVKNED) &&
+ riscv_vector_vlen() >= 128;
+}
+
+static int __init riscv64_aes_mod_init(void)
+{
+ if (check_aes_ext())
+ return crypto_register_alg(&riscv64_aes_alg_zvkned);
+
+ return -ENODEV;
+}
+
+static void __exit riscv64_aes_mod_fini(void)
+{
+ crypto_unregister_alg(&riscv64_aes_alg_zvkned);
+}
+
+module_init(riscv64_aes_mod_init);
+module_exit(riscv64_aes_mod_fini);
+
+MODULE_DESCRIPTION("AES (RISC-V accelerated)");
+MODULE_AUTHOR("Heiko Stuebner <[email protected]>");
+MODULE_LICENSE("GPL");
+MODULE_ALIAS_CRYPTO("aes");
diff --git a/arch/riscv/crypto/aes-riscv64-glue.h b/arch/riscv/crypto/aes-riscv64-glue.h
new file mode 100644
index 000000000000..2b544125091e
--- /dev/null
+++ b/arch/riscv/crypto/aes-riscv64-glue.h
@@ -0,0 +1,18 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+#ifndef AES_RISCV64_GLUE_H
+#define AES_RISCV64_GLUE_H
+
+#include <crypto/aes.h>
+#include <linux/types.h>
+
+int riscv64_aes_setkey_zvkned(struct crypto_aes_ctx *ctx, const u8 *key,
+ unsigned int keylen);
+
+void riscv64_aes_encrypt_zvkned(const struct crypto_aes_ctx *ctx, u8 *dst,
+ const u8 *src);
+
+void riscv64_aes_decrypt_zvkned(const struct crypto_aes_ctx *ctx, u8 *dst,
+ const u8 *src);
+
+#endif /* AES_RISCV64_GLUE_H */
diff --git a/arch/riscv/crypto/aes-riscv64-zvkned.pl b/arch/riscv/crypto/aes-riscv64-zvkned.pl
new file mode 100644
index 000000000000..583e87912e5d
--- /dev/null
+++ b/arch/riscv/crypto/aes-riscv64-zvkned.pl
@@ -0,0 +1,453 @@
+#! /usr/bin/env perl
+# SPDX-License-Identifier: Apache-2.0 OR BSD-2-Clause
+#
+# This file is dual-licensed, meaning that you can use it under your
+# choice of either of the following two licenses:
+#
+# Copyright 2023 The OpenSSL Project Authors. All Rights Reserved.
+#
+# Licensed under the Apache License 2.0 (the "License"). You can obtain
+# a copy in the file LICENSE in the source distribution or at
+# https://www.openssl.org/source/license.html
+#
+# or
+#
+# Copyright (c) 2023, Christoph Müllner <[email protected]>
+# Copyright (c) 2023, Phoebe Chen <[email protected]>
+# Copyright (c) 2023, Jerry Shih <[email protected]>
+# All rights reserved.
+#
+# Redistribution and use in source and binary forms, with or without
+# modification, are permitted provided that the following conditions
+# are met:
+# 1. Redistributions of source code must retain the above copyright
+# notice, this list of conditions and the following disclaimer.
+# 2. Redistributions in binary form must reproduce the above copyright
+# notice, this list of conditions and the following disclaimer in the
+# documentation and/or other materials provided with the distribution.
+#
+# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+# - RV64I
+# - RISC-V Vector ('V') with VLEN >= 128
+# - RISC-V Vector AES block cipher extension ('Zvkned')
+
+use strict;
+use warnings;
+
+use FindBin qw($Bin);
+use lib "$Bin";
+use lib "$Bin/../../perlasm";
+
+# $output is the last argument if it looks like a file (it has an extension)
+# $flavour is the first argument if it doesn't look like a file
+my $output = $#ARGV >= 0 && $ARGV[$#ARGV] =~ m|\.\w+$| ? pop : undef;
+my $flavour = $#ARGV >= 0 && $ARGV[0] !~ m|\.| ? shift : undef;
+
+$output and open STDOUT,">$output";
+
+my $code=<<___;
+.text
+.option arch, +zvkned
+___
+
+my ($V0, $V1, $V2, $V3, $V4, $V5, $V6, $V7,
+ $V8, $V9, $V10, $V11, $V12, $V13, $V14, $V15,
+ $V16, $V17, $V18, $V19, $V20, $V21, $V22, $V23,
+ $V24, $V25, $V26, $V27, $V28, $V29, $V30, $V31,
+) = map("v$_",(0..31));
+
+{
+################################################################################
+# void rv64i_zvkned_encrypt(const unsigned char *in, unsigned char *out,
+# const AES_KEY *key);
+my ($INP, $OUTP, $KEYP) = ("a0", "a1", "a2");
+my ($T0) = ("t0");
+my ($KEY_LEN) = ("a3");
+
+$code .= <<___;
+.p2align 3
+.globl rv64i_zvkned_encrypt
+.type rv64i_zvkned_encrypt,\@function
+rv64i_zvkned_encrypt:
+ # Load key length.
+ lwu $KEY_LEN, 480($KEYP)
+
+ # Get proper routine for key length.
+ li $T0, 32
+ beq $KEY_LEN, $T0, L_enc_256
+ li $T0, 24
+ beq $KEY_LEN, $T0, L_enc_192
+ li $T0, 16
+ beq $KEY_LEN, $T0, L_enc_128
+
+ j L_fail_m2
+.size rv64i_zvkned_encrypt,.-rv64i_zvkned_encrypt
+___
+
+$code .= <<___;
+.p2align 3
+L_enc_128:
+ vsetivli zero, 4, e32, m1, ta, ma
+
+ vle32.v $V1, ($INP)
+
+ vle32.v $V10, ($KEYP)
+ vaesz.vs $V1, $V10 # with round key w[ 0, 3]
+ addi $KEYP, $KEYP, 16
+ vle32.v $V11, ($KEYP)
+ vaesem.vs $V1, $V11 # with round key w[ 4, 7]
+ addi $KEYP, $KEYP, 16
+ vle32.v $V12, ($KEYP)
+ vaesem.vs $V1, $V12 # with round key w[ 8,11]
+ addi $KEYP, $KEYP, 16
+ vle32.v $V13, ($KEYP)
+ vaesem.vs $V1, $V13 # with round key w[12,15]
+ addi $KEYP, $KEYP, 16
+ vle32.v $V14, ($KEYP)
+ vaesem.vs $V1, $V14 # with round key w[16,19]
+ addi $KEYP, $KEYP, 16
+ vle32.v $V15, ($KEYP)
+ vaesem.vs $V1, $V15 # with round key w[20,23]
+ addi $KEYP, $KEYP, 16
+ vle32.v $V16, ($KEYP)
+ vaesem.vs $V1, $V16 # with round key w[24,27]
+ addi $KEYP, $KEYP, 16
+ vle32.v $V17, ($KEYP)
+ vaesem.vs $V1, $V17 # with round key w[28,31]
+ addi $KEYP, $KEYP, 16
+ vle32.v $V18, ($KEYP)
+ vaesem.vs $V1, $V18 # with round key w[32,35]
+ addi $KEYP, $KEYP, 16
+ vle32.v $V19, ($KEYP)
+ vaesem.vs $V1, $V19 # with round key w[36,39]
+ addi $KEYP, $KEYP, 16
+ vle32.v $V20, ($KEYP)
+ vaesef.vs $V1, $V20 # with round key w[40,43]
+
+ vse32.v $V1, ($OUTP)
+
+ ret
+.size L_enc_128,.-L_enc_128
+___
+
+$code .= <<___;
+.p2align 3
+L_enc_192:
+ vsetivli zero, 4, e32, m1, ta, ma
+
+ vle32.v $V1, ($INP)
+
+ vle32.v $V10, ($KEYP)
+ vaesz.vs $V1, $V10
+ addi $KEYP, $KEYP, 16
+ vle32.v $V11, ($KEYP)
+ vaesem.vs $V1, $V11
+ addi $KEYP, $KEYP, 16
+ vle32.v $V12, ($KEYP)
+ vaesem.vs $V1, $V12
+ addi $KEYP, $KEYP, 16
+ vle32.v $V13, ($KEYP)
+ vaesem.vs $V1, $V13
+ addi $KEYP, $KEYP, 16
+ vle32.v $V14, ($KEYP)
+ vaesem.vs $V1, $V14
+ addi $KEYP, $KEYP, 16
+ vle32.v $V15, ($KEYP)
+ vaesem.vs $V1, $V15
+ addi $KEYP, $KEYP, 16
+ vle32.v $V16, ($KEYP)
+ vaesem.vs $V1, $V16
+ addi $KEYP, $KEYP, 16
+ vle32.v $V17, ($KEYP)
+ vaesem.vs $V1, $V17
+ addi $KEYP, $KEYP, 16
+ vle32.v $V18, ($KEYP)
+ vaesem.vs $V1, $V18
+ addi $KEYP, $KEYP, 16
+ vle32.v $V19, ($KEYP)
+ vaesem.vs $V1, $V19
+ addi $KEYP, $KEYP, 16
+ vle32.v $V20, ($KEYP)
+ vaesem.vs $V1, $V20
+ addi $KEYP, $KEYP, 16
+ vle32.v $V21, ($KEYP)
+ vaesem.vs $V1, $V21
+ addi $KEYP, $KEYP, 16
+ vle32.v $V22, ($KEYP)
+ vaesef.vs $V1, $V22
+
+ vse32.v $V1, ($OUTP)
+ ret
+.size L_enc_192,.-L_enc_192
+___
+
+$code .= <<___;
+.p2align 3
+L_enc_256:
+ vsetivli zero, 4, e32, m1, ta, ma
+
+ vle32.v $V1, ($INP)
+
+ vle32.v $V10, ($KEYP)
+ vaesz.vs $V1, $V10
+ addi $KEYP, $KEYP, 16
+ vle32.v $V11, ($KEYP)
+ vaesem.vs $V1, $V11
+ addi $KEYP, $KEYP, 16
+ vle32.v $V12, ($KEYP)
+ vaesem.vs $V1, $V12
+ addi $KEYP, $KEYP, 16
+ vle32.v $V13, ($KEYP)
+ vaesem.vs $V1, $V13
+ addi $KEYP, $KEYP, 16
+ vle32.v $V14, ($KEYP)
+ vaesem.vs $V1, $V14
+ addi $KEYP, $KEYP, 16
+ vle32.v $V15, ($KEYP)
+ vaesem.vs $V1, $V15
+ addi $KEYP, $KEYP, 16
+ vle32.v $V16, ($KEYP)
+ vaesem.vs $V1, $V16
+ addi $KEYP, $KEYP, 16
+ vle32.v $V17, ($KEYP)
+ vaesem.vs $V1, $V17
+ addi $KEYP, $KEYP, 16
+ vle32.v $V18, ($KEYP)
+ vaesem.vs $V1, $V18
+ addi $KEYP, $KEYP, 16
+ vle32.v $V19, ($KEYP)
+ vaesem.vs $V1, $V19
+ addi $KEYP, $KEYP, 16
+ vle32.v $V20, ($KEYP)
+ vaesem.vs $V1, $V20
+ addi $KEYP, $KEYP, 16
+ vle32.v $V21, ($KEYP)
+ vaesem.vs $V1, $V21
+ addi $KEYP, $KEYP, 16
+ vle32.v $V22, ($KEYP)
+ vaesem.vs $V1, $V22
+ addi $KEYP, $KEYP, 16
+ vle32.v $V23, ($KEYP)
+ vaesem.vs $V1, $V23
+ addi $KEYP, $KEYP, 16
+ vle32.v $V24, ($KEYP)
+ vaesef.vs $V1, $V24
+
+ vse32.v $V1, ($OUTP)
+ ret
+.size L_enc_256,.-L_enc_256
+___
+
+################################################################################
+# void rv64i_zvkned_decrypt(const unsigned char *in, unsigned char *out,
+# const AES_KEY *key);
+$code .= <<___;
+.p2align 3
+.globl rv64i_zvkned_decrypt
+.type rv64i_zvkned_decrypt,\@function
+rv64i_zvkned_decrypt:
+ # Load key length.
+ lwu $KEY_LEN, 480($KEYP)
+
+ # Get proper routine for key length.
+ li $T0, 32
+ beq $KEY_LEN, $T0, L_dec_256
+ li $T0, 24
+ beq $KEY_LEN, $T0, L_dec_192
+ li $T0, 16
+ beq $KEY_LEN, $T0, L_dec_128
+
+ j L_fail_m2
+.size rv64i_zvkned_decrypt,.-rv64i_zvkned_decrypt
+___
+
+$code .= <<___;
+.p2align 3
+L_dec_128:
+ vsetivli zero, 4, e32, m1, ta, ma
+
+ vle32.v $V1, ($INP)
+
+ addi $KEYP, $KEYP, 160
+ vle32.v $V20, ($KEYP)
+ vaesz.vs $V1, $V20 # with round key w[40,43]
+ addi $KEYP, $KEYP, -16
+ vle32.v $V19, ($KEYP)
+ vaesdm.vs $V1, $V19 # with round key w[36,39]
+ addi $KEYP, $KEYP, -16
+ vle32.v $V18, ($KEYP)
+ vaesdm.vs $V1, $V18 # with round key w[32,35]
+ addi $KEYP, $KEYP, -16
+ vle32.v $V17, ($KEYP)
+ vaesdm.vs $V1, $V17 # with round key w[28,31]
+ addi $KEYP, $KEYP, -16
+ vle32.v $V16, ($KEYP)
+ vaesdm.vs $V1, $V16 # with round key w[24,27]
+ addi $KEYP, $KEYP, -16
+ vle32.v $V15, ($KEYP)
+ vaesdm.vs $V1, $V15 # with round key w[20,23]
+ addi $KEYP, $KEYP, -16
+ vle32.v $V14, ($KEYP)
+ vaesdm.vs $V1, $V14 # with round key w[16,19]
+ addi $KEYP, $KEYP, -16
+ vle32.v $V13, ($KEYP)
+ vaesdm.vs $V1, $V13 # with round key w[12,15]
+ addi $KEYP, $KEYP, -16
+ vle32.v $V12, ($KEYP)
+ vaesdm.vs $V1, $V12 # with round key w[ 8,11]
+ addi $KEYP, $KEYP, -16
+ vle32.v $V11, ($KEYP)
+ vaesdm.vs $V1, $V11 # with round key w[ 4, 7]
+ addi $KEYP, $KEYP, -16
+ vle32.v $V10, ($KEYP)
+ vaesdf.vs $V1, $V10 # with round key w[ 0, 3]
+
+ vse32.v $V1, ($OUTP)
+
+ ret
+.size L_dec_128,.-L_dec_128
+___
+
+$code .= <<___;
+.p2align 3
+L_dec_192:
+ vsetivli zero, 4, e32, m1, ta, ma
+
+ vle32.v $V1, ($INP)
+
+ addi $KEYP, $KEYP, 192
+ vle32.v $V22, ($KEYP)
+ vaesz.vs $V1, $V22 # with round key w[48,51]
+ addi $KEYP, $KEYP, -16
+ vle32.v $V21, ($KEYP)
+ vaesdm.vs $V1, $V21 # with round key w[44,47]
+ addi $KEYP, $KEYP, -16
+ vle32.v $V20, ($KEYP)
+ vaesdm.vs $V1, $V20 # with round key w[40,43]
+ addi $KEYP, $KEYP, -16
+ vle32.v $V19, ($KEYP)
+ vaesdm.vs $V1, $V19 # with round key w[36,39]
+ addi $KEYP, $KEYP, -16
+ vle32.v $V18, ($KEYP)
+ vaesdm.vs $V1, $V18 # with round key w[32,35]
+ addi $KEYP, $KEYP, -16
+ vle32.v $V17, ($KEYP)
+ vaesdm.vs $V1, $V17 # with round key w[28,31]
+ addi $KEYP, $KEYP, -16
+ vle32.v $V16, ($KEYP)
+ vaesdm.vs $V1, $V16 # with round key w[24,27]
+ addi $KEYP, $KEYP, -16
+ vle32.v $V15, ($KEYP)
+ vaesdm.vs $V1, $V15 # with round key w[20,23]
+ addi $KEYP, $KEYP, -16
+ vle32.v $V14, ($KEYP)
+ vaesdm.vs $V1, $V14 # with round key w[16,19]
+ addi $KEYP, $KEYP, -16
+ vle32.v $V13, ($KEYP)
+ vaesdm.vs $V1, $V13 # with round key w[12,15]
+ addi $KEYP, $KEYP, -16
+ vle32.v $V12, ($KEYP)
+ vaesdm.vs $V1, $V12 # with round key w[ 8,11]
+ addi $KEYP, $KEYP, -16
+ vle32.v $V11, ($KEYP)
+ vaesdm.vs $V1, $V11 # with round key w[ 4, 7]
+ addi $KEYP, $KEYP, -16
+ vle32.v $V10, ($KEYP)
+ vaesdf.vs $V1, $V10 # with round key w[ 0, 3]
+
+ vse32.v $V1, ($OUTP)
+
+ ret
+.size L_dec_192,.-L_dec_192
+___
+
+$code .= <<___;
+.p2align 3
+L_dec_256:
+ vsetivli zero, 4, e32, m1, ta, ma
+
+ vle32.v $V1, ($INP)
+
+ addi $KEYP, $KEYP, 224
+ vle32.v $V24, ($KEYP)
+ vaesz.vs $V1, $V24 # with round key w[56,59]
+ addi $KEYP, $KEYP, -16
+ vle32.v $V23, ($KEYP)
+ vaesdm.vs $V1, $V23 # with round key w[52,55]
+ addi $KEYP, $KEYP, -16
+ vle32.v $V22, ($KEYP)
+ vaesdm.vs $V1, $V22 # with round key w[48,51]
+ addi $KEYP, $KEYP, -16
+ vle32.v $V21, ($KEYP)
+ vaesdm.vs $V1, $V21 # with round key w[44,47]
+ addi $KEYP, $KEYP, -16
+ vle32.v $V20, ($KEYP)
+ vaesdm.vs $V1, $V20 # with round key w[40,43]
+ addi $KEYP, $KEYP, -16
+ vle32.v $V19, ($KEYP)
+ vaesdm.vs $V1, $V19 # with round key w[36,39]
+ addi $KEYP, $KEYP, -16
+ vle32.v $V18, ($KEYP)
+ vaesdm.vs $V1, $V18 # with round key w[32,35]
+ addi $KEYP, $KEYP, -16
+ vle32.v $V17, ($KEYP)
+ vaesdm.vs $V1, $V17 # with round key w[28,31]
+ addi $KEYP, $KEYP, -16
+ vle32.v $V16, ($KEYP)
+ vaesdm.vs $V1, $V16 # with round key w[24,27]
+ addi $KEYP, $KEYP, -16
+ vle32.v $V15, ($KEYP)
+ vaesdm.vs $V1, $V15 # with round key w[20,23]
+ addi $KEYP, $KEYP, -16
+ vle32.v $V14, ($KEYP)
+ vaesdm.vs $V1, $V14 # with round key w[16,19]
+ addi $KEYP, $KEYP, -16
+ vle32.v $V13, ($KEYP)
+ vaesdm.vs $V1, $V13 # with round key w[12,15]
+ addi $KEYP, $KEYP, -16
+ vle32.v $V12, ($KEYP)
+ vaesdm.vs $V1, $V12 # with round key w[ 8,11]
+ addi $KEYP, $KEYP, -16
+ vle32.v $V11, ($KEYP)
+ vaesdm.vs $V1, $V11 # with round key w[ 4, 7]
+ addi $KEYP, $KEYP, -16
+ vle32.v $V10, ($KEYP)
+ vaesdf.vs $V1, $V10 # with round key w[ 0, 3]
+
+ vse32.v $V1, ($OUTP)
+
+ ret
+.size L_dec_256,.-L_dec_256
+___
+}
+
+$code .= <<___;
+L_fail_m1:
+ li a0, -1
+ ret
+.size L_fail_m1,.-L_fail_m1
+
+L_fail_m2:
+ li a0, -2
+ ret
+.size L_fail_m2,.-L_fail_m2
+
+L_end:
+ ret
+.size L_end,.-L_end
+___
+
+print $code;
+
+close STDOUT or die "error closing STDOUT: $!";
--
2.28.0
Port the vector-crypto-accelerated CBC, CTR, ECB and XTS block modes for
the AES cipher from OpenSSL (openssl/openssl#21923).
In addition, support the XTS-AES-192 mode, which does not exist in OpenSSL.
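
For reference, here is a minimal, hypothetical sketch of driving the
registered skciphers through the generic kernel crypto API. The
demo_xts_encrypt() helper and its buffer handling are illustrative
assumptions only (not part of this patch); the same pattern applies to the
"ecb(aes)", "cbc(aes)" and "ctr(aes)" algorithms:

	#include <crypto/skcipher.h>
	#include <linux/crypto.h>
	#include <linux/err.h>
	#include <linux/scatterlist.h>

	/* buf must be scatterlist-safe (e.g. kmalloc'ed), len >= 16 for XTS. */
	static int demo_xts_encrypt(const u8 *key, unsigned int keylen,
				    u8 *buf, unsigned int len, u8 iv[16])
	{
		struct crypto_skcipher *tfm;
		struct skcipher_request *req;
		struct scatterlist sg;
		DECLARE_CRYPTO_WAIT(wait);
		int err;

		/* Picks the highest-priority "xts(aes)" implementation. */
		tfm = crypto_alloc_skcipher("xts(aes)", 0, 0);
		if (IS_ERR(tfm))
			return PTR_ERR(tfm);

		/* XTS takes two concatenated AES keys (32, 48 or 64 bytes). */
		err = crypto_skcipher_setkey(tfm, key, keylen);
		if (err)
			goto out_free_tfm;

		req = skcipher_request_alloc(tfm, GFP_KERNEL);
		if (!req) {
			err = -ENOMEM;
			goto out_free_tfm;
		}

		/* Encrypt the buffer in place and wait for completion. */
		sg_init_one(&sg, buf, len);
		skcipher_request_set_callback(req, CRYPTO_TFM_REQ_MAY_SLEEP,
					      crypto_req_done, &wait);
		skcipher_request_set_crypt(req, &sg, &sg, len, iv);
		err = crypto_wait_req(crypto_skcipher_encrypt(req), &wait);

		skcipher_request_free(req);
	out_free_tfm:
		crypto_free_skcipher(tfm);
		return err;
	}
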
Co-developed-by: Phoebe Chen <[email protected]>
Signed-off-by: Phoebe Chen <[email protected]>
Signed-off-by: Jerry Shih <[email protected]>
---
Changelog v4:
- Use asm mnemonics for the instructions in the vector crypto 1.0 extension.
- Revert the use of the simd skcipher interface.
- Get `walksize` from `crypto_skcipher_alg()`.
Changelog v3:
- Update extension checking conditions in riscv64_aes_block_mod_init().
- Add a `riscv64` prefix to all setkey, encrypt and decrypt functions.
- Update the xts_crypt() implementation.
  Use a similar approach to the x86 aes-xts implementation.
- Use asm mnemonics for the instructions in the RVV 1.0 extension.
Changelog v2:
- Do not turn on the kconfig `AES_BLOCK_RISCV64` option by default.
- Update the asm functions to take the AES key in the `crypto_aes_ctx`
  structure.
- Switch to the simd skcipher interface for the AES-CBC/CTR/ECB/XTS modes.
  The kernel-vector implementation is still under discussion. Until the
  final version of kernel-vector lands, use the simd skcipher interface to
  skip the fallback path for all AES modes in all kinds of contexts. If
  kernel-vector can always be enabled in softirq in the future, we could
  bring the original sync skcipher algorithms back.
- Refine the aes-xts comments for head and tail block handling.
- Update the VLEN constraint for the aes-xts mode.
- Add the `asmlinkage` qualifier for crypto asm functions.
- Rename aes-riscv64-zvbb-zvkg-zvkned to aes-riscv64-zvkned-zvbb-zvkg.
- Rename aes-riscv64-zvkb-zvkned to aes-riscv64-zvkned-zvkb.
- Reorder the riscv64_aes_algs_zvkned, riscv64_aes_alg_zvkned_zvkb and
  riscv64_aes_alg_zvkned_zvbb_zvkg structure member initialization to match
  the declaration order.
---
arch/riscv/crypto/Kconfig | 21 +
arch/riscv/crypto/Makefile | 11 +
.../crypto/aes-riscv64-block-mode-glue.c | 459 +++++++++
.../crypto/aes-riscv64-zvkned-zvbb-zvkg.pl | 949 ++++++++++++++++++
arch/riscv/crypto/aes-riscv64-zvkned-zvkb.pl | 415 ++++++++
arch/riscv/crypto/aes-riscv64-zvkned.pl | 746 ++++++++++++++
6 files changed, 2601 insertions(+)
create mode 100644 arch/riscv/crypto/aes-riscv64-block-mode-glue.c
create mode 100644 arch/riscv/crypto/aes-riscv64-zvkned-zvbb-zvkg.pl
create mode 100644 arch/riscv/crypto/aes-riscv64-zvkned-zvkb.pl
diff --git a/arch/riscv/crypto/Kconfig b/arch/riscv/crypto/Kconfig
index 2a7c365f2a86..2cee0f68f0c7 100644
--- a/arch/riscv/crypto/Kconfig
+++ b/arch/riscv/crypto/Kconfig
@@ -13,4 +13,25 @@ config CRYPTO_AES_RISCV64
Architecture: riscv64 using:
- Zvkned vector crypto extension
+config CRYPTO_AES_BLOCK_RISCV64
+ tristate "Ciphers: AES, modes: ECB/CBC/CTR/XTS"
+ depends on 64BIT && RISCV_ISA_V && TOOLCHAIN_HAS_VECTOR_CRYPTO
+ select CRYPTO_AES_RISCV64
+ select CRYPTO_SIMD
+ select CRYPTO_SKCIPHER
+ help
+ Length-preserving ciphers: AES cipher algorithms (FIPS-197)
+ with block cipher modes:
+ - ECB (Electronic Codebook) mode (NIST SP 800-38A)
+ - CBC (Cipher Block Chaining) mode (NIST SP 800-38A)
+ - CTR (Counter) mode (NIST SP 800-38A)
+ - XTS (XOR Encrypt XOR Tweakable Block Cipher with Ciphertext
+ Stealing) mode (NIST SP 800-38E and IEEE 1619)
+
+ Architecture: riscv64 using:
+ - Zvkned vector crypto extension
+ - Zvbb vector extension (XTS)
+ - Zvkb vector crypto extension (CTR/XTS)
+ - Zvkg vector crypto extension (XTS)
+
endmenu
diff --git a/arch/riscv/crypto/Makefile b/arch/riscv/crypto/Makefile
index 90ca91d8df26..9574b009762f 100644
--- a/arch/riscv/crypto/Makefile
+++ b/arch/riscv/crypto/Makefile
@@ -6,10 +6,21 @@
obj-$(CONFIG_CRYPTO_AES_RISCV64) += aes-riscv64.o
aes-riscv64-y := aes-riscv64-glue.o aes-riscv64-zvkned.o
+obj-$(CONFIG_CRYPTO_AES_BLOCK_RISCV64) += aes-block-riscv64.o
+aes-block-riscv64-y := aes-riscv64-block-mode-glue.o aes-riscv64-zvkned-zvbb-zvkg.o aes-riscv64-zvkned-zvkb.o
+
quiet_cmd_perlasm = PERLASM $@
cmd_perlasm = $(PERL) $(<) void $(@)
$(obj)/aes-riscv64-zvkned.S: $(src)/aes-riscv64-zvkned.pl
$(call cmd,perlasm)
+$(obj)/aes-riscv64-zvkned-zvbb-zvkg.S: $(src)/aes-riscv64-zvkned-zvbb-zvkg.pl
+ $(call cmd,perlasm)
+
+$(obj)/aes-riscv64-zvkned-zvkb.S: $(src)/aes-riscv64-zvkned-zvkb.pl
+ $(call cmd,perlasm)
+
clean-files += aes-riscv64-zvkned.S
+clean-files += aes-riscv64-zvkned-zvbb-zvkg.S
+clean-files += aes-riscv64-zvkned-zvkb.S
diff --git a/arch/riscv/crypto/aes-riscv64-block-mode-glue.c b/arch/riscv/crypto/aes-riscv64-block-mode-glue.c
new file mode 100644
index 000000000000..929c9948468a
--- /dev/null
+++ b/arch/riscv/crypto/aes-riscv64-block-mode-glue.c
@@ -0,0 +1,459 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Port of the OpenSSL AES block mode implementations for RISC-V
+ *
+ * Copyright (C) 2023 SiFive, Inc.
+ * Author: Jerry Shih <[email protected]>
+ */
+
+#include <asm/vector.h>
+#include <crypto/aes.h>
+#include <crypto/ctr.h>
+#include <crypto/xts.h>
+#include <crypto/internal/cipher.h>
+#include <crypto/internal/skcipher.h>
+#include <crypto/scatterwalk.h>
+#include <linux/crypto.h>
+#include <linux/linkage.h>
+#include <linux/math.h>
+#include <linux/minmax.h>
+#include <linux/module.h>
+#include <linux/types.h>
+
+#include "aes-riscv64-glue.h"
+
+struct riscv64_aes_xts_ctx {
+ struct crypto_aes_ctx ctx1;
+ struct crypto_aes_ctx ctx2;
+};
+
+/* aes cbc block mode using zvkned vector crypto extension */
+asmlinkage void rv64i_zvkned_cbc_encrypt(const u8 *in, u8 *out, size_t length,
+ const struct crypto_aes_ctx *key,
+ u8 *ivec);
+asmlinkage void rv64i_zvkned_cbc_decrypt(const u8 *in, u8 *out, size_t length,
+ const struct crypto_aes_ctx *key,
+ u8 *ivec);
+/* aes ecb block mode using zvkned vector crypto extension */
+asmlinkage void rv64i_zvkned_ecb_encrypt(const u8 *in, u8 *out, size_t length,
+ const struct crypto_aes_ctx *key);
+asmlinkage void rv64i_zvkned_ecb_decrypt(const u8 *in, u8 *out, size_t length,
+ const struct crypto_aes_ctx *key);
+
+/* aes ctr block mode using zvkb and zvkned vector crypto extension */
+/* This func operates on 32-bit counter. Caller has to handle the overflow. */
+asmlinkage void
+rv64i_zvkb_zvkned_ctr32_encrypt_blocks(const u8 *in, u8 *out, size_t length,
+ const struct crypto_aes_ctx *key,
+ u8 *ivec);
+
+/* aes xts block mode using zvbb, zvkg and zvkned vector crypto extension */
+asmlinkage void
+rv64i_zvbb_zvkg_zvkned_aes_xts_encrypt(const u8 *in, u8 *out, size_t length,
+ const struct crypto_aes_ctx *key, u8 *iv,
+ int update_iv);
+asmlinkage void
+rv64i_zvbb_zvkg_zvkned_aes_xts_decrypt(const u8 *in, u8 *out, size_t length,
+ const struct crypto_aes_ctx *key, u8 *iv,
+ int update_iv);
+
+/* ecb */
+static int riscv64_aes_setkey(struct crypto_skcipher *tfm, const u8 *in_key,
+ unsigned int key_len)
+{
+ struct crypto_aes_ctx *ctx = crypto_skcipher_ctx(tfm);
+
+ return riscv64_aes_setkey_zvkned(ctx, in_key, key_len);
+}
+
+static int riscv64_ecb_encrypt(struct skcipher_request *req)
+{
+ struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
+ const struct crypto_aes_ctx *ctx = crypto_skcipher_ctx(tfm);
+ struct skcipher_walk walk;
+ unsigned int nbytes;
+ int err;
+
+ /* If an error occurs here, `nbytes` will be zero. */
+ err = skcipher_walk_virt(&walk, req, false);
+ while ((nbytes = walk.nbytes)) {
+ kernel_vector_begin();
+ rv64i_zvkned_ecb_encrypt(walk.src.virt.addr, walk.dst.virt.addr,
+ nbytes & ~(AES_BLOCK_SIZE - 1), ctx);
+ kernel_vector_end();
+ err = skcipher_walk_done(&walk, nbytes & (AES_BLOCK_SIZE - 1));
+ }
+
+ return err;
+}
+
+static int riscv64_ecb_decrypt(struct skcipher_request *req)
+{
+ struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
+ const struct crypto_aes_ctx *ctx = crypto_skcipher_ctx(tfm);
+ struct skcipher_walk walk;
+ unsigned int nbytes;
+ int err;
+
+ err = skcipher_walk_virt(&walk, req, false);
+ while ((nbytes = walk.nbytes)) {
+ kernel_vector_begin();
+ rv64i_zvkned_ecb_decrypt(walk.src.virt.addr, walk.dst.virt.addr,
+ nbytes & ~(AES_BLOCK_SIZE - 1), ctx);
+ kernel_vector_end();
+ err = skcipher_walk_done(&walk, nbytes & (AES_BLOCK_SIZE - 1));
+ }
+
+ return err;
+}
+
+/* cbc */
+static int riscv64_cbc_encrypt(struct skcipher_request *req)
+{
+ struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
+ const struct crypto_aes_ctx *ctx = crypto_skcipher_ctx(tfm);
+ struct skcipher_walk walk;
+ unsigned int nbytes;
+ int err;
+
+ err = skcipher_walk_virt(&walk, req, false);
+ while ((nbytes = walk.nbytes)) {
+ kernel_vector_begin();
+ rv64i_zvkned_cbc_encrypt(walk.src.virt.addr, walk.dst.virt.addr,
+ nbytes & ~(AES_BLOCK_SIZE - 1), ctx,
+ walk.iv);
+ kernel_vector_end();
+ err = skcipher_walk_done(&walk, nbytes & (AES_BLOCK_SIZE - 1));
+ }
+
+ return err;
+}
+
+static int riscv64_cbc_decrypt(struct skcipher_request *req)
+{
+ struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
+ const struct crypto_aes_ctx *ctx = crypto_skcipher_ctx(tfm);
+ struct skcipher_walk walk;
+ unsigned int nbytes;
+ int err;
+
+ err = skcipher_walk_virt(&walk, req, false);
+ while ((nbytes = walk.nbytes)) {
+ kernel_vector_begin();
+ rv64i_zvkned_cbc_decrypt(walk.src.virt.addr, walk.dst.virt.addr,
+ nbytes & ~(AES_BLOCK_SIZE - 1), ctx,
+ walk.iv);
+ kernel_vector_end();
+ err = skcipher_walk_done(&walk, nbytes & (AES_BLOCK_SIZE - 1));
+ }
+
+ return err;
+}
+
+/* ctr */
+static int riscv64_ctr_encrypt(struct skcipher_request *req)
+{
+ struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
+ const struct crypto_aes_ctx *ctx = crypto_skcipher_ctx(tfm);
+ struct skcipher_walk walk;
+ unsigned int ctr32;
+ unsigned int nbytes;
+ unsigned int blocks;
+ unsigned int current_blocks;
+ unsigned int current_length;
+ int err;
+
+ /* the ctr iv uses big endian */
+ ctr32 = get_unaligned_be32(req->iv + 12);
+ err = skcipher_walk_virt(&walk, req, false);
+ while ((nbytes = walk.nbytes)) {
+ if (nbytes != walk.total) {
+ nbytes &= ~(AES_BLOCK_SIZE - 1);
+ blocks = nbytes / AES_BLOCK_SIZE;
+ } else {
+ /* This is the last walk. We should handle the tail data. */
+ blocks = DIV_ROUND_UP(nbytes, AES_BLOCK_SIZE);
+ }
+ ctr32 += blocks;
+
+ kernel_vector_begin();
+ /*
+ * The `if` block below detects the overflow, which is then handled by
+ * limiting the number of blocks to the exact overflow point.
+ */
+ if (ctr32 >= blocks) {
+ rv64i_zvkb_zvkned_ctr32_encrypt_blocks(
+ walk.src.virt.addr, walk.dst.virt.addr, nbytes,
+ ctx, req->iv);
+ } else {
+ /* use 2 ctr32 function calls for overflow case */
+ current_blocks = blocks - ctr32;
+ current_length =
+ min(nbytes, current_blocks * AES_BLOCK_SIZE);
+ rv64i_zvkb_zvkned_ctr32_encrypt_blocks(
+ walk.src.virt.addr, walk.dst.virt.addr,
+ current_length, ctx, req->iv);
+ crypto_inc(req->iv, 12);
+
+ if (ctr32) {
+ rv64i_zvkb_zvkned_ctr32_encrypt_blocks(
+ walk.src.virt.addr +
+ current_blocks * AES_BLOCK_SIZE,
+ walk.dst.virt.addr +
+ current_blocks * AES_BLOCK_SIZE,
+ nbytes - current_length, ctx, req->iv);
+ }
+ }
+ kernel_vector_end();
+
+ err = skcipher_walk_done(&walk, walk.nbytes - nbytes);
+ }
+
+ return err;
+}
+
+/* xts */
+static int riscv64_xts_setkey(struct crypto_skcipher *tfm, const u8 *in_key,
+ unsigned int key_len)
+{
+ struct riscv64_aes_xts_ctx *ctx = crypto_skcipher_ctx(tfm);
+ unsigned int xts_single_key_len = key_len / 2;
+ int ret;
+
+ ret = xts_verify_key(tfm, in_key, key_len);
+ if (ret)
+ return ret;
+ ret = riscv64_aes_setkey_zvkned(&ctx->ctx1, in_key, xts_single_key_len);
+ if (ret)
+ return ret;
+ return riscv64_aes_setkey_zvkned(
+ &ctx->ctx2, in_key + xts_single_key_len, xts_single_key_len);
+}
+
+static int xts_crypt(struct skcipher_request *req, bool encrypt)
+{
+ struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
+ const struct riscv64_aes_xts_ctx *ctx = crypto_skcipher_ctx(tfm);
+ struct skcipher_request sub_req;
+ struct scatterlist sg_src[2], sg_dst[2];
+ struct scatterlist *src, *dst;
+ struct skcipher_walk walk;
+ unsigned int walk_size = crypto_skcipher_alg(tfm)->walksize;
+ unsigned int tail = req->cryptlen & (AES_BLOCK_SIZE - 1);
+ unsigned int nbytes;
+ unsigned int update_iv = 1;
+ int err;
+
+ /* The XTS input size must be at least AES_BLOCK_SIZE. */
+ if (req->cryptlen < AES_BLOCK_SIZE)
+ return -EINVAL;
+
+ riscv64_aes_encrypt_zvkned(&ctx->ctx2, req->iv, req->iv);
+
+ if (unlikely(tail > 0 && req->cryptlen > walk_size)) {
+ /*
+ * Find the largest tail size which is smaller than the `walk` size while
+ * the non-ciphertext-stealing parts still fit the AES block boundary.
+ */
+ tail = walk_size + tail - AES_BLOCK_SIZE;
+
+ skcipher_request_set_tfm(&sub_req, tfm);
+ skcipher_request_set_callback(
+ &sub_req, skcipher_request_flags(req), NULL, NULL);
+ skcipher_request_set_crypt(&sub_req, req->src, req->dst,
+ req->cryptlen - tail, req->iv);
+ req = &sub_req;
+ } else {
+ tail = 0;
+ }
+
+ err = skcipher_walk_virt(&walk, req, false);
+ if (!walk.nbytes)
+ return err;
+
+ while ((nbytes = walk.nbytes)) {
+ if (nbytes < walk.total)
+ nbytes &= ~(AES_BLOCK_SIZE - 1);
+ else
+ update_iv = (tail > 0);
+
+ kernel_vector_begin();
+ if (encrypt)
+ rv64i_zvbb_zvkg_zvkned_aes_xts_encrypt(
+ walk.src.virt.addr, walk.dst.virt.addr, nbytes,
+ &ctx->ctx1, req->iv, update_iv);
+ else
+ rv64i_zvbb_zvkg_zvkned_aes_xts_decrypt(
+ walk.src.virt.addr, walk.dst.virt.addr, nbytes,
+ &ctx->ctx1, req->iv, update_iv);
+ kernel_vector_end();
+
+ err = skcipher_walk_done(&walk, walk.nbytes - nbytes);
+ }
+
+ if (unlikely(tail > 0 && !err)) {
+ dst = src = scatterwalk_ffwd(sg_src, req->src, req->cryptlen);
+ if (req->dst != req->src)
+ dst = scatterwalk_ffwd(sg_dst, req->dst, req->cryptlen);
+
+ skcipher_request_set_crypt(req, src, dst, tail, req->iv);
+
+ err = skcipher_walk_virt(&walk, req, false);
+ if (err)
+ return err;
+
+ kernel_vector_begin();
+ if (encrypt)
+ rv64i_zvbb_zvkg_zvkned_aes_xts_encrypt(
+ walk.src.virt.addr, walk.dst.virt.addr,
+ walk.nbytes, &ctx->ctx1, req->iv, 0);
+ else
+ rv64i_zvbb_zvkg_zvkned_aes_xts_decrypt(
+ walk.src.virt.addr, walk.dst.virt.addr,
+ walk.nbytes, &ctx->ctx1, req->iv, 0);
+ kernel_vector_end();
+
+ err = skcipher_walk_done(&walk, 0);
+ }
+
+ return err;
+}
+
+static int riscv64_xts_encrypt(struct skcipher_request *req)
+{
+ return xts_crypt(req, true);
+}
+
+static int riscv64_xts_decrypt(struct skcipher_request *req)
+{
+ return xts_crypt(req, false);
+}
+
+static struct skcipher_alg riscv64_aes_algs_zvkned[] = {
+ {
+ .setkey = riscv64_aes_setkey,
+ .encrypt = riscv64_ecb_encrypt,
+ .decrypt = riscv64_ecb_decrypt,
+ .min_keysize = AES_MIN_KEY_SIZE,
+ .max_keysize = AES_MAX_KEY_SIZE,
+ .walksize = AES_BLOCK_SIZE * 8,
+ .base = {
+ .cra_blocksize = AES_BLOCK_SIZE,
+ .cra_ctxsize = sizeof(struct crypto_aes_ctx),
+ .cra_priority = 300,
+ .cra_name = "ecb(aes)",
+ .cra_driver_name = "ecb-aes-riscv64-zvkned",
+ .cra_module = THIS_MODULE,
+ },
+ }, {
+ .setkey = riscv64_aes_setkey,
+ .encrypt = riscv64_cbc_encrypt,
+ .decrypt = riscv64_cbc_decrypt,
+ .min_keysize = AES_MIN_KEY_SIZE,
+ .max_keysize = AES_MAX_KEY_SIZE,
+ .ivsize = AES_BLOCK_SIZE,
+ .walksize = AES_BLOCK_SIZE * 8,
+ .base = {
+ .cra_blocksize = AES_BLOCK_SIZE,
+ .cra_ctxsize = sizeof(struct crypto_aes_ctx),
+ .cra_priority = 300,
+ .cra_name = "cbc(aes)",
+ .cra_driver_name = "cbc-aes-riscv64-zvkned",
+ .cra_module = THIS_MODULE,
+ },
+ }
+};
+
+static struct skcipher_alg riscv64_aes_alg_zvkned_zvkb = {
+ .setkey = riscv64_aes_setkey,
+ .encrypt = riscv64_ctr_encrypt,
+ .decrypt = riscv64_ctr_encrypt,
+ .min_keysize = AES_MIN_KEY_SIZE,
+ .max_keysize = AES_MAX_KEY_SIZE,
+ .ivsize = AES_BLOCK_SIZE,
+ .chunksize = AES_BLOCK_SIZE,
+ .walksize = AES_BLOCK_SIZE * 8,
+ .base = {
+ .cra_blocksize = 1,
+ .cra_ctxsize = sizeof(struct crypto_aes_ctx),
+ .cra_priority = 300,
+ .cra_name = "ctr(aes)",
+ .cra_driver_name = "ctr-aes-riscv64-zvkned-zvkb",
+ .cra_module = THIS_MODULE,
+ },
+};
+
+static struct skcipher_alg riscv64_aes_alg_zvkned_zvbb_zvkg = {
+ .setkey = riscv64_xts_setkey,
+ .encrypt = riscv64_xts_encrypt,
+ .decrypt = riscv64_xts_decrypt,
+ .min_keysize = AES_MIN_KEY_SIZE * 2,
+ .max_keysize = AES_MAX_KEY_SIZE * 2,
+ .ivsize = AES_BLOCK_SIZE,
+ .chunksize = AES_BLOCK_SIZE,
+ .walksize = AES_BLOCK_SIZE * 8,
+ .base = {
+ .cra_blocksize = AES_BLOCK_SIZE,
+ .cra_ctxsize = sizeof(struct riscv64_aes_xts_ctx),
+ .cra_priority = 300,
+ .cra_name = "xts(aes)",
+ .cra_driver_name = "xts-aes-riscv64-zvkned-zvbb-zvkg",
+ .cra_module = THIS_MODULE,
+ },
+};
+
+static int __init riscv64_aes_block_mod_init(void)
+{
+ int ret = -ENODEV;
+
+ if (riscv_isa_extension_available(NULL, ZVKNED) &&
+ riscv_vector_vlen() >= 128 && riscv_vector_vlen() <= 2048) {
+ ret = crypto_register_skciphers(
+ riscv64_aes_algs_zvkned,
+ ARRAY_SIZE(riscv64_aes_algs_zvkned));
+ if (ret)
+ return ret;
+
+ if (riscv_isa_extension_available(NULL, ZVKB)) {
+ ret = crypto_register_skcipher(&riscv64_aes_alg_zvkned_zvkb);
+ if (ret)
+ goto unregister_zvkned;
+ }
+
+ if (riscv_isa_extension_available(NULL, ZVBB) &&
+ riscv_isa_extension_available(NULL, ZVKG)) {
+ ret = crypto_register_skcipher(&riscv64_aes_alg_zvkned_zvbb_zvkg);
+ if (ret)
+ goto unregister_zvkned_zvkb;
+ }
+ }
+
+ return ret;
+
+unregister_zvkned_zvkb:
+ crypto_unregister_skcipher(&riscv64_aes_alg_zvkned_zvkb);
+unregister_zvkned:
+ crypto_unregister_skciphers(riscv64_aes_algs_zvkned,
+ ARRAY_SIZE(riscv64_aes_algs_zvkned));
+
+ return ret;
+}
+
+static void __exit riscv64_aes_block_mod_fini(void)
+{
+ crypto_unregister_skcipher(&riscv64_aes_alg_zvkned_zvbb_zvkg);
+ crypto_unregister_skcipher(&riscv64_aes_alg_zvkned_zvkb);
+ crypto_unregister_skciphers(riscv64_aes_algs_zvkned,
+ ARRAY_SIZE(riscv64_aes_algs_zvkned));
+}
+
+module_init(riscv64_aes_block_mod_init);
+module_exit(riscv64_aes_block_mod_fini);
+
+MODULE_DESCRIPTION("AES-ECB/CBC/CTR/XTS (RISC-V accelerated)");
+MODULE_AUTHOR("Jerry Shih <[email protected]>");
+MODULE_LICENSE("GPL");
+MODULE_ALIAS_CRYPTO("cbc(aes)");
+MODULE_ALIAS_CRYPTO("ctr(aes)");
+MODULE_ALIAS_CRYPTO("ecb(aes)");
+MODULE_ALIAS_CRYPTO("xts(aes)");
diff --git a/arch/riscv/crypto/aes-riscv64-zvkned-zvbb-zvkg.pl b/arch/riscv/crypto/aes-riscv64-zvkned-zvbb-zvkg.pl
new file mode 100644
index 000000000000..bc7772a5944a
--- /dev/null
+++ b/arch/riscv/crypto/aes-riscv64-zvkned-zvbb-zvkg.pl
@@ -0,0 +1,949 @@
+#! /usr/bin/env perl
+# SPDX-License-Identifier: Apache-2.0 OR BSD-2-Clause
+#
+# This file is dual-licensed, meaning that you can use it under your
+# choice of either of the following two licenses:
+#
+# Copyright 2023 The OpenSSL Project Authors. All Rights Reserved.
+#
+# Licensed under the Apache License 2.0 (the "License"). You can obtain
+# a copy in the file LICENSE in the source distribution or at
+# https://www.openssl.org/source/license.html
+#
+# or
+#
+# Copyright (c) 2023, Jerry Shih <[email protected]>
+# All rights reserved.
+#
+# Redistribution and use in source and binary forms, with or without
+# modification, are permitted provided that the following conditions
+# are met:
+# 1. Redistributions of source code must retain the above copyright
+# notice, this list of conditions and the following disclaimer.
+# 2. Redistributions in binary form must reproduce the above copyright
+# notice, this list of conditions and the following disclaimer in the
+# documentation and/or other materials provided with the distribution.
+#
+# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+# - RV64I
+# - RISC-V Vector ('V') with VLEN >= 128 && VLEN <= 2048
+# - RISC-V Vector AES block cipher extension ('Zvkned')
+# - RISC-V Vector Bit-manipulation extension ('Zvbb')
+# - RISC-V Vector GCM/GMAC extension ('Zvkg')
+
+use strict;
+use warnings;
+
+use FindBin qw($Bin);
+use lib "$Bin";
+use lib "$Bin/../../perlasm";
+
+# $output is the last argument if it looks like a file (it has an extension)
+# $flavour is the first argument if it doesn't look like a file
+my $output = $#ARGV >= 0 && $ARGV[$#ARGV] =~ m|\.\w+$| ? pop : undef;
+my $flavour = $#ARGV >= 0 && $ARGV[0] !~ m|\.| ? shift : undef;
+
+$output and open STDOUT,">$output";
+
+my $code=<<___;
+.text
+.option arch, +zvkned, +zvbb, +zvkg
+___
+
+{
+################################################################################
+# void rv64i_zvbb_zvkg_zvkned_aes_xts_encrypt(const unsigned char *in,
+# unsigned char *out, size_t length,
+# const AES_KEY *key,
+# unsigned char iv[16],
+# int update_iv)
+my ($INPUT, $OUTPUT, $LENGTH, $KEY, $IV, $UPDATE_IV) = ("a0", "a1", "a2", "a3", "a4", "a5");
+my ($TAIL_LENGTH) = ("a6");
+my ($VL) = ("a7");
+my ($T0, $T1, $T2, $T3) = ("t0", "t1", "t2", "t3");
+my ($STORE_LEN32) = ("t4");
+my ($LEN32) = ("t5");
+my ($V0, $V1, $V2, $V3, $V4, $V5, $V6, $V7,
+ $V8, $V9, $V10, $V11, $V12, $V13, $V14, $V15,
+ $V16, $V17, $V18, $V19, $V20, $V21, $V22, $V23,
+ $V24, $V25, $V26, $V27, $V28, $V29, $V30, $V31,
+) = map("v$_",(0..31));
+
+# load iv to v28
+sub load_xts_iv0 {
+ my $code=<<___;
+ vsetivli zero, 4, e32, m1, ta, ma
+ vle32.v $V28, ($IV)
+___
+
+ return $code;
+}
+
+# prepare input data(v24), iv(v28), bit-reversed-iv(v16), bit-reversed-iv-multiplier(v20)
+sub init_first_round {
+ my $code=<<___;
+ # load input
+ vsetvli $VL, $LEN32, e32, m4, ta, ma
+ vle32.v $V24, ($INPUT)
+
+ li $T0, 5
+ # We could simplify the initialization steps if we have `block<=1`.
+ blt $LEN32, $T0, 1f
+
+ # Note: We use `vgmul` for GF(2^128) multiplication. The `vgmul` uses a
+ # different order of coefficients, so we should use `vbrev8` to reverse
+ # the data when we use `vgmul`.
+ vsetivli zero, 4, e32, m1, ta, ma
+ vbrev8.v $V0, $V28
+ vsetvli zero, $LEN32, e32, m4, ta, ma
+ vmv.v.i $V16, 0
+ # v16: [r-IV0, r-IV0, ...]
+ vaesz.vs $V16, $V0
+
+ # Prepare GF(2^128) multiplier [1, x, x^2, x^3, ...] in v8.
+ # We use `vwsll` to get power of 2 multipliers. Current rvv spec only
+ # supports `SEW<=64`. So, the maximum `VLEN` for this approach is `2048`.
+ # SEW64_BITS * AES_BLOCK_SIZE / LMUL
+ # = 64 * 128 / 4 = 2048
+ #
+ # TODO: truncate the vl to `2048` for `vlen>2048` case.
+ slli $T0, $LEN32, 2
+ vsetvli zero, $T0, e32, m1, ta, ma
+ # v2: [`1`, `1`, `1`, `1`, ...]
+ vmv.v.i $V2, 1
+ # v3: [`0`, `1`, `2`, `3`, ...]
+ vid.v $V3
+ vsetvli zero, $T0, e64, m2, ta, ma
+ # v4: [`1`, 0, `1`, 0, `1`, 0, `1`, 0, ...]
+ vzext.vf2 $V4, $V2
+ # v6: [`0`, 0, `1`, 0, `2`, 0, `3`, 0, ...]
+ vzext.vf2 $V6, $V3
+ slli $T0, $LEN32, 1
+ vsetvli zero, $T0, e32, m2, ta, ma
+ # v8: [1<<0=1, 0, 0, 0, 1<<1=x, 0, 0, 0, 1<<2=x^2, 0, 0, 0, ...]
+ vwsll.vv $V8, $V4, $V6
+
+ # Compute [r-IV0*1, r-IV0*x, r-IV0*x^2, r-IV0*x^3, ...] in v16
+ vsetvli zero, $LEN32, e32, m4, ta, ma
+ vbrev8.v $V8, $V8
+ vgmul.vv $V16, $V8
+
+ # Compute [IV0*1, IV0*x, IV0*x^2, IV0*x^3, ...] in v28.
+ # Reverse the bits order back.
+ vbrev8.v $V28, $V16
+
+ # Prepare the x^n multiplier in v20. The `n` is the aes-xts block number
+ # in a LMUL=4 register group.
+ # n = ((VLEN*LMUL)/(32*4)) = ((VLEN*4)/(32*4))
+ # = (VLEN/32)
+ # We could use vsetvli with `e32, m1` to compute the `n` number.
+ vsetvli $T0, zero, e32, m1, ta, ma
+ li $T1, 1
+ sll $T0, $T1, $T0
+ vsetivli zero, 2, e64, m1, ta, ma
+ vmv.v.i $V0, 0
+ vsetivli zero, 1, e64, m1, tu, ma
+ vmv.v.x $V0, $T0
+ vsetivli zero, 2, e64, m1, ta, ma
+ vbrev8.v $V0, $V0
+ vsetvli zero, $LEN32, e32, m4, ta, ma
+ vmv.v.i $V20, 0
+ vaesz.vs $V20, $V0
+
+ j 2f
+1:
+ vsetivli zero, 4, e32, m1, ta, ma
+ vbrev8.v $V16, $V28
+2:
+___
+
+ return $code;
+}
+
+# prepare xts enc last block's input(v24) and iv(v28)
+sub handle_xts_enc_last_block {
+ my $code=<<___;
+ bnez $TAIL_LENGTH, 2f
+
+ beqz $UPDATE_IV, 1f
+ ## Store next IV
+ addi $VL, $VL, -4
+ vsetivli zero, 4, e32, m4, ta, ma
+ # multiplier
+ vslidedown.vx $V16, $V16, $VL
+
+ # setup `x` multiplier with byte-reversed order
+ # 0b00000010 => 0b01000000 (0x40)
+ li $T0, 0x40
+ vsetivli zero, 4, e32, m1, ta, ma
+ vmv.v.i $V28, 0
+ vsetivli zero, 1, e8, m1, tu, ma
+ vmv.v.x $V28, $T0
+
+ # IV * `x`
+ vsetivli zero, 4, e32, m1, ta, ma
+ vgmul.vv $V16, $V28
+ # Reverse the IV's bits order back to big-endian
+ vbrev8.v $V28, $V16
+
+ vse32.v $V28, ($IV)
+1:
+
+ ret
+2:
+ # slidedown second to last block
+ addi $VL, $VL, -4
+ vsetivli zero, 4, e32, m4, ta, ma
+ # ciphertext
+ vslidedown.vx $V24, $V24, $VL
+ # multiplier
+ vslidedown.vx $V16, $V16, $VL
+
+ vsetivli zero, 4, e32, m1, ta, ma
+ vmv.v.v $V25, $V24
+
+ # load last block into v24
+ # note: We should load the last block before storing the second to last block
+ # for in-place operation.
+ vsetvli zero, $TAIL_LENGTH, e8, m1, tu, ma
+ vle8.v $V24, ($INPUT)
+
+ # setup `x` multiplier with byte-reversed order
+ # 0b00000010 => 0b01000000 (0x40)
+ li $T0, 0x40
+ vsetivli zero, 4, e32, m1, ta, ma
+ vmv.v.i $V28, 0
+ vsetivli zero, 1, e8, m1, tu, ma
+ vmv.v.x $V28, $T0
+
+ # compute IV for last block
+ vsetivli zero, 4, e32, m1, ta, ma
+ vgmul.vv $V16, $V28
+ vbrev8.v $V28, $V16
+
+ # store second to last block
+ vsetvli zero, $TAIL_LENGTH, e8, m1, ta, ma
+ vse8.v $V25, ($OUTPUT)
+___
+
+ return $code;
+}
+
+# prepare xts dec second to last block's input (v24) and iv (v29), and the
+# last block's iv (v28)
+sub handle_xts_dec_last_block {
+ my $code=<<___;
+ bnez $TAIL_LENGTH, 2f
+
+ beqz $UPDATE_IV, 1f
+ ## Store next IV
+ # setup `x` multiplier with byte-reversed order
+ # 0b00000010 => 0b01000000 (0x40)
+ li $T0, 0x40
+ vsetivli zero, 4, e32, m1, ta, ma
+ vmv.v.i $V28, 0
+ vsetivli zero, 1, e8, m1, tu, ma
+ vmv.v.x $V28, $T0
+
+ beqz $LENGTH, 3f
+ addi $VL, $VL, -4
+ vsetivli zero, 4, e32, m4, ta, ma
+ # multiplier
+ vslidedown.vx $V16, $V16, $VL
+
+3:
+ # IV * `x`
+ vsetivli zero, 4, e32, m1, ta, ma
+ vgmul.vv $V16, $V28
+ # Reverse the IV's bits order back to big-endian
+ vbrev8.v $V28, $V16
+
+ vse32.v $V28, ($IV)
+1:
+
+ ret
+2:
+ # load second to last block's ciphertext
+ vsetivli zero, 4, e32, m1, ta, ma
+ vle32.v $V24, ($INPUT)
+ addi $INPUT, $INPUT, 16
+
+ # setup `x` multiplier with byte-reversed order
+ # 0b00000010 => 0b01000000 (0x40)
+ li $T0, 0x40
+ vsetivli zero, 4, e32, m1, ta, ma
+ vmv.v.i $V20, 0
+ vsetivli zero, 1, e8, m1, tu, ma
+ vmv.v.x $V20, $T0
+
+ beqz $LENGTH, 1f
+ # slidedown third to last block
+ addi $VL, $VL, -4
+ vsetivli zero, 4, e32, m4, ta, ma
+ # multiplier
+ vslidedown.vx $V16, $V16, $VL
+
+ # compute IV for last block
+ vsetivli zero, 4, e32, m1, ta, ma
+ vgmul.vv $V16, $V20
+ vbrev8.v $V28, $V16
+
+ # compute IV for second to last block
+ vgmul.vv $V16, $V20
+ vbrev8.v $V29, $V16
+ j 2f
+1:
+ # compute IV for second to last block
+ vsetivli zero, 4, e32, m1, ta, ma
+ vgmul.vv $V16, $V20
+ vbrev8.v $V29, $V16
+2:
+___
+
+ return $code;
+}
+
+# Load all 11 round keys to v1-v11 registers.
+sub aes_128_load_key {
+ my $code=<<___;
+ vsetivli zero, 4, e32, m1, ta, ma
+ vle32.v $V1, ($KEY)
+ addi $KEY, $KEY, 16
+ vle32.v $V2, ($KEY)
+ addi $KEY, $KEY, 16
+ vle32.v $V3, ($KEY)
+ addi $KEY, $KEY, 16
+ vle32.v $V4, ($KEY)
+ addi $KEY, $KEY, 16
+ vle32.v $V5, ($KEY)
+ addi $KEY, $KEY, 16
+ vle32.v $V6, ($KEY)
+ addi $KEY, $KEY, 16
+ vle32.v $V7, ($KEY)
+ addi $KEY, $KEY, 16
+ vle32.v $V8, ($KEY)
+ addi $KEY, $KEY, 16
+ vle32.v $V9, ($KEY)
+ addi $KEY, $KEY, 16
+ vle32.v $V10, ($KEY)
+ addi $KEY, $KEY, 16
+ vle32.v $V11, ($KEY)
+___
+
+ return $code;
+}
+
+# Load all 13 round keys to v1-v13 registers.
+sub aes_192_load_key {
+ my $code=<<___;
+ vsetivli zero, 4, e32, m1, ta, ma
+ vle32.v $V1, ($KEY)
+ addi $KEY, $KEY, 16
+ vle32.v $V2, ($KEY)
+ addi $KEY, $KEY, 16
+ vle32.v $V3, ($KEY)
+ addi $KEY, $KEY, 16
+ vle32.v $V4, ($KEY)
+ addi $KEY, $KEY, 16
+ vle32.v $V5, ($KEY)
+ addi $KEY, $KEY, 16
+ vle32.v $V6, ($KEY)
+ addi $KEY, $KEY, 16
+ vle32.v $V7, ($KEY)
+ addi $KEY, $KEY, 16
+ vle32.v $V8, ($KEY)
+ addi $KEY, $KEY, 16
+ vle32.v $V9, ($KEY)
+ addi $KEY, $KEY, 16
+ vle32.v $V10, ($KEY)
+ addi $KEY, $KEY, 16
+ vle32.v $V11, ($KEY)
+ addi $KEY, $KEY, 16
+ vle32.v $V12, ($KEY)
+ addi $KEY, $KEY, 16
+ vle32.v $V13, ($KEY)
+___
+
+ return $code;
+}
+
+# Load all 15 round keys to v1-v15 registers.
+sub aes_256_load_key {
+ my $code=<<___;
+ vsetivli zero, 4, e32, m1, ta, ma
+ vle32.v $V1, ($KEY)
+ addi $KEY, $KEY, 16
+ vle32.v $V2, ($KEY)
+ addi $KEY, $KEY, 16
+ vle32.v $V3, ($KEY)
+ addi $KEY, $KEY, 16
+ vle32.v $V4, ($KEY)
+ addi $KEY, $KEY, 16
+ vle32.v $V5, ($KEY)
+ addi $KEY, $KEY, 16
+ vle32.v $V6, ($KEY)
+ addi $KEY, $KEY, 16
+ vle32.v $V7, ($KEY)
+ addi $KEY, $KEY, 16
+ vle32.v $V8, ($KEY)
+ addi $KEY, $KEY, 16
+ vle32.v $V9, ($KEY)
+ addi $KEY, $KEY, 16
+ vle32.v $V10, ($KEY)
+ addi $KEY, $KEY, 16
+ vle32.v $V11, ($KEY)
+ addi $KEY, $KEY, 16
+ vle32.v $V12, ($KEY)
+ addi $KEY, $KEY, 16
+ vle32.v $V13, ($KEY)
+ addi $KEY, $KEY, 16
+ vle32.v $V14, ($KEY)
+ addi $KEY, $KEY, 16
+ vle32.v $V15, ($KEY)
+___
+
+ return $code;
+}
+
+# aes-128 enc with round keys v1-v11
+sub aes_128_enc {
+ my $code=<<___;
+ vaesz.vs $V24, $V1
+ vaesem.vs $V24, $V2
+ vaesem.vs $V24, $V3
+ vaesem.vs $V24, $V4
+ vaesem.vs $V24, $V5
+ vaesem.vs $V24, $V6
+ vaesem.vs $V24, $V7
+ vaesem.vs $V24, $V8
+ vaesem.vs $V24, $V9
+ vaesem.vs $V24, $V10
+ vaesef.vs $V24, $V11
+___
+
+ return $code;
+}
+
+# aes-128 dec with round keys v1-v11
+sub aes_128_dec {
+ my $code=<<___;
+ vaesz.vs $V24, $V11
+ vaesdm.vs $V24, $V10
+ vaesdm.vs $V24, $V9
+ vaesdm.vs $V24, $V8
+ vaesdm.vs $V24, $V7
+ vaesdm.vs $V24, $V6
+ vaesdm.vs $V24, $V5
+ vaesdm.vs $V24, $V4
+ vaesdm.vs $V24, $V3
+ vaesdm.vs $V24, $V2
+ vaesdf.vs $V24, $V1
+___
+
+ return $code;
+}
+
+# aes-192 enc with round keys v1-v13
+sub aes_192_enc {
+ my $code=<<___;
+ vaesz.vs $V24, $V1
+ vaesem.vs $V24, $V2
+ vaesem.vs $V24, $V3
+ vaesem.vs $V24, $V4
+ vaesem.vs $V24, $V5
+ vaesem.vs $V24, $V6
+ vaesem.vs $V24, $V7
+ vaesem.vs $V24, $V8
+ vaesem.vs $V24, $V9
+ vaesem.vs $V24, $V10
+ vaesem.vs $V24, $V11
+ vaesem.vs $V24, $V12
+ vaesef.vs $V24, $V13
+___
+
+ return $code;
+}
+
+# aes-192 dec with round keys v1-v13
+sub aes_192_dec {
+ my $code=<<___;
+ vaesz.vs $V24, $V13
+ vaesdm.vs $V24, $V12
+ vaesdm.vs $V24, $V11
+ vaesdm.vs $V24, $V10
+ vaesdm.vs $V24, $V9
+ vaesdm.vs $V24, $V8
+ vaesdm.vs $V24, $V7
+ vaesdm.vs $V24, $V6
+ vaesdm.vs $V24, $V5
+ vaesdm.vs $V24, $V4
+ vaesdm.vs $V24, $V3
+ vaesdm.vs $V24, $V2
+ vaesdf.vs $V24, $V1
+___
+
+ return $code;
+}
+
+# aes-256 enc with round keys v1-v15
+sub aes_256_enc {
+ my $code=<<___;
+ vaesz.vs $V24, $V1
+ vaesem.vs $V24, $V2
+ vaesem.vs $V24, $V3
+ vaesem.vs $V24, $V4
+ vaesem.vs $V24, $V5
+ vaesem.vs $V24, $V6
+ vaesem.vs $V24, $V7
+ vaesem.vs $V24, $V8
+ vaesem.vs $V24, $V9
+ vaesem.vs $V24, $V10
+ vaesem.vs $V24, $V11
+ vaesem.vs $V24, $V12
+ vaesem.vs $V24, $V13
+ vaesem.vs $V24, $V14
+ vaesef.vs $V24, $V15
+___
+
+ return $code;
+}
+
+# aes-256 dec with round keys v1-v15
+sub aes_256_dec {
+ my $code=<<___;
+ vaesz.vs $V24, $V15
+ vaesdm.vs $V24, $V14
+ vaesdm.vs $V24, $V13
+ vaesdm.vs $V24, $V12
+ vaesdm.vs $V24, $V11
+ vaesdm.vs $V24, $V10
+ vaesdm.vs $V24, $V9
+ vaesdm.vs $V24, $V8
+ vaesdm.vs $V24, $V7
+ vaesdm.vs $V24, $V6
+ vaesdm.vs $V24, $V5
+ vaesdm.vs $V24, $V4
+ vaesdm.vs $V24, $V3
+ vaesdm.vs $V24, $V2
+ vaesdf.vs $V24, $V1
+___
+
+ return $code;
+}
+
+$code .= <<___;
+.p2align 3
+.globl rv64i_zvbb_zvkg_zvkned_aes_xts_encrypt
+.type rv64i_zvbb_zvkg_zvkned_aes_xts_encrypt,\@function
+rv64i_zvbb_zvkg_zvkned_aes_xts_encrypt:
+ @{[load_xts_iv0]}
+
+ # aes block size is 16
+ andi $TAIL_LENGTH, $LENGTH, 15
+ mv $STORE_LEN32, $LENGTH
+ beqz $TAIL_LENGTH, 1f
+ sub $LENGTH, $LENGTH, $TAIL_LENGTH
+ addi $STORE_LEN32, $LENGTH, -16
+1:
+    # Convert `LENGTH` into the number of 32-bit (e32) elements.
+ srli $LEN32, $LENGTH, 2
+ srli $STORE_LEN32, $STORE_LEN32, 2
+
+ # Load key length.
+ lwu $T0, 480($KEY)
+ li $T1, 32
+ li $T2, 24
+ li $T3, 16
+ beq $T0, $T1, aes_xts_enc_256
+ beq $T0, $T2, aes_xts_enc_192
+ beq $T0, $T3, aes_xts_enc_128
+.size rv64i_zvbb_zvkg_zvkned_aes_xts_encrypt,.-rv64i_zvbb_zvkg_zvkned_aes_xts_encrypt
+___
+
+$code .= <<___;
+.p2align 3
+aes_xts_enc_128:
+ @{[init_first_round]}
+ @{[aes_128_load_key]}
+
+ vsetvli $VL, $LEN32, e32, m4, ta, ma
+ j 1f
+
+.Lenc_blocks_128:
+ vsetvli $VL, $LEN32, e32, m4, ta, ma
+ # load plaintext into v24
+ vle32.v $V24, ($INPUT)
+ # update iv
+ vgmul.vv $V16, $V20
+    # reverse the IV's bit order back
+ vbrev8.v $V28, $V16
+1:
+ vxor.vv $V24, $V24, $V28
+ slli $T0, $VL, 2
+ sub $LEN32, $LEN32, $VL
+ add $INPUT, $INPUT, $T0
+ @{[aes_128_enc]}
+ vxor.vv $V24, $V24, $V28
+
+ # store ciphertext
+ vsetvli zero, $STORE_LEN32, e32, m4, ta, ma
+ vse32.v $V24, ($OUTPUT)
+ add $OUTPUT, $OUTPUT, $T0
+ sub $STORE_LEN32, $STORE_LEN32, $VL
+
+ bnez $LEN32, .Lenc_blocks_128
+
+ @{[handle_xts_enc_last_block]}
+
+ # xts last block
+ vsetivli zero, 4, e32, m1, ta, ma
+ vxor.vv $V24, $V24, $V28
+ @{[aes_128_enc]}
+ vxor.vv $V24, $V24, $V28
+
+ # store last block ciphertext
+ addi $OUTPUT, $OUTPUT, -16
+ vse32.v $V24, ($OUTPUT)
+
+ ret
+.size aes_xts_enc_128,.-aes_xts_enc_128
+___
+
+$code .= <<___;
+.p2align 3
+aes_xts_enc_192:
+ @{[init_first_round]}
+ @{[aes_192_load_key]}
+
+ vsetvli $VL, $LEN32, e32, m4, ta, ma
+ j 1f
+
+.Lenc_blocks_192:
+ vsetvli $VL, $LEN32, e32, m4, ta, ma
+ # load plaintext into v24
+ vle32.v $V24, ($INPUT)
+ # update iv
+ vgmul.vv $V16, $V20
+    # reverse the IV's bit order back
+ vbrev8.v $V28, $V16
+1:
+ vxor.vv $V24, $V24, $V28
+ slli $T0, $VL, 2
+ sub $LEN32, $LEN32, $VL
+ add $INPUT, $INPUT, $T0
+ @{[aes_192_enc]}
+ vxor.vv $V24, $V24, $V28
+
+ # store ciphertext
+ vsetvli zero, $STORE_LEN32, e32, m4, ta, ma
+ vse32.v $V24, ($OUTPUT)
+ add $OUTPUT, $OUTPUT, $T0
+ sub $STORE_LEN32, $STORE_LEN32, $VL
+
+ bnez $LEN32, .Lenc_blocks_192
+
+ @{[handle_xts_enc_last_block]}
+
+ # xts last block
+ vsetivli zero, 4, e32, m1, ta, ma
+ vxor.vv $V24, $V24, $V28
+ @{[aes_192_enc]}
+ vxor.vv $V24, $V24, $V28
+
+ # store last block ciphertext
+ addi $OUTPUT, $OUTPUT, -16
+ vse32.v $V24, ($OUTPUT)
+
+ ret
+.size aes_xts_enc_192,.-aes_xts_enc_192
+___
+
+$code .= <<___;
+.p2align 3
+aes_xts_enc_256:
+ @{[init_first_round]}
+ @{[aes_256_load_key]}
+
+ vsetvli $VL, $LEN32, e32, m4, ta, ma
+ j 1f
+
+.Lenc_blocks_256:
+ vsetvli $VL, $LEN32, e32, m4, ta, ma
+ # load plaintext into v24
+ vle32.v $V24, ($INPUT)
+ # update iv
+ vgmul.vv $V16, $V20
+    # reverse the IV's bit order back
+ vbrev8.v $V28, $V16
+1:
+ vxor.vv $V24, $V24, $V28
+ slli $T0, $VL, 2
+ sub $LEN32, $LEN32, $VL
+ add $INPUT, $INPUT, $T0
+ @{[aes_256_enc]}
+ vxor.vv $V24, $V24, $V28
+
+ # store ciphertext
+ vsetvli zero, $STORE_LEN32, e32, m4, ta, ma
+ vse32.v $V24, ($OUTPUT)
+ add $OUTPUT, $OUTPUT, $T0
+ sub $STORE_LEN32, $STORE_LEN32, $VL
+
+ bnez $LEN32, .Lenc_blocks_256
+
+ @{[handle_xts_enc_last_block]}
+
+ # xts last block
+ vsetivli zero, 4, e32, m1, ta, ma
+ vxor.vv $V24, $V24, $V28
+ @{[aes_256_enc]}
+ vxor.vv $V24, $V24, $V28
+
+ # store last block ciphertext
+ addi $OUTPUT, $OUTPUT, -16
+ vse32.v $V24, ($OUTPUT)
+
+ ret
+.size aes_xts_enc_256,.-aes_xts_enc_256
+___
+
+################################################################################
+# void rv64i_zvbb_zvkg_zvkned_aes_xts_decrypt(const unsigned char *in,
+# unsigned char *out, size_t length,
+# const AES_KEY *key,
+# unsigned char iv[16],
+# int update_iv)
+$code .= <<___;
+.p2align 3
+.globl rv64i_zvbb_zvkg_zvkned_aes_xts_decrypt
+.type rv64i_zvbb_zvkg_zvkned_aes_xts_decrypt,\@function
+rv64i_zvbb_zvkg_zvkned_aes_xts_decrypt:
+ @{[load_xts_iv0]}
+
+ # aes block size is 16
+ andi $TAIL_LENGTH, $LENGTH, 15
+ beqz $TAIL_LENGTH, 1f
+ sub $LENGTH, $LENGTH, $TAIL_LENGTH
+ addi $LENGTH, $LENGTH, -16
+1:
+    # Convert `LENGTH` into the number of 32-bit (e32) elements.
+ srli $LEN32, $LENGTH, 2
+
+ # Load key length.
+ lwu $T0, 480($KEY)
+ li $T1, 32
+ li $T2, 24
+ li $T3, 16
+ beq $T0, $T1, aes_xts_dec_256
+ beq $T0, $T2, aes_xts_dec_192
+ beq $T0, $T3, aes_xts_dec_128
+.size rv64i_zvbb_zvkg_zvkned_aes_xts_decrypt,.-rv64i_zvbb_zvkg_zvkned_aes_xts_decrypt
+___
+
+$code .= <<___;
+.p2align 3
+aes_xts_dec_128:
+ @{[init_first_round]}
+ @{[aes_128_load_key]}
+
+ beqz $LEN32, 2f
+
+ vsetvli $VL, $LEN32, e32, m4, ta, ma
+ j 1f
+
+.Ldec_blocks_128:
+ vsetvli $VL, $LEN32, e32, m4, ta, ma
+ # load ciphertext into v24
+ vle32.v $V24, ($INPUT)
+ # update iv
+ vgmul.vv $V16, $V20
+    # reverse the IV's bit order back
+ vbrev8.v $V28, $V16
+1:
+ vxor.vv $V24, $V24, $V28
+ slli $T0, $VL, 2
+ sub $LEN32, $LEN32, $VL
+ add $INPUT, $INPUT, $T0
+ @{[aes_128_dec]}
+ vxor.vv $V24, $V24, $V28
+
+ # store plaintext
+ vse32.v $V24, ($OUTPUT)
+ add $OUTPUT, $OUTPUT, $T0
+
+ bnez $LEN32, .Ldec_blocks_128
+
+2:
+ @{[handle_xts_dec_last_block]}
+
+ ## xts second to last block
+ vsetivli zero, 4, e32, m1, ta, ma
+ vxor.vv $V24, $V24, $V29
+ @{[aes_128_dec]}
+ vxor.vv $V24, $V24, $V29
+ vmv.v.v $V25, $V24
+
+ # load last block ciphertext
+ vsetvli zero, $TAIL_LENGTH, e8, m1, tu, ma
+ vle8.v $V24, ($INPUT)
+
+    # store last (partial) block plaintext
+ addi $T0, $OUTPUT, 16
+ vse8.v $V25, ($T0)
+
+ ## xts last block
+ vsetivli zero, 4, e32, m1, ta, ma
+ vxor.vv $V24, $V24, $V28
+ @{[aes_128_dec]}
+ vxor.vv $V24, $V24, $V28
+
+ # store second to last block plaintext
+ vse32.v $V24, ($OUTPUT)
+
+ ret
+.size aes_xts_dec_128,.-aes_xts_dec_128
+___
+
+$code .= <<___;
+.p2align 3
+aes_xts_dec_192:
+ @{[init_first_round]}
+ @{[aes_192_load_key]}
+
+ beqz $LEN32, 2f
+
+ vsetvli $VL, $LEN32, e32, m4, ta, ma
+ j 1f
+
+.Ldec_blocks_192:
+ vsetvli $VL, $LEN32, e32, m4, ta, ma
+ # load ciphertext into v24
+ vle32.v $V24, ($INPUT)
+ # update iv
+ vgmul.vv $V16, $V20
+    # reverse the IV's bit order back
+ vbrev8.v $V28, $V16
+1:
+ vxor.vv $V24, $V24, $V28
+ slli $T0, $VL, 2
+ sub $LEN32, $LEN32, $VL
+ add $INPUT, $INPUT, $T0
+ @{[aes_192_dec]}
+ vxor.vv $V24, $V24, $V28
+
+ # store plaintext
+ vse32.v $V24, ($OUTPUT)
+ add $OUTPUT, $OUTPUT, $T0
+
+ bnez $LEN32, .Ldec_blocks_192
+
+2:
+ @{[handle_xts_dec_last_block]}
+
+ ## xts second to last block
+ vsetivli zero, 4, e32, m1, ta, ma
+ vxor.vv $V24, $V24, $V29
+ @{[aes_192_dec]}
+ vxor.vv $V24, $V24, $V29
+ vmv.v.v $V25, $V24
+
+ # load last block ciphertext
+ vsetvli zero, $TAIL_LENGTH, e8, m1, tu, ma
+ vle8.v $V24, ($INPUT)
+
+    # store last (partial) block plaintext
+ addi $T0, $OUTPUT, 16
+ vse8.v $V25, ($T0)
+
+ ## xts last block
+ vsetivli zero, 4, e32, m1, ta, ma
+ vxor.vv $V24, $V24, $V28
+ @{[aes_192_dec]}
+ vxor.vv $V24, $V24, $V28
+
+ # store second to last block plaintext
+ vse32.v $V24, ($OUTPUT)
+
+ ret
+.size aes_xts_dec_192,.-aes_xts_dec_192
+___
+
+$code .= <<___;
+.p2align 3
+aes_xts_dec_256:
+ @{[init_first_round]}
+ @{[aes_256_load_key]}
+
+ beqz $LEN32, 2f
+
+ vsetvli $VL, $LEN32, e32, m4, ta, ma
+ j 1f
+
+.Ldec_blocks_256:
+ vsetvli $VL, $LEN32, e32, m4, ta, ma
+ # load ciphertext into v24
+ vle32.v $V24, ($INPUT)
+ # update iv
+ vgmul.vv $V16, $V20
+    # reverse the IV's bit order back
+ vbrev8.v $V28, $V16
+1:
+ vxor.vv $V24, $V24, $V28
+ slli $T0, $VL, 2
+ sub $LEN32, $LEN32, $VL
+ add $INPUT, $INPUT, $T0
+ @{[aes_256_dec]}
+ vxor.vv $V24, $V24, $V28
+
+ # store plaintext
+ vse32.v $V24, ($OUTPUT)
+ add $OUTPUT, $OUTPUT, $T0
+
+ bnez $LEN32, .Ldec_blocks_256
+
+2:
+ @{[handle_xts_dec_last_block]}
+
+ ## xts second to last block
+ vsetivli zero, 4, e32, m1, ta, ma
+ vxor.vv $V24, $V24, $V29
+ @{[aes_256_dec]}
+ vxor.vv $V24, $V24, $V29
+ vmv.v.v $V25, $V24
+
+ # load last block ciphertext
+ vsetvli zero, $TAIL_LENGTH, e8, m1, tu, ma
+ vle8.v $V24, ($INPUT)
+
+    # store last (partial) block plaintext
+ addi $T0, $OUTPUT, 16
+ vse8.v $V25, ($T0)
+
+ ## xts last block
+ vsetivli zero, 4, e32, m1, ta, ma
+ vxor.vv $V24, $V24, $V28
+ @{[aes_256_dec]}
+ vxor.vv $V24, $V24, $V28
+
+ # store second to last block plaintext
+ vse32.v $V24, ($OUTPUT)
+
+ ret
+.size aes_xts_dec_256,.-aes_xts_dec_256
+___
+}
+
+print $code;
+
+close STDOUT or die "error closing STDOUT: $!";
diff --git a/arch/riscv/crypto/aes-riscv64-zvkned-zvkb.pl b/arch/riscv/crypto/aes-riscv64-zvkned-zvkb.pl
new file mode 100644
index 000000000000..39ce998039a2
--- /dev/null
+++ b/arch/riscv/crypto/aes-riscv64-zvkned-zvkb.pl
@@ -0,0 +1,415 @@
+#! /usr/bin/env perl
+# SPDX-License-Identifier: Apache-2.0 OR BSD-2-Clause
+#
+# This file is dual-licensed, meaning that you can use it under your
+# choice of either of the following two licenses:
+#
+# Copyright 2023 The OpenSSL Project Authors. All Rights Reserved.
+#
+# Licensed under the Apache License 2.0 (the "License"). You can obtain
+# a copy in the file LICENSE in the source distribution or at
+# https://www.openssl.org/source/license.html
+#
+# or
+#
+# Copyright (c) 2023, Jerry Shih <[email protected]>
+# All rights reserved.
+#
+# Redistribution and use in source and binary forms, with or without
+# modification, are permitted provided that the following conditions
+# are met:
+# 1. Redistributions of source code must retain the above copyright
+# notice, this list of conditions and the following disclaimer.
+# 2. Redistributions in binary form must reproduce the above copyright
+# notice, this list of conditions and the following disclaimer in the
+# documentation and/or other materials provided with the distribution.
+#
+# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+# - RV64I
+# - RISC-V Vector ('V') with VLEN >= 128
+# - RISC-V Vector AES block cipher extension ('Zvkned')
+# - RISC-V Vector Cryptography Bit-manipulation extension ('Zvkb')
+
+use strict;
+use warnings;
+
+use FindBin qw($Bin);
+use lib "$Bin";
+use lib "$Bin/../../perlasm";
+
+# $output is the last argument if it looks like a file (it has an extension)
+# $flavour is the first argument if it doesn't look like a file
+my $output = $#ARGV >= 0 && $ARGV[$#ARGV] =~ m|\.\w+$| ? pop : undef;
+my $flavour = $#ARGV >= 0 && $ARGV[0] !~ m|\.| ? shift : undef;
+
+$output and open STDOUT,">$output";
+
+my $code=<<___;
+.text
+.option arch, +zvkned, +zvkb
+___
+
+################################################################################
+# void rv64i_zvkb_zvkned_ctr32_encrypt_blocks(const unsigned char *in,
+# unsigned char *out, size_t length,
+# const void *key,
+# unsigned char ivec[16]);
+{
+my ($INP, $OUTP, $LEN, $KEYP, $IVP) = ("a0", "a1", "a2", "a3", "a4");
+my ($T0, $T1, $T2, $T3) = ("t0", "t1", "t2", "t3");
+my ($VL) = ("t4");
+my ($LEN32) = ("t5");
+my ($CTR) = ("t6");
+my ($MASK) = ("v0");
+my ($V0, $V1, $V2, $V3, $V4, $V5, $V6, $V7,
+ $V8, $V9, $V10, $V11, $V12, $V13, $V14, $V15,
+ $V16, $V17, $V18, $V19, $V20, $V21, $V22, $V23,
+ $V24, $V25, $V26, $V27, $V28, $V29, $V30, $V31,
+) = map("v$_",(0..31));
+
+# Prepare the AES ctr input data into v16.
+sub init_aes_ctr_input {
+ my $code=<<___;
+ # Setup mask into v0
+ # The mask pattern for 4*N-th elements
+ # mask v0: [000100010001....]
+ # Note:
+    # We could set up the mask just for the maximum element length instead of
+ # the VLMAX.
+ li $T0, 0b10001000
+ vsetvli $T2, zero, e8, m1, ta, ma
+ vmv.v.x $MASK, $T0
+ # Load IV.
+ # v31:[IV0, IV1, IV2, big-endian count]
+ vsetivli zero, 4, e32, m1, ta, ma
+ vle32.v $V31, ($IVP)
+ # Convert the big-endian counter into little-endian.
+ vsetivli zero, 4, e32, m1, ta, mu
+ vrev8.v $V31, $V31, $MASK.t
+ # Splat the IV to v16
+ vsetvli zero, $LEN32, e32, m4, ta, ma
+ vmv.v.i $V16, 0
+ vaesz.vs $V16, $V31
+ # Prepare the ctr pattern into v20
+ # v20: [x, x, x, 0, x, x, x, 1, x, x, x, 2, ...]
+ viota.m $V20, $MASK, $MASK.t
+ # v16:[IV0, IV1, IV2, count+0, IV0, IV1, IV2, count+1, ...]
+ vsetvli $VL, $LEN32, e32, m4, ta, mu
+ vadd.vv $V16, $V16, $V20, $MASK.t
+___
+
+ return $code;
+}
+
+$code .= <<___;
+.p2align 3
+.globl rv64i_zvkb_zvkned_ctr32_encrypt_blocks
+.type rv64i_zvkb_zvkned_ctr32_encrypt_blocks,\@function
+rv64i_zvkb_zvkned_ctr32_encrypt_blocks:
+ # The aes block size is 16 bytes.
+    # Compute the number of AES blocks, rounded up to cover the tail data.
+ addi $T0, $LEN, 15
+    # the rounded-up block count
+ srli $T0, $T0, 4
+    # Convert the block count into the number of 32-bit (e32) elements.
+ slli $LEN32, $T0, 2
+
+ # Load key length.
+ lwu $T0, 480($KEYP)
+ li $T1, 32
+ li $T2, 24
+ li $T3, 16
+
+ beq $T0, $T1, ctr32_encrypt_blocks_256
+ beq $T0, $T2, ctr32_encrypt_blocks_192
+ beq $T0, $T3, ctr32_encrypt_blocks_128
+
+ ret
+.size rv64i_zvkb_zvkned_ctr32_encrypt_blocks,.-rv64i_zvkb_zvkned_ctr32_encrypt_blocks
+___
+
+$code .= <<___;
+.p2align 3
+ctr32_encrypt_blocks_128:
+ # Load all 11 round keys to v1-v11 registers.
+ vsetivli zero, 4, e32, m1, ta, ma
+ vle32.v $V1, ($KEYP)
+ addi $KEYP, $KEYP, 16
+ vle32.v $V2, ($KEYP)
+ addi $KEYP, $KEYP, 16
+ vle32.v $V3, ($KEYP)
+ addi $KEYP, $KEYP, 16
+ vle32.v $V4, ($KEYP)
+ addi $KEYP, $KEYP, 16
+ vle32.v $V5, ($KEYP)
+ addi $KEYP, $KEYP, 16
+ vle32.v $V6, ($KEYP)
+ addi $KEYP, $KEYP, 16
+ vle32.v $V7, ($KEYP)
+ addi $KEYP, $KEYP, 16
+ vle32.v $V8, ($KEYP)
+ addi $KEYP, $KEYP, 16
+ vle32.v $V9, ($KEYP)
+ addi $KEYP, $KEYP, 16
+ vle32.v $V10, ($KEYP)
+ addi $KEYP, $KEYP, 16
+ vle32.v $V11, ($KEYP)
+
+ @{[init_aes_ctr_input]}
+
+ ##### AES body
+ j 2f
+1:
+ vsetvli $VL, $LEN32, e32, m4, ta, mu
+ # Increase ctr in v16.
+ vadd.vx $V16, $V16, $CTR, $MASK.t
+2:
+ # Prepare the AES ctr input into v24.
+ # The ctr data uses big-endian form.
+ vmv.v.v $V24, $V16
+ vrev8.v $V24, $V24, $MASK.t
+ srli $CTR, $VL, 2
+ sub $LEN32, $LEN32, $VL
+
+ # Load plaintext in bytes into v20.
+ vsetvli $T0, $LEN, e8, m4, ta, ma
+ vle8.v $V20, ($INP)
+ sub $LEN, $LEN, $T0
+ add $INP, $INP, $T0
+
+ vsetvli zero, $VL, e32, m4, ta, ma
+ vaesz.vs $V24, $V1
+ vaesem.vs $V24, $V2
+ vaesem.vs $V24, $V3
+ vaesem.vs $V24, $V4
+ vaesem.vs $V24, $V5
+ vaesem.vs $V24, $V6
+ vaesem.vs $V24, $V7
+ vaesem.vs $V24, $V8
+ vaesem.vs $V24, $V9
+ vaesem.vs $V24, $V10
+ vaesef.vs $V24, $V11
+
+ # ciphertext
+ vsetvli zero, $T0, e8, m4, ta, ma
+ vxor.vv $V24, $V24, $V20
+
+ # Store the ciphertext.
+ vse8.v $V24, ($OUTP)
+ add $OUTP, $OUTP, $T0
+
+ bnez $LEN, 1b
+
+ ## store ctr iv
+ vsetivli zero, 4, e32, m1, ta, mu
+ # Increase ctr in v16.
+ vadd.vx $V16, $V16, $CTR, $MASK.t
+ # Convert ctr data back to big-endian.
+ vrev8.v $V16, $V16, $MASK.t
+ vse32.v $V16, ($IVP)
+
+ ret
+.size ctr32_encrypt_blocks_128,.-ctr32_encrypt_blocks_128
+___
+
+$code .= <<___;
+.p2align 3
+ctr32_encrypt_blocks_192:
+ # Load all 13 round keys to v1-v13 registers.
+ vsetivli zero, 4, e32, m1, ta, ma
+ vle32.v $V1, ($KEYP)
+ addi $KEYP, $KEYP, 16
+ vle32.v $V2, ($KEYP)
+ addi $KEYP, $KEYP, 16
+ vle32.v $V3, ($KEYP)
+ addi $KEYP, $KEYP, 16
+ vle32.v $V4, ($KEYP)
+ addi $KEYP, $KEYP, 16
+ vle32.v $V5, ($KEYP)
+ addi $KEYP, $KEYP, 16
+ vle32.v $V6, ($KEYP)
+ addi $KEYP, $KEYP, 16
+ vle32.v $V7, ($KEYP)
+ addi $KEYP, $KEYP, 16
+ vle32.v $V8, ($KEYP)
+ addi $KEYP, $KEYP, 16
+ vle32.v $V9, ($KEYP)
+ addi $KEYP, $KEYP, 16
+ vle32.v $V10, ($KEYP)
+ addi $KEYP, $KEYP, 16
+ vle32.v $V11, ($KEYP)
+ addi $KEYP, $KEYP, 16
+ vle32.v $V12, ($KEYP)
+ addi $KEYP, $KEYP, 16
+ vle32.v $V13, ($KEYP)
+
+ @{[init_aes_ctr_input]}
+
+ ##### AES body
+ j 2f
+1:
+ vsetvli $VL, $LEN32, e32, m4, ta, mu
+ # Increase ctr in v16.
+ vadd.vx $V16, $V16, $CTR, $MASK.t
+2:
+ # Prepare the AES ctr input into v24.
+ # The ctr data uses big-endian form.
+ vmv.v.v $V24, $V16
+ vrev8.v $V24, $V24, $MASK.t
+ srli $CTR, $VL, 2
+ sub $LEN32, $LEN32, $VL
+
+ # Load plaintext in bytes into v20.
+ vsetvli $T0, $LEN, e8, m4, ta, ma
+ vle8.v $V20, ($INP)
+ sub $LEN, $LEN, $T0
+ add $INP, $INP, $T0
+
+ vsetvli zero, $VL, e32, m4, ta, ma
+ vaesz.vs $V24, $V1
+ vaesem.vs $V24, $V2
+ vaesem.vs $V24, $V3
+ vaesem.vs $V24, $V4
+ vaesem.vs $V24, $V5
+ vaesem.vs $V24, $V6
+ vaesem.vs $V24, $V7
+ vaesem.vs $V24, $V8
+ vaesem.vs $V24, $V9
+ vaesem.vs $V24, $V10
+ vaesem.vs $V24, $V11
+ vaesem.vs $V24, $V12
+ vaesef.vs $V24, $V13
+
+ # ciphertext
+ vsetvli zero, $T0, e8, m4, ta, ma
+ vxor.vv $V24, $V24, $V20
+
+ # Store the ciphertext.
+ vse8.v $V24, ($OUTP)
+ add $OUTP, $OUTP, $T0
+
+ bnez $LEN, 1b
+
+ ## store ctr iv
+ vsetivli zero, 4, e32, m1, ta, mu
+ # Increase ctr in v16.
+ vadd.vx $V16, $V16, $CTR, $MASK.t
+ # Convert ctr data back to big-endian.
+ vrev8.v $V16, $V16, $MASK.t
+ vse32.v $V16, ($IVP)
+
+ ret
+.size ctr32_encrypt_blocks_192,.-ctr32_encrypt_blocks_192
+___
+
+$code .= <<___;
+.p2align 3
+ctr32_encrypt_blocks_256:
+ # Load all 15 round keys to v1-v15 registers.
+ vsetivli zero, 4, e32, m1, ta, ma
+ vle32.v $V1, ($KEYP)
+ addi $KEYP, $KEYP, 16
+ vle32.v $V2, ($KEYP)
+ addi $KEYP, $KEYP, 16
+ vle32.v $V3, ($KEYP)
+ addi $KEYP, $KEYP, 16
+ vle32.v $V4, ($KEYP)
+ addi $KEYP, $KEYP, 16
+ vle32.v $V5, ($KEYP)
+ addi $KEYP, $KEYP, 16
+ vle32.v $V6, ($KEYP)
+ addi $KEYP, $KEYP, 16
+ vle32.v $V7, ($KEYP)
+ addi $KEYP, $KEYP, 16
+ vle32.v $V8, ($KEYP)
+ addi $KEYP, $KEYP, 16
+ vle32.v $V9, ($KEYP)
+ addi $KEYP, $KEYP, 16
+ vle32.v $V10, ($KEYP)
+ addi $KEYP, $KEYP, 16
+ vle32.v $V11, ($KEYP)
+ addi $KEYP, $KEYP, 16
+ vle32.v $V12, ($KEYP)
+ addi $KEYP, $KEYP, 16
+ vle32.v $V13, ($KEYP)
+ addi $KEYP, $KEYP, 16
+ vle32.v $V14, ($KEYP)
+ addi $KEYP, $KEYP, 16
+ vle32.v $V15, ($KEYP)
+
+ @{[init_aes_ctr_input]}
+
+ ##### AES body
+ j 2f
+1:
+ vsetvli $VL, $LEN32, e32, m4, ta, mu
+ # Increase ctr in v16.
+ vadd.vx $V16, $V16, $CTR, $MASK.t
+2:
+ # Prepare the AES ctr input into v24.
+ # The ctr data uses big-endian form.
+ vmv.v.v $V24, $V16
+ vrev8.v $V24, $V24, $MASK.t
+ srli $CTR, $VL, 2
+ sub $LEN32, $LEN32, $VL
+
+ # Load plaintext in bytes into v20.
+ vsetvli $T0, $LEN, e8, m4, ta, ma
+ vle8.v $V20, ($INP)
+ sub $LEN, $LEN, $T0
+ add $INP, $INP, $T0
+
+ vsetvli zero, $VL, e32, m4, ta, ma
+ vaesz.vs $V24, $V1
+ vaesem.vs $V24, $V2
+ vaesem.vs $V24, $V3
+ vaesem.vs $V24, $V4
+ vaesem.vs $V24, $V5
+ vaesem.vs $V24, $V6
+ vaesem.vs $V24, $V7
+ vaesem.vs $V24, $V8
+ vaesem.vs $V24, $V9
+ vaesem.vs $V24, $V10
+ vaesem.vs $V24, $V11
+ vaesem.vs $V24, $V12
+ vaesem.vs $V24, $V13
+ vaesem.vs $V24, $V14
+ vaesef.vs $V24, $V15
+
+ # ciphertext
+ vsetvli zero, $T0, e8, m4, ta, ma
+ vxor.vv $V24, $V24, $V20
+
+ # Store the ciphertext.
+ vse8.v $V24, ($OUTP)
+ add $OUTP, $OUTP, $T0
+
+ bnez $LEN, 1b
+
+ ## store ctr iv
+ vsetivli zero, 4, e32, m1, ta, mu
+ # Increase ctr in v16.
+ vadd.vx $V16, $V16, $CTR, $MASK.t
+ # Convert ctr data back to big-endian.
+ vrev8.v $V16, $V16, $MASK.t
+ vse32.v $V16, ($IVP)
+
+ ret
+.size ctr32_encrypt_blocks_256,.-ctr32_encrypt_blocks_256
+___
+}
+
+print $code;
+
+close STDOUT or die "error closing STDOUT: $!";
diff --git a/arch/riscv/crypto/aes-riscv64-zvkned.pl b/arch/riscv/crypto/aes-riscv64-zvkned.pl
index 583e87912e5d..383d5fee4ff2 100644
--- a/arch/riscv/crypto/aes-riscv64-zvkned.pl
+++ b/arch/riscv/crypto/aes-riscv64-zvkned.pl
@@ -67,6 +67,752 @@ my ($V0, $V1, $V2, $V3, $V4, $V5, $V6, $V7,
$V24, $V25, $V26, $V27, $V28, $V29, $V30, $V31,
) = map("v$_",(0..31));
+# Load all 11 round keys to v1-v11 registers.
+sub aes_128_load_key {
+ my $KEYP = shift;
+
+ my $code=<<___;
+ vsetivli zero, 4, e32, m1, ta, ma
+ vle32.v $V1, ($KEYP)
+ addi $KEYP, $KEYP, 16
+ vle32.v $V2, ($KEYP)
+ addi $KEYP, $KEYP, 16
+ vle32.v $V3, ($KEYP)
+ addi $KEYP, $KEYP, 16
+ vle32.v $V4, ($KEYP)
+ addi $KEYP, $KEYP, 16
+ vle32.v $V5, ($KEYP)
+ addi $KEYP, $KEYP, 16
+ vle32.v $V6, ($KEYP)
+ addi $KEYP, $KEYP, 16
+ vle32.v $V7, ($KEYP)
+ addi $KEYP, $KEYP, 16
+ vle32.v $V8, ($KEYP)
+ addi $KEYP, $KEYP, 16
+ vle32.v $V9, ($KEYP)
+ addi $KEYP, $KEYP, 16
+ vle32.v $V10, ($KEYP)
+ addi $KEYP, $KEYP, 16
+ vle32.v $V11, ($KEYP)
+___
+
+ return $code;
+}
+
+# Load all 13 round keys to v1-v13 registers.
+sub aes_192_load_key {
+ my $KEYP = shift;
+
+ my $code=<<___;
+ vsetivli zero, 4, e32, m1, ta, ma
+ vle32.v $V1, ($KEYP)
+ addi $KEYP, $KEYP, 16
+ vle32.v $V2, ($KEYP)
+ addi $KEYP, $KEYP, 16
+ vle32.v $V3, ($KEYP)
+ addi $KEYP, $KEYP, 16
+ vle32.v $V4, ($KEYP)
+ addi $KEYP, $KEYP, 16
+ vle32.v $V5, ($KEYP)
+ addi $KEYP, $KEYP, 16
+ vle32.v $V6, ($KEYP)
+ addi $KEYP, $KEYP, 16
+ vle32.v $V7, ($KEYP)
+ addi $KEYP, $KEYP, 16
+ vle32.v $V8, ($KEYP)
+ addi $KEYP, $KEYP, 16
+ vle32.v $V9, ($KEYP)
+ addi $KEYP, $KEYP, 16
+ vle32.v $V10, ($KEYP)
+ addi $KEYP, $KEYP, 16
+ vle32.v $V11, ($KEYP)
+ addi $KEYP, $KEYP, 16
+ vle32.v $V12, ($KEYP)
+ addi $KEYP, $KEYP, 16
+ vle32.v $V13, ($KEYP)
+___
+
+ return $code;
+}
+
+# Load all 15 round keys to v1-v15 registers.
+sub aes_256_load_key {
+ my $KEYP = shift;
+
+ my $code=<<___;
+ vsetivli zero, 4, e32, m1, ta, ma
+ vle32.v $V1, ($KEYP)
+ addi $KEYP, $KEYP, 16
+ vle32.v $V2, ($KEYP)
+ addi $KEYP, $KEYP, 16
+ vle32.v $V3, ($KEYP)
+ addi $KEYP, $KEYP, 16
+ vle32.v $V4, ($KEYP)
+ addi $KEYP, $KEYP, 16
+ vle32.v $V5, ($KEYP)
+ addi $KEYP, $KEYP, 16
+ vle32.v $V6, ($KEYP)
+ addi $KEYP, $KEYP, 16
+ vle32.v $V7, ($KEYP)
+ addi $KEYP, $KEYP, 16
+ vle32.v $V8, ($KEYP)
+ addi $KEYP, $KEYP, 16
+ vle32.v $V9, ($KEYP)
+ addi $KEYP, $KEYP, 16
+ vle32.v $V10, ($KEYP)
+ addi $KEYP, $KEYP, 16
+ vle32.v $V11, ($KEYP)
+ addi $KEYP, $KEYP, 16
+ vle32.v $V12, ($KEYP)
+ addi $KEYP, $KEYP, 16
+ vle32.v $V13, ($KEYP)
+ addi $KEYP, $KEYP, 16
+ vle32.v $V14, ($KEYP)
+ addi $KEYP, $KEYP, 16
+ vle32.v $V15, ($KEYP)
+___
+
+ return $code;
+}
+
+# aes-128 encryption with round keys v1-v11
+sub aes_128_encrypt {
+ my $code=<<___;
+ vaesz.vs $V24, $V1 # with round key w[ 0, 3]
+ vaesem.vs $V24, $V2 # with round key w[ 4, 7]
+ vaesem.vs $V24, $V3 # with round key w[ 8,11]
+ vaesem.vs $V24, $V4 # with round key w[12,15]
+ vaesem.vs $V24, $V5 # with round key w[16,19]
+ vaesem.vs $V24, $V6 # with round key w[20,23]
+ vaesem.vs $V24, $V7 # with round key w[24,27]
+ vaesem.vs $V24, $V8 # with round key w[28,31]
+ vaesem.vs $V24, $V9 # with round key w[32,35]
+ vaesem.vs $V24, $V10 # with round key w[36,39]
+ vaesef.vs $V24, $V11 # with round key w[40,43]
+___
+
+ return $code;
+}
+
+# aes-128 decryption with round keys v1-v11
+sub aes_128_decrypt {
+ my $code=<<___;
+ vaesz.vs $V24, $V11 # with round key w[40,43]
+ vaesdm.vs $V24, $V10 # with round key w[36,39]
+ vaesdm.vs $V24, $V9 # with round key w[32,35]
+ vaesdm.vs $V24, $V8 # with round key w[28,31]
+ vaesdm.vs $V24, $V7 # with round key w[24,27]
+ vaesdm.vs $V24, $V6 # with round key w[20,23]
+ vaesdm.vs $V24, $V5 # with round key w[16,19]
+ vaesdm.vs $V24, $V4 # with round key w[12,15]
+ vaesdm.vs $V24, $V3 # with round key w[ 8,11]
+ vaesdm.vs $V24, $V2 # with round key w[ 4, 7]
+ vaesdf.vs $V24, $V1 # with round key w[ 0, 3]
+___
+
+ return $code;
+}
+
+# aes-192 encryption with round keys v1-v13
+sub aes_192_encrypt {
+ my $code=<<___;
+ vaesz.vs $V24, $V1 # with round key w[ 0, 3]
+ vaesem.vs $V24, $V2 # with round key w[ 4, 7]
+ vaesem.vs $V24, $V3 # with round key w[ 8,11]
+ vaesem.vs $V24, $V4 # with round key w[12,15]
+ vaesem.vs $V24, $V5 # with round key w[16,19]
+ vaesem.vs $V24, $V6 # with round key w[20,23]
+ vaesem.vs $V24, $V7 # with round key w[24,27]
+ vaesem.vs $V24, $V8 # with round key w[28,31]
+ vaesem.vs $V24, $V9 # with round key w[32,35]
+ vaesem.vs $V24, $V10 # with round key w[36,39]
+ vaesem.vs $V24, $V11 # with round key w[40,43]
+ vaesem.vs $V24, $V12 # with round key w[44,47]
+ vaesef.vs $V24, $V13 # with round key w[48,51]
+___
+
+ return $code;
+}
+
+# aes-192 decryption with round keys v1-v13
+sub aes_192_decrypt {
+ my $code=<<___;
+ vaesz.vs $V24, $V13 # with round key w[48,51]
+ vaesdm.vs $V24, $V12 # with round key w[44,47]
+ vaesdm.vs $V24, $V11 # with round key w[40,43]
+ vaesdm.vs $V24, $V10 # with round key w[36,39]
+ vaesdm.vs $V24, $V9 # with round key w[32,35]
+ vaesdm.vs $V24, $V8 # with round key w[28,31]
+ vaesdm.vs $V24, $V7 # with round key w[24,27]
+ vaesdm.vs $V24, $V6 # with round key w[20,23]
+ vaesdm.vs $V24, $V5 # with round key w[16,19]
+ vaesdm.vs $V24, $V4 # with round key w[12,15]
+ vaesdm.vs $V24, $V3 # with round key w[ 8,11]
+ vaesdm.vs $V24, $V2 # with round key w[ 4, 7]
+ vaesdf.vs $V24, $V1 # with round key w[ 0, 3]
+___
+
+ return $code;
+}
+
+# aes-256 encryption with round keys v1-v15
+sub aes_256_encrypt {
+ my $code=<<___;
+ vaesz.vs $V24, $V1 # with round key w[ 0, 3]
+ vaesem.vs $V24, $V2 # with round key w[ 4, 7]
+ vaesem.vs $V24, $V3 # with round key w[ 8,11]
+ vaesem.vs $V24, $V4 # with round key w[12,15]
+ vaesem.vs $V24, $V5 # with round key w[16,19]
+ vaesem.vs $V24, $V6 # with round key w[20,23]
+ vaesem.vs $V24, $V7 # with round key w[24,27]
+ vaesem.vs $V24, $V8 # with round key w[28,31]
+ vaesem.vs $V24, $V9 # with round key w[32,35]
+ vaesem.vs $V24, $V10 # with round key w[36,39]
+ vaesem.vs $V24, $V11 # with round key w[40,43]
+ vaesem.vs $V24, $V12 # with round key w[44,47]
+ vaesem.vs $V24, $V13 # with round key w[48,51]
+ vaesem.vs $V24, $V14 # with round key w[52,55]
+ vaesef.vs $V24, $V15 # with round key w[56,59]
+___
+
+ return $code;
+}
+
+# aes-256 decryption with round keys v1-v15
+sub aes_256_decrypt {
+ my $code=<<___;
+ vaesz.vs $V24, $V15 # with round key w[56,59]
+ vaesdm.vs $V24, $V14 # with round key w[52,55]
+ vaesdm.vs $V24, $V13 # with round key w[48,51]
+ vaesdm.vs $V24, $V12 # with round key w[44,47]
+ vaesdm.vs $V24, $V11 # with round key w[40,43]
+ vaesdm.vs $V24, $V10 # with round key w[36,39]
+ vaesdm.vs $V24, $V9 # with round key w[32,35]
+ vaesdm.vs $V24, $V8 # with round key w[28,31]
+ vaesdm.vs $V24, $V7 # with round key w[24,27]
+ vaesdm.vs $V24, $V6 # with round key w[20,23]
+ vaesdm.vs $V24, $V5 # with round key w[16,19]
+ vaesdm.vs $V24, $V4 # with round key w[12,15]
+ vaesdm.vs $V24, $V3 # with round key w[ 8,11]
+ vaesdm.vs $V24, $V2 # with round key w[ 4, 7]
+ vaesdf.vs $V24, $V1 # with round key w[ 0, 3]
+___
+
+ return $code;
+}
+
+{
+###############################################################################
+# void rv64i_zvkned_cbc_encrypt(const unsigned char *in, unsigned char *out,
+# size_t length, const AES_KEY *key,
+# unsigned char *ivec, const int enc);
+my ($INP, $OUTP, $LEN, $KEYP, $IVP, $ENC) = ("a0", "a1", "a2", "a3", "a4", "a5");
+my ($T0, $T1) = ("t0", "t1");
+
+$code .= <<___;
+.p2align 3
+.globl rv64i_zvkned_cbc_encrypt
+.type rv64i_zvkned_cbc_encrypt,\@function
+rv64i_zvkned_cbc_encrypt:
+ # check whether the length is a multiple of 16 and >= 16
+ li $T1, 16
+ blt $LEN, $T1, L_end
+ andi $T1, $LEN, 15
+ bnez $T1, L_end
+
+ # Load key length.
+ lwu $T0, 480($KEYP)
+
+ # Get proper routine for key length.
+ li $T1, 16
+ beq $T1, $T0, L_cbc_enc_128
+
+ li $T1, 24
+ beq $T1, $T0, L_cbc_enc_192
+
+ li $T1, 32
+ beq $T1, $T0, L_cbc_enc_256
+
+ ret
+.size rv64i_zvkned_cbc_encrypt,.-rv64i_zvkned_cbc_encrypt
+___
+
+$code .= <<___;
+.p2align 3
+L_cbc_enc_128:
+ # Load all 11 round keys to v1-v11 registers.
+ @{[aes_128_load_key $KEYP]}
+
+ # Load IV.
+ vle32.v $V16, ($IVP)
+
+ vle32.v $V24, ($INP)
+ vxor.vv $V24, $V24, $V16
+ j 2f
+
+1:
+ vle32.v $V17, ($INP)
+ vxor.vv $V24, $V24, $V17
+
+2:
+ # AES body
+ @{[aes_128_encrypt]}
+
+ vse32.v $V24, ($OUTP)
+
+ addi $INP, $INP, 16
+ addi $OUTP, $OUTP, 16
+ addi $LEN, $LEN, -16
+
+ bnez $LEN, 1b
+
+ vse32.v $V24, ($IVP)
+
+ ret
+.size L_cbc_enc_128,.-L_cbc_enc_128
+___
+
+$code .= <<___;
+.p2align 3
+L_cbc_enc_192:
+ # Load all 13 round keys to v1-v13 registers.
+ @{[aes_192_load_key $KEYP]}
+
+ # Load IV.
+ vle32.v $V16, ($IVP)
+
+ vle32.v $V24, ($INP)
+ vxor.vv $V24, $V24, $V16
+ j 2f
+
+1:
+ vle32.v $V17, ($INP)
+ vxor.vv $V24, $V24, $V17
+
+2:
+ # AES body
+ @{[aes_192_encrypt]}
+
+ vse32.v $V24, ($OUTP)
+
+ addi $INP, $INP, 16
+ addi $OUTP, $OUTP, 16
+ addi $LEN, $LEN, -16
+
+ bnez $LEN, 1b
+
+ vse32.v $V24, ($IVP)
+
+ ret
+.size L_cbc_enc_192,.-L_cbc_enc_192
+___
+
+$code .= <<___;
+.p2align 3
+L_cbc_enc_256:
+ # Load all 15 round keys to v1-v15 registers.
+ @{[aes_256_load_key $KEYP]}
+
+ # Load IV.
+ vle32.v $V16, ($IVP)
+
+ vle32.v $V24, ($INP)
+ vxor.vv $V24, $V24, $V16
+ j 2f
+
+1:
+ vle32.v $V17, ($INP)
+ vxor.vv $V24, $V24, $V17
+
+2:
+ # AES body
+ @{[aes_256_encrypt]}
+
+ vse32.v $V24, ($OUTP)
+
+ addi $INP, $INP, 16
+ addi $OUTP, $OUTP, 16
+ addi $LEN, $LEN, -16
+
+ bnez $LEN, 1b
+
+ vse32.v $V24, ($IVP)
+
+ ret
+.size L_cbc_enc_256,.-L_cbc_enc_256
+___
+
+###############################################################################
+# void rv64i_zvkned_cbc_decrypt(const unsigned char *in, unsigned char *out,
+# size_t length, const AES_KEY *key,
+# unsigned char *ivec, const int enc);
+$code .= <<___;
+.p2align 3
+.globl rv64i_zvkned_cbc_decrypt
+.type rv64i_zvkned_cbc_decrypt,\@function
+rv64i_zvkned_cbc_decrypt:
+ # check whether the length is a multiple of 16 and >= 16
+ li $T1, 16
+ blt $LEN, $T1, L_end
+ andi $T1, $LEN, 15
+ bnez $T1, L_end
+
+ # Load key length.
+ lwu $T0, 480($KEYP)
+
+ # Get proper routine for key length.
+ li $T1, 16
+ beq $T1, $T0, L_cbc_dec_128
+
+ li $T1, 24
+ beq $T1, $T0, L_cbc_dec_192
+
+ li $T1, 32
+ beq $T1, $T0, L_cbc_dec_256
+
+ ret
+.size rv64i_zvkned_cbc_decrypt,.-rv64i_zvkned_cbc_decrypt
+___
+
+$code .= <<___;
+.p2align 3
+L_cbc_dec_128:
+ # Load all 11 round keys to v1-v11 registers.
+ @{[aes_128_load_key $KEYP]}
+
+ # Load IV.
+ vle32.v $V16, ($IVP)
+
+ vle32.v $V24, ($INP)
+ vmv.v.v $V17, $V24
+ j 2f
+
+1:
+ vle32.v $V24, ($INP)
+ vmv.v.v $V17, $V24
+ addi $OUTP, $OUTP, 16
+
+2:
+ # AES body
+ @{[aes_128_decrypt]}
+
+ vxor.vv $V24, $V24, $V16
+ vse32.v $V24, ($OUTP)
+ vmv.v.v $V16, $V17
+
+ addi $LEN, $LEN, -16
+ addi $INP, $INP, 16
+
+ bnez $LEN, 1b
+
+ vse32.v $V16, ($IVP)
+
+ ret
+.size L_cbc_dec_128,.-L_cbc_dec_128
+___
+
+$code .= <<___;
+.p2align 3
+L_cbc_dec_192:
+ # Load all 13 round keys to v1-v13 registers.
+ @{[aes_192_load_key $KEYP]}
+
+ # Load IV.
+ vle32.v $V16, ($IVP)
+
+ vle32.v $V24, ($INP)
+ vmv.v.v $V17, $V24
+ j 2f
+
+1:
+ vle32.v $V24, ($INP)
+ vmv.v.v $V17, $V24
+ addi $OUTP, $OUTP, 16
+
+2:
+ # AES body
+ @{[aes_192_decrypt]}
+
+ vxor.vv $V24, $V24, $V16
+ vse32.v $V24, ($OUTP)
+ vmv.v.v $V16, $V17
+
+ addi $LEN, $LEN, -16
+ addi $INP, $INP, 16
+
+ bnez $LEN, 1b
+
+ vse32.v $V16, ($IVP)
+
+ ret
+.size L_cbc_dec_192,.-L_cbc_dec_192
+___
+
+$code .= <<___;
+.p2align 3
+L_cbc_dec_256:
+ # Load all 15 round keys to v1-v15 registers.
+ @{[aes_256_load_key $KEYP]}
+
+ # Load IV.
+ vle32.v $V16, ($IVP)
+
+ vle32.v $V24, ($INP)
+ vmv.v.v $V17, $V24
+ j 2f
+
+1:
+ vle32.v $V24, ($INP)
+ vmv.v.v $V17, $V24
+ addi $OUTP, $OUTP, 16
+
+2:
+ # AES body
+ @{[aes_256_decrypt]}
+
+ vxor.vv $V24, $V24, $V16
+ vse32.v $V24, ($OUTP)
+ vmv.v.v $V16, $V17
+
+ addi $LEN, $LEN, -16
+ addi $INP, $INP, 16
+
+ bnez $LEN, 1b
+
+ vse32.v $V16, ($IVP)
+
+ ret
+.size L_cbc_dec_256,.-L_cbc_dec_256
+___
+}
+
+{
+###############################################################################
+# void rv64i_zvkned_ecb_encrypt(const unsigned char *in, unsigned char *out,
+# size_t length, const AES_KEY *key,
+# const int enc);
+my ($INP, $OUTP, $LEN, $KEYP, $ENC) = ("a0", "a1", "a2", "a3", "a4");
+my ($VL) = ("a5");
+my ($LEN32) = ("a6");
+my ($T0, $T1) = ("t0", "t1");
+
+$code .= <<___;
+.p2align 3
+.globl rv64i_zvkned_ecb_encrypt
+.type rv64i_zvkned_ecb_encrypt,\@function
+rv64i_zvkned_ecb_encrypt:
+    # Convert LEN into the number of 32-bit (e32) elements.
+ srli $LEN32, $LEN, 2
+
+ # Load key length.
+ lwu $T0, 480($KEYP)
+
+ # Get proper routine for key length.
+ li $T1, 16
+ beq $T1, $T0, L_ecb_enc_128
+
+ li $T1, 24
+ beq $T1, $T0, L_ecb_enc_192
+
+ li $T1, 32
+ beq $T1, $T0, L_ecb_enc_256
+
+ ret
+.size rv64i_zvkned_ecb_encrypt,.-rv64i_zvkned_ecb_encrypt
+___
+
+$code .= <<___;
+.p2align 3
+L_ecb_enc_128:
+ # Load all 11 round keys to v1-v11 registers.
+ @{[aes_128_load_key $KEYP]}
+
+1:
+ vsetvli $VL, $LEN32, e32, m4, ta, ma
+ slli $T0, $VL, 2
+ sub $LEN32, $LEN32, $VL
+
+ vle32.v $V24, ($INP)
+
+ # AES body
+ @{[aes_128_encrypt]}
+
+ vse32.v $V24, ($OUTP)
+
+ add $INP, $INP, $T0
+ add $OUTP, $OUTP, $T0
+
+ bnez $LEN32, 1b
+
+ ret
+.size L_ecb_enc_128,.-L_ecb_enc_128
+___
+
+$code .= <<___;
+.p2align 3
+L_ecb_enc_192:
+ # Load all 13 round keys to v1-v13 registers.
+ @{[aes_192_load_key $KEYP]}
+
+1:
+ vsetvli $VL, $LEN32, e32, m4, ta, ma
+ slli $T0, $VL, 2
+ sub $LEN32, $LEN32, $VL
+
+ vle32.v $V24, ($INP)
+
+ # AES body
+ @{[aes_192_encrypt]}
+
+ vse32.v $V24, ($OUTP)
+
+ add $INP, $INP, $T0
+ add $OUTP, $OUTP, $T0
+
+ bnez $LEN32, 1b
+
+ ret
+.size L_ecb_enc_192,.-L_ecb_enc_192
+___
+
+$code .= <<___;
+.p2align 3
+L_ecb_enc_256:
+ # Load all 15 round keys to v1-v15 registers.
+ @{[aes_256_load_key $KEYP]}
+
+1:
+ vsetvli $VL, $LEN32, e32, m4, ta, ma
+ slli $T0, $VL, 2
+ sub $LEN32, $LEN32, $VL
+
+ vle32.v $V24, ($INP)
+
+ # AES body
+ @{[aes_256_encrypt]}
+
+ vse32.v $V24, ($OUTP)
+
+ add $INP, $INP, $T0
+ add $OUTP, $OUTP, $T0
+
+ bnez $LEN32, 1b
+
+ ret
+.size L_ecb_enc_256,.-L_ecb_enc_256
+___
+
+###############################################################################
+# void rv64i_zvkned_ecb_decrypt(const unsigned char *in, unsigned char *out,
+# size_t length, const AES_KEY *key,
+# const int enc);
+$code .= <<___;
+.p2align 3
+.globl rv64i_zvkned_ecb_decrypt
+.type rv64i_zvkned_ecb_decrypt,\@function
+rv64i_zvkned_ecb_decrypt:
+    # Convert LEN into the number of 32-bit (e32) elements.
+ srli $LEN32, $LEN, 2
+
+ # Load key length.
+ lwu $T0, 480($KEYP)
+
+ # Get proper routine for key length.
+ li $T1, 16
+ beq $T1, $T0, L_ecb_dec_128
+
+ li $T1, 24
+ beq $T1, $T0, L_ecb_dec_192
+
+ li $T1, 32
+ beq $T1, $T0, L_ecb_dec_256
+
+ ret
+.size rv64i_zvkned_ecb_decrypt,.-rv64i_zvkned_ecb_decrypt
+___
+
+$code .= <<___;
+.p2align 3
+L_ecb_dec_128:
+ # Load all 11 round keys to v1-v11 registers.
+ @{[aes_128_load_key $KEYP]}
+
+1:
+ vsetvli $VL, $LEN32, e32, m4, ta, ma
+ slli $T0, $VL, 2
+ sub $LEN32, $LEN32, $VL
+
+ vle32.v $V24, ($INP)
+
+ # AES body
+ @{[aes_128_decrypt]}
+
+ vse32.v $V24, ($OUTP)
+
+ add $INP, $INP, $T0
+ add $OUTP, $OUTP, $T0
+
+ bnez $LEN32, 1b
+
+ ret
+.size L_ecb_dec_128,.-L_ecb_dec_128
+___
+
+$code .= <<___;
+.p2align 3
+L_ecb_dec_192:
+ # Load all 13 round keys to v1-v13 registers.
+ @{[aes_192_load_key $KEYP]}
+
+1:
+ vsetvli $VL, $LEN32, e32, m4, ta, ma
+ slli $T0, $VL, 2
+ sub $LEN32, $LEN32, $VL
+
+ vle32.v $V24, ($INP)
+
+ # AES body
+ @{[aes_192_decrypt]}
+
+ vse32.v $V24, ($OUTP)
+
+ add $INP, $INP, $T0
+ add $OUTP, $OUTP, $T0
+
+ bnez $LEN32, 1b
+
+ ret
+.size L_ecb_dec_192,.-L_ecb_dec_192
+___
+
+$code .= <<___;
+.p2align 3
+L_ecb_dec_256:
+ # Load all 15 round keys to v1-v15 registers.
+ @{[aes_256_load_key $KEYP]}
+
+1:
+ vsetvli $VL, $LEN32, e32, m4, ta, ma
+ slli $T0, $VL, 2
+ sub $LEN32, $LEN32, $VL
+
+ vle32.v $V24, ($INP)
+
+ # AES body
+ @{[aes_256_decrypt]}
+
+ vse32.v $V24, ($OUTP)
+
+ add $INP, $INP, $T0
+ add $OUTP, $OUTP, $T0
+
+ bnez $LEN32, 1b
+
+ ret
+.size L_ecb_dec_256,.-L_ecb_dec_256
+___
+}
+
{
################################################################################
# void rv64i_zvkned_encrypt(const unsigned char *in, unsigned char *out,
--
2.28.0
Add a GCM GHASH implementation using the Zvkg extension, based on the code
from OpenSSL (openssl/openssl#21923).
The perlasm here differs from the original OpenSSL implementation: OpenSSL
assumes that H is stored in little-endian form and therefore has to convert
it to big-endian for the Zvkg instructions. In the kernel, H is already
big-endian, so no endian conversion is needed.
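For illustration only (not part of this patch), the sketch below shows the
key handling difference described above; the helper names are made up. In
the kernel the 16 GHASH key bytes arrive big-endian and can be copied into
a be128 verbatim, whereas a little-endian H (as in the original OpenSSL
routine) would first need a byte swap:

	#include <asm/byteorder.h>
	#include <crypto/b128ops.h>
	#include <linux/string.h>
	#include <linux/types.h>

	/* Kernel case: the key bytes are already big-endian. */
	static void ghash_set_h_be(be128 *h, const u8 key[16])
	{
		memcpy(h, key, 16);
	}

	/*
	 * OpenSSL-style case: H held as two little-endian 64-bit halves,
	 * h_le[0] = low half, h_le[1] = high half.
	 */
	static void ghash_set_h_from_le(be128 *h, const u64 h_le[2])
	{
		h->a = cpu_to_be64(h_le[1]);
		h->b = cpu_to_be64(h_le[0]);
	}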
Co-developed-by: Christoph Müllner <[email protected]>
Signed-off-by: Christoph Müllner <[email protected]>
Co-developed-by: Heiko Stuebner <[email protected]>
Signed-off-by: Heiko Stuebner <[email protected]>
Signed-off-by: Jerry Shih <[email protected]>
---
Changelog v4:
- Use asm mnemonics for the instructions in vector crypto 1.0 extension.
Changelog v3:
- Use asm mnemonics for the instructions in RVV 1.0 extension.
Changelog v2:
- Do not turn on kconfig `GHASH_RISCV64` option by default.
- Add `asmlinkage` qualifier for crypto asm function.
- Update the ghash fallback path in ghash_blocks().
- Rename structure riscv64_ghash_context to riscv64_ghash_tfm_ctx.
- Fold ghash_update_zvkg() and ghash_final_zvkg().
- Reorder structure riscv64_ghash_alg_zvkg members initialization in the
order declared.
---
arch/riscv/crypto/Kconfig | 10 ++
arch/riscv/crypto/Makefile | 7 +
arch/riscv/crypto/ghash-riscv64-glue.c | 175 ++++++++++++++++++++++++
arch/riscv/crypto/ghash-riscv64-zvkg.pl | 100 ++++++++++++++
4 files changed, 292 insertions(+)
create mode 100644 arch/riscv/crypto/ghash-riscv64-glue.c
create mode 100644 arch/riscv/crypto/ghash-riscv64-zvkg.pl
diff --git a/arch/riscv/crypto/Kconfig b/arch/riscv/crypto/Kconfig
index 2cee0f68f0c7..d73b89ceb1a3 100644
--- a/arch/riscv/crypto/Kconfig
+++ b/arch/riscv/crypto/Kconfig
@@ -34,4 +34,14 @@ config CRYPTO_AES_BLOCK_RISCV64
- Zvkb vector crypto extension (CTR/XTS)
- Zvkg vector crypto extension (XTS)
+config CRYPTO_GHASH_RISCV64
+ tristate "Hash functions: GHASH"
+ depends on 64BIT && RISCV_ISA_V && TOOLCHAIN_HAS_VECTOR_CRYPTO
+ select CRYPTO_GCM
+ help
+ GCM GHASH function (NIST SP 800-38D)
+
+ Architecture: riscv64 using:
+ - Zvkg vector crypto extension
+
endmenu
diff --git a/arch/riscv/crypto/Makefile b/arch/riscv/crypto/Makefile
index 9574b009762f..94a7f8eaa8a7 100644
--- a/arch/riscv/crypto/Makefile
+++ b/arch/riscv/crypto/Makefile
@@ -9,6 +9,9 @@ aes-riscv64-y := aes-riscv64-glue.o aes-riscv64-zvkned.o
obj-$(CONFIG_CRYPTO_AES_BLOCK_RISCV64) += aes-block-riscv64.o
aes-block-riscv64-y := aes-riscv64-block-mode-glue.o aes-riscv64-zvkned-zvbb-zvkg.o aes-riscv64-zvkned-zvkb.o
+obj-$(CONFIG_CRYPTO_GHASH_RISCV64) += ghash-riscv64.o
+ghash-riscv64-y := ghash-riscv64-glue.o ghash-riscv64-zvkg.o
+
quiet_cmd_perlasm = PERLASM $@
cmd_perlasm = $(PERL) $(<) void $(@)
@@ -21,6 +24,10 @@ $(obj)/aes-riscv64-zvkned-zvbb-zvkg.S: $(src)/aes-riscv64-zvkned-zvbb-zvkg.pl
$(obj)/aes-riscv64-zvkned-zvkb.S: $(src)/aes-riscv64-zvkned-zvkb.pl
$(call cmd,perlasm)
+$(obj)/ghash-riscv64-zvkg.S: $(src)/ghash-riscv64-zvkg.pl
+ $(call cmd,perlasm)
+
clean-files += aes-riscv64-zvkned.S
clean-files += aes-riscv64-zvkned-zvbb-zvkg.S
clean-files += aes-riscv64-zvkned-zvkb.S
+clean-files += ghash-riscv64-zvkg.S
diff --git a/arch/riscv/crypto/ghash-riscv64-glue.c b/arch/riscv/crypto/ghash-riscv64-glue.c
new file mode 100644
index 000000000000..b01ab5714677
--- /dev/null
+++ b/arch/riscv/crypto/ghash-riscv64-glue.c
@@ -0,0 +1,175 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * RISC-V optimized GHASH routines
+ *
+ * Copyright (C) 2023 VRULL GmbH
+ * Author: Heiko Stuebner <[email protected]>
+ *
+ * Copyright (C) 2023 SiFive, Inc.
+ * Author: Jerry Shih <[email protected]>
+ */
+
+#include <asm/simd.h>
+#include <asm/vector.h>
+#include <crypto/ghash.h>
+#include <crypto/internal/hash.h>
+#include <crypto/internal/simd.h>
+#include <linux/crypto.h>
+#include <linux/linkage.h>
+#include <linux/module.h>
+#include <linux/types.h>
+
+/* ghash using zvkg vector crypto extension */
+asmlinkage void gcm_ghash_rv64i_zvkg(be128 *Xi, const be128 *H, const u8 *inp,
+ size_t len);
+
+struct riscv64_ghash_tfm_ctx {
+ be128 key;
+};
+
+struct riscv64_ghash_desc_ctx {
+ be128 shash;
+ u8 buffer[GHASH_BLOCK_SIZE];
+ u32 bytes;
+};
+
+static inline void ghash_blocks(const struct riscv64_ghash_tfm_ctx *tctx,
+ struct riscv64_ghash_desc_ctx *dctx,
+ const u8 *src, size_t srclen)
+{
+ /* The srclen is nonzero and a multiple of 16. */
+ if (crypto_simd_usable()) {
+ kernel_vector_begin();
+ gcm_ghash_rv64i_zvkg(&dctx->shash, &tctx->key, src, srclen);
+ kernel_vector_end();
+ } else {
+ do {
+ crypto_xor((u8 *)&dctx->shash, src, GHASH_BLOCK_SIZE);
+ gf128mul_lle(&dctx->shash, &tctx->key);
+ srclen -= GHASH_BLOCK_SIZE;
+ src += GHASH_BLOCK_SIZE;
+ } while (srclen);
+ }
+}
+
+static int ghash_init(struct shash_desc *desc)
+{
+ struct riscv64_ghash_desc_ctx *dctx = shash_desc_ctx(desc);
+
+ *dctx = (struct riscv64_ghash_desc_ctx){};
+
+ return 0;
+}
+
+static int ghash_update_zvkg(struct shash_desc *desc, const u8 *src,
+ unsigned int srclen)
+{
+ size_t len;
+ const struct riscv64_ghash_tfm_ctx *tctx = crypto_shash_ctx(desc->tfm);
+ struct riscv64_ghash_desc_ctx *dctx = shash_desc_ctx(desc);
+
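+	/* Deal with a partially filled buffer from a previous update first. */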
+ if (dctx->bytes) {
+ if (dctx->bytes + srclen < GHASH_BLOCK_SIZE) {
+ memcpy(dctx->buffer + dctx->bytes, src, srclen);
+ dctx->bytes += srclen;
+ return 0;
+ }
+ memcpy(dctx->buffer + dctx->bytes, src,
+ GHASH_BLOCK_SIZE - dctx->bytes);
+
+ ghash_blocks(tctx, dctx, dctx->buffer, GHASH_BLOCK_SIZE);
+
+ src += GHASH_BLOCK_SIZE - dctx->bytes;
+ srclen -= GHASH_BLOCK_SIZE - dctx->bytes;
+ dctx->bytes = 0;
+ }
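+	/* Hash as many full blocks as possible directly from the source. */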
+ len = srclen & ~(GHASH_BLOCK_SIZE - 1);
+
+ if (len) {
+ ghash_blocks(tctx, dctx, src, len);
+ src += len;
+ srclen -= len;
+ }
+
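+	/* Buffer any remaining partial block for a later update/final. */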
+ if (srclen) {
+ memcpy(dctx->buffer, src, srclen);
+ dctx->bytes = srclen;
+ }
+
+ return 0;
+}
+
+static int ghash_final_zvkg(struct shash_desc *desc, u8 *out)
+{
+ const struct riscv64_ghash_tfm_ctx *tctx = crypto_shash_ctx(desc->tfm);
+ struct riscv64_ghash_desc_ctx *dctx = shash_desc_ctx(desc);
+ int i;
+
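+	/* Zero-pad and hash the final partial block, if any. */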
+ if (dctx->bytes) {
+ for (i = dctx->bytes; i < GHASH_BLOCK_SIZE; i++)
+ dctx->buffer[i] = 0;
+
+ ghash_blocks(tctx, dctx, dctx->buffer, GHASH_BLOCK_SIZE);
+ }
+
+ memcpy(out, &dctx->shash, GHASH_DIGEST_SIZE);
+
+ return 0;
+}
+
+static int ghash_setkey(struct crypto_shash *tfm, const u8 *key,
+ unsigned int keylen)
+{
+ struct riscv64_ghash_tfm_ctx *tctx = crypto_shash_ctx(tfm);
+
+ if (keylen != GHASH_BLOCK_SIZE)
+ return -EINVAL;
+
+ memcpy(&tctx->key, key, GHASH_BLOCK_SIZE);
+
+ return 0;
+}
+
+static struct shash_alg riscv64_ghash_alg_zvkg = {
+ .init = ghash_init,
+ .update = ghash_update_zvkg,
+ .final = ghash_final_zvkg,
+ .setkey = ghash_setkey,
+ .descsize = sizeof(struct riscv64_ghash_desc_ctx),
+ .digestsize = GHASH_DIGEST_SIZE,
+ .base = {
+ .cra_blocksize = GHASH_BLOCK_SIZE,
+ .cra_ctxsize = sizeof(struct riscv64_ghash_tfm_ctx),
+ .cra_priority = 303,
+ .cra_name = "ghash",
+ .cra_driver_name = "ghash-riscv64-zvkg",
+ .cra_module = THIS_MODULE,
+ },
+};
+
+static inline bool check_ghash_ext(void)
+{
+ return riscv_isa_extension_available(NULL, ZVKG) &&
+ riscv_vector_vlen() >= 128;
+}
+
+static int __init riscv64_ghash_mod_init(void)
+{
+ if (check_ghash_ext())
+ return crypto_register_shash(&riscv64_ghash_alg_zvkg);
+
+ return -ENODEV;
+}
+
+static void __exit riscv64_ghash_mod_fini(void)
+{
+ crypto_unregister_shash(&riscv64_ghash_alg_zvkg);
+}
+
+module_init(riscv64_ghash_mod_init);
+module_exit(riscv64_ghash_mod_fini);
+
+MODULE_DESCRIPTION("GCM GHASH (RISC-V accelerated)");
+MODULE_AUTHOR("Heiko Stuebner <[email protected]>");
+MODULE_LICENSE("GPL");
+MODULE_ALIAS_CRYPTO("ghash");
diff --git a/arch/riscv/crypto/ghash-riscv64-zvkg.pl b/arch/riscv/crypto/ghash-riscv64-zvkg.pl
new file mode 100644
index 000000000000..f18824496573
--- /dev/null
+++ b/arch/riscv/crypto/ghash-riscv64-zvkg.pl
@@ -0,0 +1,100 @@
+#! /usr/bin/env perl
+# SPDX-License-Identifier: Apache-2.0 OR BSD-2-Clause
+#
+# This file is dual-licensed, meaning that you can use it under your
+# choice of either of the following two licenses:
+#
+# Copyright 2023 The OpenSSL Project Authors. All Rights Reserved.
+#
+# Licensed under the Apache License 2.0 (the "License"). You can obtain
+# a copy in the file LICENSE in the source distribution or at
+# https://www.openssl.org/source/license.html
+#
+# or
+#
+# Copyright (c) 2023, Christoph Müllner <[email protected]>
+# Copyright (c) 2023, Jerry Shih <[email protected]>
+# All rights reserved.
+#
+# Redistribution and use in source and binary forms, with or without
+# modification, are permitted provided that the following conditions
+# are met:
+# 1. Redistributions of source code must retain the above copyright
+# notice, this list of conditions and the following disclaimer.
+# 2. Redistributions in binary form must reproduce the above copyright
+# notice, this list of conditions and the following disclaimer in the
+# documentation and/or other materials provided with the distribution.
+#
+# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+# - RV64I
+# - RISC-V Vector ('V') with VLEN >= 128
+# - RISC-V Vector GCM/GMAC extension ('Zvkg')
+
+use strict;
+use warnings;
+
+use FindBin qw($Bin);
+use lib "$Bin";
+use lib "$Bin/../../perlasm";
+
+# $output is the last argument if it looks like a file (it has an extension)
+# $flavour is the first argument if it doesn't look like a file
+my $output = $#ARGV >= 0 && $ARGV[$#ARGV] =~ m|\.\w+$| ? pop : undef;
+my $flavour = $#ARGV >= 0 && $ARGV[0] !~ m|\.| ? shift : undef;
+
+$output and open STDOUT,">$output";
+
+my $code=<<___;
+.text
+.option arch, +zvkg
+___
+
+###############################################################################
+# void gcm_ghash_rv64i_zvkg(be128 *Xi, const be128 *H, const u8 *inp, size_t len)
+#
+# input: Xi: current hash value
+# H: hash key
+# inp: pointer to input data
+# len: length of input data in bytes (multiple of block size)
+# output: Xi: Xi+1 (next hash value Xi)
+{
+my ($Xi,$H,$inp,$len) = ("a0","a1","a2","a3");
+my ($vXi,$vH,$vinp,$Vzero) = ("v1","v2","v3","v4");
+
+$code .= <<___;
+.p2align 3
+.globl gcm_ghash_rv64i_zvkg
+.type gcm_ghash_rv64i_zvkg,\@function
+gcm_ghash_rv64i_zvkg:
+ vsetivli zero, 4, e32, m1, ta, ma
+ vle32.v $vH, ($H)
+ vle32.v $vXi, ($Xi)
+
+Lstep:
+ vle32.v $vinp, ($inp)
+ add $inp, $inp, 16
+ add $len, $len, -16
+ vghsh.vv $vXi, $vH, $vinp
+ bnez $len, Lstep
+
+ vse32.v $vXi, ($Xi)
+ ret
+
+.size gcm_ghash_rv64i_zvkg,.-gcm_ghash_rv64i_zvkg
+___
+}
+
+print $code;
+
+close STDOUT or die "error closing STDOUT: $!";
--
2.28.0
Add SHA-384 and SHA-512 implementations using the Zvknhb vector crypto
extension, based on the code from OpenSSL (openssl/openssl#21923).
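For illustration only (not part of this patch), the sketch below shows how
the registered "sha384" algorithm can be exercised through the kernel shash
API; the function name is made up and error handling is kept minimal:

	#include <crypto/hash.h>
	#include <crypto/sha2.h>
	#include <linux/err.h>

	static int sha384_digest_example(const u8 *data, unsigned int len,
					 u8 out[SHA384_DIGEST_SIZE])
	{
		struct crypto_shash *tfm;
		int err;

		/* Picks the highest-priority "sha384" provider. */
		tfm = crypto_alloc_shash("sha384", 0, 0);
		if (IS_ERR(tfm))
			return PTR_ERR(tfm);

		{
			SHASH_DESC_ON_STACK(desc, tfm);

			desc->tfm = tfm;
			err = crypto_shash_digest(desc, data, len, out);
		}

		crypto_free_shash(tfm);
		return err;
	}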
Co-developed-by: Charalampos Mitrodimas <[email protected]>
Signed-off-by: Charalampos Mitrodimas <[email protected]>
Co-developed-by: Heiko Stuebner <[email protected]>
Signed-off-by: Heiko Stuebner <[email protected]>
Co-developed-by: Phoebe Chen <[email protected]>
Signed-off-by: Phoebe Chen <[email protected]>
Signed-off-by: Jerry Shih <[email protected]>
---
Changelog v4:
- Use asm mnemonics for the instructions in vector crypto 1.0 extension.
Changelog v3:
- Use `SYM_TYPED_FUNC_START` for sha512 indirect-call asm symbol.
- Use asm mnemonics for the instructions in RVV 1.0 extension.
Changelog v2:
- Do not turn on kconfig `SHA512_RISCV64` option by default.
- Add `asmlinkage` qualifier for crypto asm function.
- Rename sha512-riscv64-zvkb-zvknhb to sha512-riscv64-zvknhb-zvkb.
- Reorder structure sha512_algs members initialization in the order
declared.
---
arch/riscv/crypto/Kconfig | 11 +
arch/riscv/crypto/Makefile | 7 +
arch/riscv/crypto/sha512-riscv64-glue.c | 139 +++++++++
.../crypto/sha512-riscv64-zvknhb-zvkb.pl | 265 ++++++++++++++++++
4 files changed, 422 insertions(+)
create mode 100644 arch/riscv/crypto/sha512-riscv64-glue.c
create mode 100644 arch/riscv/crypto/sha512-riscv64-zvknhb-zvkb.pl
diff --git a/arch/riscv/crypto/Kconfig b/arch/riscv/crypto/Kconfig
index ff1dce4a2bcc..1604782c0eed 100644
--- a/arch/riscv/crypto/Kconfig
+++ b/arch/riscv/crypto/Kconfig
@@ -55,4 +55,15 @@ config CRYPTO_SHA256_RISCV64
- Zvknha or Zvknhb vector crypto extensions
- Zvkb vector crypto extension
+config CRYPTO_SHA512_RISCV64
+ tristate "Hash functions: SHA-384 and SHA-512"
+ depends on 64BIT && RISCV_ISA_V && TOOLCHAIN_HAS_VECTOR_CRYPTO
+ select CRYPTO_SHA512
+ help
+ SHA-384 and SHA-512 secure hash algorithm (FIPS 180)
+
+ Architecture: riscv64 using:
+ - Zvknhb vector crypto extension
+ - Zvkb vector crypto extension
+
endmenu
diff --git a/arch/riscv/crypto/Makefile b/arch/riscv/crypto/Makefile
index e9d7717ec943..8aabef950ad3 100644
--- a/arch/riscv/crypto/Makefile
+++ b/arch/riscv/crypto/Makefile
@@ -15,6 +15,9 @@ ghash-riscv64-y := ghash-riscv64-glue.o ghash-riscv64-zvkg.o
obj-$(CONFIG_CRYPTO_SHA256_RISCV64) += sha256-riscv64.o
sha256-riscv64-y := sha256-riscv64-glue.o sha256-riscv64-zvknha_or_zvknhb-zvkb.o
+obj-$(CONFIG_CRYPTO_SHA512_RISCV64) += sha512-riscv64.o
+sha512-riscv64-y := sha512-riscv64-glue.o sha512-riscv64-zvknhb-zvkb.o
+
quiet_cmd_perlasm = PERLASM $@
cmd_perlasm = $(PERL) $(<) void $(@)
@@ -33,8 +36,12 @@ $(obj)/ghash-riscv64-zvkg.S: $(src)/ghash-riscv64-zvkg.pl
$(obj)/sha256-riscv64-zvknha_or_zvknhb-zvkb.S: $(src)/sha256-riscv64-zvknha_or_zvknhb-zvkb.pl
$(call cmd,perlasm)
+$(obj)/sha512-riscv64-zvknhb-zvkb.S: $(src)/sha512-riscv64-zvknhb-zvkb.pl
+ $(call cmd,perlasm)
+
clean-files += aes-riscv64-zvkned.S
clean-files += aes-riscv64-zvkned-zvbb-zvkg.S
clean-files += aes-riscv64-zvkned-zvkb.S
clean-files += ghash-riscv64-zvkg.S
clean-files += sha256-riscv64-zvknha_or_zvknhb-zvkb.S
+clean-files += sha512-riscv64-zvknhb-zvkb.S
diff --git a/arch/riscv/crypto/sha512-riscv64-glue.c b/arch/riscv/crypto/sha512-riscv64-glue.c
new file mode 100644
index 000000000000..3dd8e1c9d402
--- /dev/null
+++ b/arch/riscv/crypto/sha512-riscv64-glue.c
@@ -0,0 +1,139 @@
+// SPDX-License-Identifier: GPL-2.0-or-later
+/*
+ * Linux/riscv64 port of the OpenSSL SHA512 implementation for RISC-V 64
+ *
+ * Copyright (C) 2023 VRULL GmbH
+ * Author: Heiko Stuebner <[email protected]>
+ *
+ * Copyright (C) 2023 SiFive, Inc.
+ * Author: Jerry Shih <[email protected]>
+ */
+
+#include <asm/simd.h>
+#include <asm/vector.h>
+#include <linux/linkage.h>
+#include <linux/module.h>
+#include <linux/types.h>
+#include <crypto/internal/hash.h>
+#include <crypto/internal/simd.h>
+#include <crypto/sha512_base.h>
+
+/*
+ * sha512 using zvkb and zvknhb vector crypto extension
+ *
+ * The asm function uses only the first 512 bits of the passed
+ * `struct sha512_state`, i.e. the SHA-512 internal state.
+ */
+asmlinkage void sha512_block_data_order_zvkb_zvknhb(struct sha512_state *digest,
+ const u8 *data,
+ int num_blks);
+
+static int riscv64_sha512_update(struct shash_desc *desc, const u8 *data,
+ unsigned int len)
+{
+ int ret = 0;
+
+ /*
+ * Make sure struct sha512_state begins directly with the SHA512
+	 * Make sure struct sha512_state begins directly with the SHA-512
+	 * 512-bit internal state, as this is what the asm function expects.
+ BUILD_BUG_ON(offsetof(struct sha512_state, state) != 0);
+
+ if (crypto_simd_usable()) {
+ kernel_vector_begin();
+ ret = sha512_base_do_update(
+ desc, data, len, sha512_block_data_order_zvkb_zvknhb);
+ kernel_vector_end();
+ } else {
+ ret = crypto_sha512_update(desc, data, len);
+ }
+
+ return ret;
+}
+
+static int riscv64_sha512_finup(struct shash_desc *desc, const u8 *data,
+ unsigned int len, u8 *out)
+{
+ if (crypto_simd_usable()) {
+ kernel_vector_begin();
+ if (len)
+ sha512_base_do_update(
+ desc, data, len,
+ sha512_block_data_order_zvkb_zvknhb);
+ sha512_base_do_finalize(desc,
+ sha512_block_data_order_zvkb_zvknhb);
+ kernel_vector_end();
+
+ return sha512_base_finish(desc, out);
+ }
+
+ return crypto_sha512_finup(desc, data, len, out);
+}
+
+static int riscv64_sha512_final(struct shash_desc *desc, u8 *out)
+{
+ return riscv64_sha512_finup(desc, NULL, 0, out);
+}
+
+static struct shash_alg sha512_algs[] = {
+ {
+ .init = sha512_base_init,
+ .update = riscv64_sha512_update,
+ .final = riscv64_sha512_final,
+ .finup = riscv64_sha512_finup,
+ .descsize = sizeof(struct sha512_state),
+ .digestsize = SHA512_DIGEST_SIZE,
+ .base = {
+ .cra_blocksize = SHA512_BLOCK_SIZE,
+ .cra_priority = 150,
+ .cra_name = "sha512",
+ .cra_driver_name = "sha512-riscv64-zvknhb-zvkb",
+ .cra_module = THIS_MODULE,
+ },
+ },
+ {
+ .init = sha384_base_init,
+ .update = riscv64_sha512_update,
+ .final = riscv64_sha512_final,
+ .finup = riscv64_sha512_finup,
+ .descsize = sizeof(struct sha512_state),
+ .digestsize = SHA384_DIGEST_SIZE,
+ .base = {
+ .cra_blocksize = SHA384_BLOCK_SIZE,
+ .cra_priority = 150,
+ .cra_name = "sha384",
+ .cra_driver_name = "sha384-riscv64-zvknhb-zvkb",
+ .cra_module = THIS_MODULE,
+ },
+ },
+};
+
+static inline bool check_sha512_ext(void)
+{
+ return riscv_isa_extension_available(NULL, ZVKNHB) &&
+ riscv_isa_extension_available(NULL, ZVKB) &&
+ riscv_vector_vlen() >= 128;
+}
+
+static int __init riscv64_sha512_mod_init(void)
+{
+ if (check_sha512_ext())
+ return crypto_register_shashes(sha512_algs,
+ ARRAY_SIZE(sha512_algs));
+
+ return -ENODEV;
+}
+
+static void __exit riscv64_sha512_mod_fini(void)
+{
+ crypto_unregister_shashes(sha512_algs, ARRAY_SIZE(sha512_algs));
+}
+
+module_init(riscv64_sha512_mod_init);
+module_exit(riscv64_sha512_mod_fini);
+
+MODULE_DESCRIPTION("SHA-512 (RISC-V accelerated)");
+MODULE_AUTHOR("Heiko Stuebner <[email protected]>");
+MODULE_LICENSE("GPL");
+MODULE_ALIAS_CRYPTO("sha384");
+MODULE_ALIAS_CRYPTO("sha512");
diff --git a/arch/riscv/crypto/sha512-riscv64-zvknhb-zvkb.pl b/arch/riscv/crypto/sha512-riscv64-zvknhb-zvkb.pl
new file mode 100644
index 000000000000..cab46ccd1fe2
--- /dev/null
+++ b/arch/riscv/crypto/sha512-riscv64-zvknhb-zvkb.pl
@@ -0,0 +1,265 @@
+#! /usr/bin/env perl
+# SPDX-License-Identifier: Apache-2.0 OR BSD-2-Clause
+#
+# This file is dual-licensed, meaning that you can use it under your
+# choice of either of the following two licenses:
+#
+# Copyright 2023 The OpenSSL Project Authors. All Rights Reserved.
+#
+# Licensed under the Apache License 2.0 (the "License"). You can obtain
+# a copy in the file LICENSE in the source distribution or at
+# https://www.openssl.org/source/license.html
+#
+# or
+#
+# Copyright (c) 2023, Christoph Müllner <[email protected]>
+# Copyright (c) 2023, Phoebe Chen <[email protected]>
+# All rights reserved.
+#
+# Redistribution and use in source and binary forms, with or without
+# modification, are permitted provided that the following conditions
+# are met:
+# 1. Redistributions of source code must retain the above copyright
+# notice, this list of conditions and the following disclaimer.
+# 2. Redistributions in binary form must reproduce the above copyright
+# notice, this list of conditions and the following disclaimer in the
+# documentation and/or other materials provided with the distribution.
+#
+# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+# The generated code of this file depends on the following RISC-V extensions:
+# - RV64I
+# - RISC-V vector ('V') with VLEN >= 128
+# - RISC-V Vector SHA-2 Secure Hash extension ('Zvknhb')
+# - RISC-V Vector Cryptography Bit-manipulation extension ('Zvkb')
+
+use strict;
+use warnings;
+
+use FindBin qw($Bin);
+use lib "$Bin";
+use lib "$Bin/../../perlasm";
+
+# $output is the last argument if it looks like a file (it has an extension)
+# $flavour is the first argument if it doesn't look like a file
+my $output = $#ARGV >= 0 && $ARGV[$#ARGV] =~ m|\.\w+$| ? pop : undef;
+my $flavour = $#ARGV >= 0 && $ARGV[0] !~ m|\.| ? shift : undef;
+
+$output and open STDOUT,">$output";
+
+my $code=<<___;
+#include <linux/cfi_types.h>
+
+.text
+.option arch, +zvknhb, +zvkb
+___
+
+my ($V0, $V1, $V2, $V3, $V4, $V5, $V6, $V7,
+ $V8, $V9, $V10, $V11, $V12, $V13, $V14, $V15,
+ $V16, $V17, $V18, $V19, $V20, $V21, $V22, $V23,
+ $V24, $V25, $V26, $V27, $V28, $V29, $V30, $V31,
+) = map("v$_",(0..31));
+
+my $K512 = "K512";
+
+# Function arguments
+my ($H, $INP, $LEN, $KT, $H2, $INDEX_PATTERN) = ("a0", "a1", "a2", "a3", "t3", "t4");
+
+################################################################################
+# void sha512_block_data_order_zvkb_zvknhb(void *c, const void *p, size_t len)
+$code .= <<___;
+SYM_TYPED_FUNC_START(sha512_block_data_order_zvkb_zvknhb)
+ vsetivli zero, 4, e64, m2, ta, ma
+
+ # H is stored as {a,b,c,d},{e,f,g,h}, but we need {f,e,b,a},{h,g,d,c}
+ # The dst vtype is e64m2 and the index vtype is e8mf4.
+ # We use index-load with the following index pattern at v1.
+ # i8 index:
+ # 40, 32, 8, 0
+ # Instead of loading the i8 indices separately, we use a single 32-bit
+ # little-endian value that covers all four i8 indices.
+ # i32 value:
+ # 0x 00 08 20 28
+ li $INDEX_PATTERN, 0x00082028
+ vsetivli zero, 1, e32, m1, ta, ma
+ vmv.v.x $V1, $INDEX_PATTERN
+
+ addi $H2, $H, 16
+
+ # Use index-load to get {f,e,b,a},{h,g,d,c}
+ vsetivli zero, 4, e64, m2, ta, ma
+ vluxei8.v $V22, ($H), $V1
+ vluxei8.v $V24, ($H2), $V1
+
+ # Set up the v0 mask for the vmerge that replaces the first word (idx==0) in key scheduling.
+ # The AVL is 4 in SHA-512, so a single e8 element (which provides 8 mask bits) is enough.
+ vsetivli zero, 1, e8, m1, ta, ma
+ vmv.v.i $V0, 0x01
+
+ vsetivli zero, 4, e64, m2, ta, ma
+
+L_round_loop:
+ # Load round constants K512
+ la $KT, $K512
+
+ # Decrement length by 1
+ addi $LEN, $LEN, -1
+
+ # Keep the current state as we need it later: H' = H+{a',b',c',...,h'}.
+ vmv.v.v $V26, $V22
+ vmv.v.v $V28, $V24
+
+ # Load the 1024 bits of the message block into v10-v16 and perform the
+ # endian swap.
+ vle64.v $V10, ($INP)
+ vrev8.v $V10, $V10
+ addi $INP, $INP, 32
+ vle64.v $V12, ($INP)
+ vrev8.v $V12, $V12
+ addi $INP, $INP, 32
+ vle64.v $V14, ($INP)
+ vrev8.v $V14, $V14
+ addi $INP, $INP, 32
+ vle64.v $V16, ($INP)
+ vrev8.v $V16, $V16
+ addi $INP, $INP, 32
+
+ .rept 4
+ # Quad-round 0 (+0, v10->v12->v14->v16)
+ vle64.v $V20, ($KT)
+ addi $KT, $KT, 32
+ vadd.vv $V18, $V20, $V10
+ vsha2cl.vv $V24, $V22, $V18
+ vsha2ch.vv $V22, $V24, $V18
+ vmerge.vvm $V18, $V14, $V12, $V0
+ vsha2ms.vv $V10, $V18, $V16
+
+ # Quad-round 1 (+1, v12->v14->v16->v10)
+ vle64.v $V20, ($KT)
+ addi $KT, $KT, 32
+ vadd.vv $V18, $V20, $V12
+ vsha2cl.vv $V24, $V22, $V18
+ vsha2ch.vv $V22, $V24, $V18
+ vmerge.vvm $V18, $V16, $V14, $V0
+ vsha2ms.vv $V12, $V18, $V10
+
+ # Quad-round 2 (+2, v14->v16->v10->v12)
+ vle64.v $V20, ($KT)
+ addi $KT, $KT, 32
+ vadd.vv $V18, $V20, $V14
+ vsha2cl.vv $V24, $V22, $V18
+ vsha2ch.vv $V22, $V24, $V18
+ vmerge.vvm $V18, $V10, $V16, $V0
+ vsha2ms.vv $V14, $V18, $V12
+
+ # Quad-round 3 (+3, v16->v10->v12->v14)
+ vle64.v $V20, ($KT)
+ addi $KT, $KT, 32
+ vadd.vv $V18, $V20, $V16
+ vsha2cl.vv $V24, $V22, $V18
+ vsha2ch.vv $V22, $V24, $V18
+ vmerge.vvm $V18, $V12, $V10, $V0
+ vsha2ms.vv $V16, $V18, $V14
+ .endr
+
+ # Quad-round 16 (+0, v10->v12->v14->v16)
+ # Note that we stop generating new message schedule words (Wt, v10-16)
+ # as we already generated all the words we end up consuming (i.e., W[79:76]).
+ vle64.v $V20, ($KT)
+ addi $KT, $KT, 32
+ vadd.vv $V18, $V20, $V10
+ vsha2cl.vv $V24, $V22, $V18
+ vsha2ch.vv $V22, $V24, $V18
+
+ # Quad-round 17 (+1, v12->v14->v16->v10)
+ vle64.v $V20, ($KT)
+ addi $KT, $KT, 32
+ vadd.vv $V18, $V20, $V12
+ vsha2cl.vv $V24, $V22, $V18
+ vsha2ch.vv $V22, $V24, $V18
+
+ # Quad-round 18 (+2, v14->v16->v10->v12)
+ vle64.v $V20, ($KT)
+ addi $KT, $KT, 32
+ vadd.vv $V18, $V20, $V14
+ vsha2cl.vv $V24, $V22, $V18
+ vsha2ch.vv $V22, $V24, $V18
+
+ # Quad-round 19 (+3, v16->v10->v12->v14)
+ vle64.v $V20, ($KT)
+ # No $KT increment needed for the last round constants.
+ vadd.vv $V18, $V20, $V16
+ vsha2cl.vv $V24, $V22, $V18
+ vsha2ch.vv $V22, $V24, $V18
+
+ # H' = H+{a',b',c',...,h'}
+ vadd.vv $V22, $V26, $V22
+ vadd.vv $V24, $V28, $V24
+ bnez $LEN, L_round_loop
+
+ # Store {f,e,b,a},{h,g,d,c} back to {a,b,c,d},{e,f,g,h}.
+ vsuxei8.v $V22, ($H), $V1
+ vsuxei8.v $V24, ($H2), $V1
+
+ ret
+SYM_FUNC_END(sha512_block_data_order_zvkb_zvknhb)
+
+.p2align 3
+.type $K512,\@object
+$K512:
+ .dword 0x428a2f98d728ae22, 0x7137449123ef65cd
+ .dword 0xb5c0fbcfec4d3b2f, 0xe9b5dba58189dbbc
+ .dword 0x3956c25bf348b538, 0x59f111f1b605d019
+ .dword 0x923f82a4af194f9b, 0xab1c5ed5da6d8118
+ .dword 0xd807aa98a3030242, 0x12835b0145706fbe
+ .dword 0x243185be4ee4b28c, 0x550c7dc3d5ffb4e2
+ .dword 0x72be5d74f27b896f, 0x80deb1fe3b1696b1
+ .dword 0x9bdc06a725c71235, 0xc19bf174cf692694
+ .dword 0xe49b69c19ef14ad2, 0xefbe4786384f25e3
+ .dword 0x0fc19dc68b8cd5b5, 0x240ca1cc77ac9c65
+ .dword 0x2de92c6f592b0275, 0x4a7484aa6ea6e483
+ .dword 0x5cb0a9dcbd41fbd4, 0x76f988da831153b5
+ .dword 0x983e5152ee66dfab, 0xa831c66d2db43210
+ .dword 0xb00327c898fb213f, 0xbf597fc7beef0ee4
+ .dword 0xc6e00bf33da88fc2, 0xd5a79147930aa725
+ .dword 0x06ca6351e003826f, 0x142929670a0e6e70
+ .dword 0x27b70a8546d22ffc, 0x2e1b21385c26c926
+ .dword 0x4d2c6dfc5ac42aed, 0x53380d139d95b3df
+ .dword 0x650a73548baf63de, 0x766a0abb3c77b2a8
+ .dword 0x81c2c92e47edaee6, 0x92722c851482353b
+ .dword 0xa2bfe8a14cf10364, 0xa81a664bbc423001
+ .dword 0xc24b8b70d0f89791, 0xc76c51a30654be30
+ .dword 0xd192e819d6ef5218, 0xd69906245565a910
+ .dword 0xf40e35855771202a, 0x106aa07032bbd1b8
+ .dword 0x19a4c116b8d2d0c8, 0x1e376c085141ab53
+ .dword 0x2748774cdf8eeb99, 0x34b0bcb5e19b48a8
+ .dword 0x391c0cb3c5c95a63, 0x4ed8aa4ae3418acb
+ .dword 0x5b9cca4f7763e373, 0x682e6ff3d6b2b8a3
+ .dword 0x748f82ee5defb2fc, 0x78a5636f43172f60
+ .dword 0x84c87814a1f0ab72, 0x8cc702081a6439ec
+ .dword 0x90befffa23631e28, 0xa4506cebde82bde9
+ .dword 0xbef9a3f7b2c67915, 0xc67178f2e372532b
+ .dword 0xca273eceea26619c, 0xd186b8c721c0c207
+ .dword 0xeada7dd6cde0eb1e, 0xf57d4f7fee6ed178
+ .dword 0x06f067aa72176fba, 0x0a637dc5a2c898a6
+ .dword 0x113f9804bef90dae, 0x1b710b35131c471b
+ .dword 0x28db77f523047d84, 0x32caab7b40c72493
+ .dword 0x3c9ebe0a15c9bebc, 0x431d67c49c100d4c
+ .dword 0x4cc5d4becb3e42b6, 0x597f299cfc657e2a
+ .dword 0x5fcb6fab3ad6faec, 0x6c44198c4a475817
+.size $K512,.-$K512
+___
+
+print $code;
+
+close STDOUT or die "error closing STDOUT: $!";
--
2.28.0
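As an aside for readers of the perlasm above: the following user-space C
sketch (illustrative only, not part of the patch series) shows how the packed
index value 0x00082028 passed to vluxei8.v maps to the byte offsets 40, 32, 8
and 0, i.e. to the state words {f, e, b, a}.

	#include <stdint.h>
	#include <stdio.h>

	int main(void)
	{
		/* state[0..7] holds a..h; each word is 8 bytes in the real layout. */
		const char *state_names = "abcdefgh";
		const uint32_t pattern = 0x00082028;

		/* vluxei8.v treats each i8 index element as a byte offset from the base. */
		for (int elem = 0; elem < 4; elem++) {
			unsigned int byte_offset = (pattern >> (8 * elem)) & 0xff;

			printf("element %d loads byte offset %2u -> '%c'\n",
			       elem, byte_offset, state_names[byte_offset / 8]);
		}

		return 0;
	}

Running it prints f, e, b, a, matching the comment in the assembly.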
Add an SM4 implementation using the Zvksed vector crypto extension from
OpenSSL (openssl/openssl#21923).
The perlasm here differs from the original OpenSSL implementation: OpenSSL
has separate set_encrypt_key and set_decrypt_key functions for SM4, while
in the kernel these are merged into a single set_key function so that the
key expansion only has to run once.
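The merge works because the SM4 decryption schedule is simply the encryption
schedule in reverse order, so one key expansion can fill both. A minimal C
sketch of that relationship (sm4_derive_dec_keys() is an illustrative helper,
not a kernel or OpenSSL API):

	#include <stdint.h>

	#define SM4_NR_ROUND_KEYS 32

	/* SM4 decryption uses the encryption round keys in reverse order. */
	static void sm4_derive_dec_keys(const uint32_t enc_key[SM4_NR_ROUND_KEYS],
					uint32_t dec_key[SM4_NR_ROUND_KEYS])
	{
		for (int i = 0; i < SM4_NR_ROUND_KEYS; i++)
			dec_key[i] = enc_key[SM4_NR_ROUND_KEYS - 1 - i];
	}

The assembly below achieves the same effect with negative-stride stores
(vsse32.v) instead of a separate loop.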
Co-developed-by: Christoph Müllner <[email protected]>
Signed-off-by: Christoph Müllner <[email protected]>
Co-developed-by: Heiko Stuebner <[email protected]>
Signed-off-by: Heiko Stuebner <[email protected]>
Signed-off-by: Jerry Shih <[email protected]>
---
Changelog v4:
- Use asm mnemonics for the instructions in vector crypto 1.0 extension.
Changelog v3:
- Use asm mnemonics for the instructions in RVV 1.0 extension.
Changelog v2:
- Do not turn on kconfig `SM4_RISCV64` option by default.
- Add the missed `static` declaration for riscv64_sm4_zvksed_alg.
- Add `asmlinkage` qualifier for crypto asm function.
- Rename sm4-riscv64-zvkb-zvksed to sm4-riscv64-zvksed-zvkb.
- Reorder structure riscv64_sm4_zvksed_zvkb_alg members initialization
in the order declared.
---
arch/riscv/crypto/Kconfig | 17 ++
arch/riscv/crypto/Makefile | 7 +
arch/riscv/crypto/sm4-riscv64-glue.c | 121 +++++++++++
arch/riscv/crypto/sm4-riscv64-zvksed.pl | 268 ++++++++++++++++++++++++
4 files changed, 413 insertions(+)
create mode 100644 arch/riscv/crypto/sm4-riscv64-glue.c
create mode 100644 arch/riscv/crypto/sm4-riscv64-zvksed.pl
diff --git a/arch/riscv/crypto/Kconfig b/arch/riscv/crypto/Kconfig
index 1604782c0eed..cdf7fead0636 100644
--- a/arch/riscv/crypto/Kconfig
+++ b/arch/riscv/crypto/Kconfig
@@ -66,4 +66,21 @@ config CRYPTO_SHA512_RISCV64
- Zvknhb vector crypto extension
- Zvkb vector crypto extension
+config CRYPTO_SM4_RISCV64
+ tristate "Ciphers: SM4 (ShangMi 4)"
+ depends on 64BIT && RISCV_ISA_V && TOOLCHAIN_HAS_VECTOR_CRYPTO
+ select CRYPTO_ALGAPI
+ select CRYPTO_SM4
+ help
+ SM4 cipher algorithms (OSCCA GB/T 32907-2016,
+ ISO/IEC 18033-3:2010/Amd 1:2021)
+
+ SM4 (GBT.32907-2016) is a cryptographic standard issued by the
+ Organization of State Commercial Administration of China (OSCCA)
+ as an authorized cryptographic algorithm for use within China.
+
+ Architecture: riscv64 using:
+ - Zvksed vector crypto extension
+ - Zvkb vector crypto extension
+
endmenu
diff --git a/arch/riscv/crypto/Makefile b/arch/riscv/crypto/Makefile
index 8aabef950ad3..8e34861bba34 100644
--- a/arch/riscv/crypto/Makefile
+++ b/arch/riscv/crypto/Makefile
@@ -18,6 +18,9 @@ sha256-riscv64-y := sha256-riscv64-glue.o sha256-riscv64-zvknha_or_zvknhb-zvkb.o
obj-$(CONFIG_CRYPTO_SHA512_RISCV64) += sha512-riscv64.o
sha512-riscv64-y := sha512-riscv64-glue.o sha512-riscv64-zvknhb-zvkb.o
+obj-$(CONFIG_CRYPTO_SM4_RISCV64) += sm4-riscv64.o
+sm4-riscv64-y := sm4-riscv64-glue.o sm4-riscv64-zvksed.o
+
quiet_cmd_perlasm = PERLASM $@
cmd_perlasm = $(PERL) $(<) void $(@)
@@ -39,9 +42,13 @@ $(obj)/sha256-riscv64-zvknha_or_zvknhb-zvkb.S: $(src)/sha256-riscv64-zvknha_or_z
$(obj)/sha512-riscv64-zvknhb-zvkb.S: $(src)/sha512-riscv64-zvknhb-zvkb.pl
$(call cmd,perlasm)
+$(obj)/sm4-riscv64-zvksed.S: $(src)/sm4-riscv64-zvksed.pl
+ $(call cmd,perlasm)
+
clean-files += aes-riscv64-zvkned.S
clean-files += aes-riscv64-zvkned-zvbb-zvkg.S
clean-files += aes-riscv64-zvkned-zvkb.S
clean-files += ghash-riscv64-zvkg.S
clean-files += sha256-riscv64-zvknha_or_zvknhb-zvkb.S
clean-files += sha512-riscv64-zvknhb-zvkb.S
+clean-files += sm4-riscv64-zvksed.S
diff --git a/arch/riscv/crypto/sm4-riscv64-glue.c b/arch/riscv/crypto/sm4-riscv64-glue.c
new file mode 100644
index 000000000000..9d9d24b67ee3
--- /dev/null
+++ b/arch/riscv/crypto/sm4-riscv64-glue.c
@@ -0,0 +1,121 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Linux/riscv64 port of the OpenSSL SM4 implementation for RISC-V 64
+ *
+ * Copyright (C) 2023 VRULL GmbH
+ * Author: Heiko Stuebner <[email protected]>
+ *
+ * Copyright (C) 2023 SiFive, Inc.
+ * Author: Jerry Shih <[email protected]>
+ */
+
+#include <asm/simd.h>
+#include <asm/vector.h>
+#include <crypto/sm4.h>
+#include <crypto/internal/cipher.h>
+#include <crypto/internal/simd.h>
+#include <linux/crypto.h>
+#include <linux/delay.h>
+#include <linux/err.h>
+#include <linux/linkage.h>
+#include <linux/module.h>
+#include <linux/types.h>
+
+/* sm4 using zvksed vector crypto extension */
+asmlinkage void rv64i_zvksed_sm4_encrypt(const u8 *in, u8 *out, const u32 *key);
+asmlinkage void rv64i_zvksed_sm4_decrypt(const u8 *in, u8 *out, const u32 *key);
+asmlinkage int rv64i_zvksed_sm4_set_key(const u8 *user_key,
+ unsigned int key_len, u32 *enc_key,
+ u32 *dec_key);
+
+static int riscv64_sm4_setkey_zvksed(struct crypto_tfm *tfm, const u8 *key,
+ unsigned int key_len)
+{
+ struct sm4_ctx *ctx = crypto_tfm_ctx(tfm);
+ int ret = 0;
+
+ if (crypto_simd_usable()) {
+ kernel_vector_begin();
+ if (rv64i_zvksed_sm4_set_key(key, key_len, ctx->rkey_enc,
+ ctx->rkey_dec))
+ ret = -EINVAL;
+ kernel_vector_end();
+ } else {
+ ret = sm4_expandkey(ctx, key, key_len);
+ }
+
+ return ret;
+}
+
+static void riscv64_sm4_encrypt_zvksed(struct crypto_tfm *tfm, u8 *dst,
+ const u8 *src)
+{
+ const struct sm4_ctx *ctx = crypto_tfm_ctx(tfm);
+
+ if (crypto_simd_usable()) {
+ kernel_vector_begin();
+ rv64i_zvksed_sm4_encrypt(src, dst, ctx->rkey_enc);
+ kernel_vector_end();
+ } else {
+ sm4_crypt_block(ctx->rkey_enc, dst, src);
+ }
+}
+
+static void riscv64_sm4_decrypt_zvksed(struct crypto_tfm *tfm, u8 *dst,
+ const u8 *src)
+{
+ const struct sm4_ctx *ctx = crypto_tfm_ctx(tfm);
+
+ if (crypto_simd_usable()) {
+ kernel_vector_begin();
+ rv64i_zvksed_sm4_decrypt(src, dst, ctx->rkey_dec);
+ kernel_vector_end();
+ } else {
+ sm4_crypt_block(ctx->rkey_dec, dst, src);
+ }
+}
+
+static struct crypto_alg riscv64_sm4_zvksed_zvkb_alg = {
+ .cra_flags = CRYPTO_ALG_TYPE_CIPHER,
+ .cra_blocksize = SM4_BLOCK_SIZE,
+ .cra_ctxsize = sizeof(struct sm4_ctx),
+ .cra_priority = 300,
+ .cra_name = "sm4",
+ .cra_driver_name = "sm4-riscv64-zvksed-zvkb",
+ .cra_cipher = {
+ .cia_min_keysize = SM4_KEY_SIZE,
+ .cia_max_keysize = SM4_KEY_SIZE,
+ .cia_setkey = riscv64_sm4_setkey_zvksed,
+ .cia_encrypt = riscv64_sm4_encrypt_zvksed,
+ .cia_decrypt = riscv64_sm4_decrypt_zvksed,
+ },
+ .cra_module = THIS_MODULE,
+};
+
+static inline bool check_sm4_ext(void)
+{
+ return riscv_isa_extension_available(NULL, ZVKSED) &&
+ riscv_isa_extension_available(NULL, ZVKB) &&
+ riscv_vector_vlen() >= 128;
+}
+
+static int __init riscv64_sm4_mod_init(void)
+{
+ if (check_sm4_ext())
+ return crypto_register_alg(&riscv64_sm4_zvksed_zvkb_alg);
+
+ return -ENODEV;
+}
+
+static void __exit riscv64_sm4_mod_fini(void)
+{
+ crypto_unregister_alg(&riscv64_sm4_zvksed_zvkb_alg);
+}
+
+module_init(riscv64_sm4_mod_init);
+module_exit(riscv64_sm4_mod_fini);
+
+MODULE_DESCRIPTION("SM4 (RISC-V accelerated)");
+MODULE_AUTHOR("Heiko Stuebner <[email protected]>");
+MODULE_LICENSE("GPL");
+MODULE_ALIAS_CRYPTO("sm4");
diff --git a/arch/riscv/crypto/sm4-riscv64-zvksed.pl b/arch/riscv/crypto/sm4-riscv64-zvksed.pl
new file mode 100644
index 000000000000..1873160aac2f
--- /dev/null
+++ b/arch/riscv/crypto/sm4-riscv64-zvksed.pl
@@ -0,0 +1,268 @@
+#! /usr/bin/env perl
+# SPDX-License-Identifier: Apache-2.0 OR BSD-2-Clause
+#
+# This file is dual-licensed, meaning that you can use it under your
+# choice of either of the following two licenses:
+#
+# Copyright 2023 The OpenSSL Project Authors. All Rights Reserved.
+#
+# Licensed under the Apache License 2.0 (the "License"). You can obtain
+# a copy in the file LICENSE in the source distribution or at
+# https://www.openssl.org/source/license.html
+#
+# or
+#
+# Copyright (c) 2023, Christoph Müllner <[email protected]>
+# Copyright (c) 2023, Jerry Shih <[email protected]>
+# All rights reserved.
+#
+# Redistribution and use in source and binary forms, with or without
+# modification, are permitted provided that the following conditions
+# are met:
+# 1. Redistributions of source code must retain the above copyright
+# notice, this list of conditions and the following disclaimer.
+# 2. Redistributions in binary form must reproduce the above copyright
+# notice, this list of conditions and the following disclaimer in the
+# documentation and/or other materials provided with the distribution.
+#
+# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+# The generated code of this file depends on the following RISC-V extensions:
+# - RV64I
+# - RISC-V Vector ('V') with VLEN >= 128
+# - RISC-V Vector SM4 Block Cipher extension ('Zvksed')
+# - RISC-V Vector Cryptography Bit-manipulation extension ('Zvkb')
+
+use strict;
+use warnings;
+
+use FindBin qw($Bin);
+use lib "$Bin";
+use lib "$Bin/../../perlasm";
+
+# $output is the last argument if it looks like a file (it has an extension)
+# $flavour is the first argument if it doesn't look like a file
+my $output = $#ARGV >= 0 && $ARGV[$#ARGV] =~ m|\.\w+$| ? pop : undef;
+my $flavour = $#ARGV >= 0 && $ARGV[0] !~ m|\.| ? shift : undef;
+
+$output and open STDOUT,">$output";
+
+my $code=<<___;
+.text
+.option arch, +zvksed, +zvkb
+___
+
+####
+# int rv64i_zvksed_sm4_set_key(const u8 *user_key, unsigned int key_len,
+# u32 *enc_key, u32 *dec_key);
+#
+{
+my ($ukey,$key_len,$enc_key,$dec_key)=("a0","a1","a2","a3");
+my ($fk,$stride)=("a4","a5");
+my ($t0,$t1)=("t0","t1");
+my ($vukey,$vfk,$vk0,$vk1,$vk2,$vk3,$vk4,$vk5,$vk6,$vk7)=("v1","v2","v3","v4","v5","v6","v7","v8","v9","v10");
+$code .= <<___;
+.p2align 3
+.globl rv64i_zvksed_sm4_set_key
+.type rv64i_zvksed_sm4_set_key,\@function
+rv64i_zvksed_sm4_set_key:
+ li $t0, 16
+ beq $t0, $key_len, 1f
+ li a0, 1
+ ret
+1:
+
+ vsetivli zero, 4, e32, m1, ta, ma
+
+ # Load the user key
+ vle32.v $vukey, ($ukey)
+ vrev8.v $vukey, $vukey
+
+ # Load the FK.
+ la $fk, FK
+ vle32.v $vfk, ($fk)
+
+ # Generate round keys.
+ vxor.vv $vukey, $vukey, $vfk
+ vsm4k.vi $vk0, $vukey, 0 # rk[0:3]
+ vsm4k.vi $vk1, $vk0, 1 # rk[4:7]
+ vsm4k.vi $vk2, $vk1, 2 # rk[8:11]
+ vsm4k.vi $vk3, $vk2, 3 # rk[12:15]
+ vsm4k.vi $vk4, $vk3, 4 # rk[16:19]
+ vsm4k.vi $vk5, $vk4, 5 # rk[20:23]
+ vsm4k.vi $vk6, $vk5, 6 # rk[24:27]
+ vsm4k.vi $vk7, $vk6, 7 # rk[28:31]
+
+ # Store enc round keys
+ vse32.v $vk0, ($enc_key) # rk[0:3]
+ addi $enc_key, $enc_key, 16
+ vse32.v $vk1, ($enc_key) # rk[4:7]
+ addi $enc_key, $enc_key, 16
+ vse32.v $vk2, ($enc_key) # rk[8:11]
+ addi $enc_key, $enc_key, 16
+ vse32.v $vk3, ($enc_key) # rk[12:15]
+ addi $enc_key, $enc_key, 16
+ vse32.v $vk4, ($enc_key) # rk[16:19]
+ addi $enc_key, $enc_key, 16
+ vse32.v $vk5, ($enc_key) # rk[20:23]
+ addi $enc_key, $enc_key, 16
+ vse32.v $vk6, ($enc_key) # rk[24:27]
+ addi $enc_key, $enc_key, 16
+ vse32.v $vk7, ($enc_key) # rk[28:31]
+
+ # Store dec round keys in reverse order
+ addi $dec_key, $dec_key, 12
+ li $stride, -4
+ vsse32.v $vk7, ($dec_key), $stride # rk[31:28]
+ addi $dec_key, $dec_key, 16
+ vsse32.v $vk6, ($dec_key), $stride # rk[27:24]
+ addi $dec_key, $dec_key, 16
+ vsse32.v $vk5, ($dec_key), $stride # rk[23:20]
+ addi $dec_key, $dec_key, 16
+ vsse32.v $vk4, ($dec_key), $stride # rk[19:16]
+ addi $dec_key, $dec_key, 16
+ vsse32.v $vk3, ($dec_key), $stride # rk[15:12]
+ addi $dec_key, $dec_key, 16
+ vsse32.v $vk2, ($dec_key), $stride # rk[11:8]
+ addi $dec_key, $dec_key, 16
+ vsse32.v $vk1, ($dec_key), $stride # rk[7:4]
+ addi $dec_key, $dec_key, 16
+ vsse32.v $vk0, ($dec_key), $stride # rk[3:0]
+
+ li a0, 0
+ ret
+.size rv64i_zvksed_sm4_set_key,.-rv64i_zvksed_sm4_set_key
+___
+}
+
+####
+# void rv64i_zvksed_sm4_encrypt(const unsigned char *in, unsigned char *out,
+# const SM4_KEY *key);
+#
+{
+my ($in,$out,$keys,$stride)=("a0","a1","a2","t0");
+my ($vdata,$vk0,$vk1,$vk2,$vk3,$vk4,$vk5,$vk6,$vk7,$vgen)=("v1","v2","v3","v4","v5","v6","v7","v8","v9","v10");
+$code .= <<___;
+.p2align 3
+.globl rv64i_zvksed_sm4_encrypt
+.type rv64i_zvksed_sm4_encrypt,\@function
+rv64i_zvksed_sm4_encrypt:
+ vsetivli zero, 4, e32, m1, ta, ma
+
+ # Load input data
+ vle32.v $vdata, ($in)
+ vrev8.v $vdata, $vdata
+
+ # Order of elements was adjusted in sm4_set_key()
+ # Encrypt with all keys
+ vle32.v $vk0, ($keys) # rk[0:3]
+ vsm4r.vs $vdata, $vk0
+ addi $keys, $keys, 16
+ vle32.v $vk1, ($keys) # rk[4:7]
+ vsm4r.vs $vdata, $vk1
+ addi $keys, $keys, 16
+ vle32.v $vk2, ($keys) # rk[8:11]
+ vsm4r.vs $vdata, $vk2
+ addi $keys, $keys, 16
+ vle32.v $vk3, ($keys) # rk[12:15]
+ vsm4r.vs $vdata, $vk3
+ addi $keys, $keys, 16
+ vle32.v $vk4, ($keys) # rk[16:19]
+ vsm4r.vs $vdata, $vk4
+ addi $keys, $keys, 16
+ vle32.v $vk5, ($keys) # rk[20:23]
+ vsm4r.vs $vdata, $vk5
+ addi $keys, $keys, 16
+ vle32.v $vk6, ($keys) # rk[24:27]
+ vsm4r.vs $vdata, $vk6
+ addi $keys, $keys, 16
+ vle32.v $vk7, ($keys) # rk[28:31]
+ vsm4r.vs $vdata, $vk7
+
+ # Save the ciphertext (in reverse element order)
+ vrev8.v $vdata, $vdata
+ li $stride, -4
+ addi $out, $out, 12
+ vsse32.v $vdata, ($out), $stride
+
+ ret
+.size rv64i_zvksed_sm4_encrypt,.-rv64i_zvksed_sm4_encrypt
+___
+}
+
+####
+# void rv64i_zvksed_sm4_decrypt(const unsigned char *in, unsigned char *out,
+# const SM4_KEY *key);
+#
+{
+my ($in,$out,$keys,$stride)=("a0","a1","a2","t0");
+my ($vdata,$vk0,$vk1,$vk2,$vk3,$vk4,$vk5,$vk6,$vk7,$vgen)=("v1","v2","v3","v4","v5","v6","v7","v8","v9","v10");
+$code .= <<___;
+.p2align 3
+.globl rv64i_zvksed_sm4_decrypt
+.type rv64i_zvksed_sm4_decrypt,\@function
+rv64i_zvksed_sm4_decrypt:
+ vsetivli zero, 4, e32, m1, ta, ma
+
+ # Load input data
+ vle32.v $vdata, ($in)
+ vrev8.v $vdata, $vdata
+
+ # Order of key elements was adjusted in sm4_set_key()
+ # Decrypt with all keys
+ vle32.v $vk7, ($keys) # rk[31:28]
+ vsm4r.vs $vdata, $vk7
+ addi $keys, $keys, 16
+ vle32.v $vk6, ($keys) # rk[27:24]
+ vsm4r.vs $vdata, $vk6
+ addi $keys, $keys, 16
+ vle32.v $vk5, ($keys) # rk[23:20]
+ vsm4r.vs $vdata, $vk5
+ addi $keys, $keys, 16
+ vle32.v $vk4, ($keys) # rk[19:16]
+ vsm4r.vs $vdata, $vk4
+ addi $keys, $keys, 16
+ vle32.v $vk3, ($keys) # rk[15:12]
+ vsm4r.vs $vdata, $vk3
+ addi $keys, $keys, 16
+ vle32.v $vk2, ($keys) # rk[11:8]
+ vsm4r.vs $vdata, $vk2
+ addi $keys, $keys, 16
+ vle32.v $vk1, ($keys) # rk[7:4]
+ vsm4r.vs $vdata, $vk1
+ addi $keys, $keys, 16
+ vle32.v $vk0, ($keys) # rk[3:0]
+ vsm4r.vs $vdata, $vk0
+
+ # Save the plaintext (in reverse element order)
+ vrev8.v $vdata, $vdata
+ li $stride, -4
+ addi $out, $out, 12
+ vsse32.v $vdata, ($out), $stride
+
+ ret
+.size rv64i_zvksed_sm4_decrypt,.-rv64i_zvksed_sm4_decrypt
+___
+}
+
+$code .= <<___;
+# Family Key (little-endian 32-bit chunks)
+.p2align 3
+FK:
+ .word 0xA3B1BAC6, 0x56AA3350, 0x677D9197, 0xB27022DC
+.size FK,.-FK
+___
+
+print $code;
+
+close STDOUT or die "error closing STDOUT: $!";
--
2.28.0
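To make the reversed key stores in rv64i_zvksed_sm4_set_key() above easier to
follow, here is a small C model (illustrative only; strided_store_rev4() is a
made-up helper) of a single vsse32.v with four elements, a byte stride of -4
and a base address of dec_key + 12:

	#include <stdint.h>
	#include <string.h>

	/* Element i of a 4-element vsse32.v lands at base + i * stride (here -4). */
	static void strided_store_rev4(uint32_t *dec_key, const uint32_t vk[4])
	{
		uint8_t *base = (uint8_t *)dec_key + 12;

		for (int i = 0; i < 4; i++)
			memcpy(base - 4 * i, &vk[i], sizeof(vk[i]));
	}

Called with vk = rk[28..31], this writes rk[31] into dec_key[0] down to rk[28]
into dec_key[3]; advancing dec_key by 16 bytes between the eight stores then
yields the fully reversed schedule.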
Add an SM3 implementation using the Zvksh vector crypto extension from
OpenSSL (openssl/openssl#21923).
Co-developed-by: Christoph Müllner <[email protected]>
Signed-off-by: Christoph Müllner <[email protected]>
Co-developed-by: Heiko Stuebner <[email protected]>
Signed-off-by: Heiko Stuebner <[email protected]>
Signed-off-by: Jerry Shih <[email protected]>
---
Changelog v4:
- Use asm mnemonics for the instructions in vector crypto 1.0 extension.
Changelog v3:
- Use `SYM_TYPED_FUNC_START` for sm3 indirect-call asm symbol.
- Use asm mnemonics for the instructions in RVV 1.0 extension.
Changelog v2:
- Do not turn on kconfig `SM3_RISCV64` option by default.
- Add `asmlinkage` qualifier for crypto asm function.
- Rename sm3-riscv64-zvkb-zvksh to sm3-riscv64-zvksh-zvkb.
- Reorder structure sm3_alg members initialization in the order declared.
---
arch/riscv/crypto/Kconfig | 12 ++
arch/riscv/crypto/Makefile | 7 +
arch/riscv/crypto/sm3-riscv64-glue.c | 124 ++++++++++++++
arch/riscv/crypto/sm3-riscv64-zvksh.pl | 227 +++++++++++++++++++++++++
4 files changed, 370 insertions(+)
create mode 100644 arch/riscv/crypto/sm3-riscv64-glue.c
create mode 100644 arch/riscv/crypto/sm3-riscv64-zvksh.pl
diff --git a/arch/riscv/crypto/Kconfig b/arch/riscv/crypto/Kconfig
index cdf7fead0636..81dcae72c477 100644
--- a/arch/riscv/crypto/Kconfig
+++ b/arch/riscv/crypto/Kconfig
@@ -66,6 +66,18 @@ config CRYPTO_SHA512_RISCV64
- Zvknhb vector crypto extension
- Zvkb vector crypto extension
+config CRYPTO_SM3_RISCV64
+ tristate "Hash functions: SM3 (ShangMi 3)"
+ depends on 64BIT && RISCV_ISA_V && TOOLCHAIN_HAS_VECTOR_CRYPTO
+ select CRYPTO_HASH
+ select CRYPTO_SM3
+ help
+ SM3 (ShangMi 3) secure hash function (OSCCA GM/T 0004-2012)
+
+ Architecture: riscv64 using:
+ - Zvksh vector crypto extension
+ - Zvkb vector crypto extension
+
config CRYPTO_SM4_RISCV64
tristate "Ciphers: SM4 (ShangMi 4)"
depends on 64BIT && RISCV_ISA_V && TOOLCHAIN_HAS_VECTOR_CRYPTO
diff --git a/arch/riscv/crypto/Makefile b/arch/riscv/crypto/Makefile
index 8e34861bba34..b1f857695c1c 100644
--- a/arch/riscv/crypto/Makefile
+++ b/arch/riscv/crypto/Makefile
@@ -18,6 +18,9 @@ sha256-riscv64-y := sha256-riscv64-glue.o sha256-riscv64-zvknha_or_zvknhb-zvkb.o
obj-$(CONFIG_CRYPTO_SHA512_RISCV64) += sha512-riscv64.o
sha512-riscv64-y := sha512-riscv64-glue.o sha512-riscv64-zvknhb-zvkb.o
+obj-$(CONFIG_CRYPTO_SM3_RISCV64) += sm3-riscv64.o
+sm3-riscv64-y := sm3-riscv64-glue.o sm3-riscv64-zvksh.o
+
obj-$(CONFIG_CRYPTO_SM4_RISCV64) += sm4-riscv64.o
sm4-riscv64-y := sm4-riscv64-glue.o sm4-riscv64-zvksed.o
@@ -42,6 +45,9 @@ $(obj)/sha256-riscv64-zvknha_or_zvknhb-zvkb.S: $(src)/sha256-riscv64-zvknha_or_z
$(obj)/sha512-riscv64-zvknhb-zvkb.S: $(src)/sha512-riscv64-zvknhb-zvkb.pl
$(call cmd,perlasm)
+$(obj)/sm3-riscv64-zvksh.S: $(src)/sm3-riscv64-zvksh.pl
+ $(call cmd,perlasm)
+
$(obj)/sm4-riscv64-zvksed.S: $(src)/sm4-riscv64-zvksed.pl
$(call cmd,perlasm)
@@ -51,4 +57,5 @@ clean-files += aes-riscv64-zvkned-zvkb.S
clean-files += ghash-riscv64-zvkg.S
clean-files += sha256-riscv64-zvknha_or_zvknhb-zvkb.S
clean-files += sha512-riscv64-zvknhb-zvkb.S
+clean-files += sm3-riscv64-zvksh.S
clean-files += sm4-riscv64-zvksed.S
diff --git a/arch/riscv/crypto/sm3-riscv64-glue.c b/arch/riscv/crypto/sm3-riscv64-glue.c
new file mode 100644
index 000000000000..0e5a2b84c930
--- /dev/null
+++ b/arch/riscv/crypto/sm3-riscv64-glue.c
@@ -0,0 +1,124 @@
+// SPDX-License-Identifier: GPL-2.0-or-later
+/*
+ * Linux/riscv64 port of the OpenSSL SM3 implementation for RISC-V 64
+ *
+ * Copyright (C) 2023 VRULL GmbH
+ * Author: Heiko Stuebner <[email protected]>
+ *
+ * Copyright (C) 2023 SiFive, Inc.
+ * Author: Jerry Shih <[email protected]>
+ */
+
+#include <asm/simd.h>
+#include <asm/vector.h>
+#include <linux/linkage.h>
+#include <linux/module.h>
+#include <linux/types.h>
+#include <crypto/internal/hash.h>
+#include <crypto/internal/simd.h>
+#include <crypto/sm3_base.h>
+
+/*
+ * sm3 using zvksh vector crypto extension
+ *
+ * This asm function takes only the first 256 bits at the pointer to
+ * `struct sm3_state` as the SM3 state.
+ */
+asmlinkage void ossl_hwsm3_block_data_order_zvksh(struct sm3_state *digest,
+ u8 const *o, int num);
+
+static int riscv64_sm3_update(struct shash_desc *desc, const u8 *data,
+ unsigned int len)
+{
+ int ret = 0;
+
+ /*
+ * Make sure struct sm3_state begins directly with the SM3 256-bit internal
+ * state, as this is what the asm function expects.
+ */
+ BUILD_BUG_ON(offsetof(struct sm3_state, state) != 0);
+
+ if (crypto_simd_usable()) {
+ kernel_vector_begin();
+ ret = sm3_base_do_update(desc, data, len,
+ ossl_hwsm3_block_data_order_zvksh);
+ kernel_vector_end();
+ } else {
+ sm3_update(shash_desc_ctx(desc), data, len);
+ }
+
+ return ret;
+}
+
+static int riscv64_sm3_finup(struct shash_desc *desc, const u8 *data,
+ unsigned int len, u8 *out)
+{
+ struct sm3_state *ctx;
+
+ if (crypto_simd_usable()) {
+ kernel_vector_begin();
+ if (len)
+ sm3_base_do_update(desc, data, len,
+ ossl_hwsm3_block_data_order_zvksh);
+ sm3_base_do_finalize(desc, ossl_hwsm3_block_data_order_zvksh);
+ kernel_vector_end();
+
+ return sm3_base_finish(desc, out);
+ }
+
+ ctx = shash_desc_ctx(desc);
+ if (len)
+ sm3_update(ctx, data, len);
+ sm3_final(ctx, out);
+
+ return 0;
+}
+
+static int riscv64_sm3_final(struct shash_desc *desc, u8 *out)
+{
+ return riscv64_sm3_finup(desc, NULL, 0, out);
+}
+
+static struct shash_alg sm3_alg = {
+ .init = sm3_base_init,
+ .update = riscv64_sm3_update,
+ .final = riscv64_sm3_final,
+ .finup = riscv64_sm3_finup,
+ .descsize = sizeof(struct sm3_state),
+ .digestsize = SM3_DIGEST_SIZE,
+ .base = {
+ .cra_blocksize = SM3_BLOCK_SIZE,
+ .cra_priority = 150,
+ .cra_name = "sm3",
+ .cra_driver_name = "sm3-riscv64-zvksh-zvkb",
+ .cra_module = THIS_MODULE,
+ },
+};
+
+static inline bool check_sm3_ext(void)
+{
+ return riscv_isa_extension_available(NULL, ZVKSH) &&
+ riscv_isa_extension_available(NULL, ZVKB) &&
+ riscv_vector_vlen() >= 128;
+}
+
+static int __init riscv64_sm3_mod_init(void)
+{
+ if (check_sm3_ext())
+ return crypto_register_shash(&sm3_alg);
+
+ return -ENODEV;
+}
+
+static void __exit riscv64_sm3_mod_fini(void)
+{
+ crypto_unregister_shash(&sm3_alg);
+}
+
+module_init(riscv64_sm3_mod_init);
+module_exit(riscv64_sm3_mod_fini);
+
+MODULE_DESCRIPTION("SM3 (RISC-V accelerated)");
+MODULE_AUTHOR("Heiko Stuebner <[email protected]>");
+MODULE_LICENSE("GPL");
+MODULE_ALIAS_CRYPTO("sm3");
diff --git a/arch/riscv/crypto/sm3-riscv64-zvksh.pl b/arch/riscv/crypto/sm3-riscv64-zvksh.pl
new file mode 100644
index 000000000000..c94c99111a71
--- /dev/null
+++ b/arch/riscv/crypto/sm3-riscv64-zvksh.pl
@@ -0,0 +1,227 @@
+#! /usr/bin/env perl
+# SPDX-License-Identifier: Apache-2.0 OR BSD-2-Clause
+#
+# This file is dual-licensed, meaning that you can use it under your
+# choice of either of the following two licenses:
+#
+# Copyright 2023 The OpenSSL Project Authors. All Rights Reserved.
+#
+# Licensed under the Apache License 2.0 (the "License"). You can obtain
+# a copy in the file LICENSE in the source distribution or at
+# https://www.openssl.org/source/license.html
+#
+# or
+#
+# Copyright (c) 2023, Christoph Müllner <[email protected]>
+# Copyright (c) 2023, Jerry Shih <[email protected]>
+# All rights reserved.
+#
+# Redistribution and use in source and binary forms, with or without
+# modification, are permitted provided that the following conditions
+# are met:
+# 1. Redistributions of source code must retain the above copyright
+# notice, this list of conditions and the following disclaimer.
+# 2. Redistributions in binary form must reproduce the above copyright
+# notice, this list of conditions and the following disclaimer in the
+# documentation and/or other materials provided with the distribution.
+#
+# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+# The generated code of this file depends on the following RISC-V extensions:
+# - RV64I
+# - RISC-V Vector ('V') with VLEN >= 128
+# - RISC-V Vector SM3 Secure Hash extension ('Zvksh')
+# - RISC-V Vector Cryptography Bit-manipulation extension ('Zvkb')
+
+use strict;
+use warnings;
+
+use FindBin qw($Bin);
+use lib "$Bin";
+use lib "$Bin/../../perlasm";
+
+# $output is the last argument if it looks like a file (it has an extension)
+# $flavour is the first argument if it doesn't look like a file
+my $output = $#ARGV >= 0 && $ARGV[$#ARGV] =~ m|\.\w+$| ? pop : undef;
+my $flavour = $#ARGV >= 0 && $ARGV[0] !~ m|\.| ? shift : undef;
+
+$output and open STDOUT,">$output";
+
+my $code=<<___;
+#include <linux/cfi_types.h>
+
+.text
+.option arch, +zvksh, +zvkb
+___
+
+################################################################################
+# ossl_hwsm3_block_data_order_zvksh(SM3_CTX *c, const void *p, size_t num);
+{
+my ($CTX, $INPUT, $NUM) = ("a0", "a1", "a2");
+my ($V0, $V1, $V2, $V3, $V4, $V5, $V6, $V7,
+ $V8, $V9, $V10, $V11, $V12, $V13, $V14, $V15,
+ $V16, $V17, $V18, $V19, $V20, $V21, $V22, $V23,
+ $V24, $V25, $V26, $V27, $V28, $V29, $V30, $V31,
+) = map("v$_",(0..31));
+
+$code .= <<___;
+SYM_TYPED_FUNC_START(ossl_hwsm3_block_data_order_zvksh)
+ vsetivli zero, 8, e32, m2, ta, ma
+
+ # Load initial state of hash context (c->A-H).
+ vle32.v $V0, ($CTX)
+ vrev8.v $V0, $V0
+
+L_sm3_loop:
+ # Copy the previous state to v2.
+ # It will be XOR'ed with the current state at the end of the round.
+ vmv.v.v $V2, $V0
+
+ # Load the 64B block in 2x32B chunks.
+ vle32.v $V6, ($INPUT) # v6 := {w7, ..., w0}
+ addi $INPUT, $INPUT, 32
+
+ vle32.v $V8, ($INPUT) # v8 := {w15, ..., w8}
+ addi $INPUT, $INPUT, 32
+
+ addi $NUM, $NUM, -1
+
+ # As vsm3c consumes only w0, w1, w4 and w5, we need to slide the input
+ # 2 elements down so that we process w2, w3, w6 and w7 instead.
+ # This is repeated for each odd round.
+ vslidedown.vi $V4, $V6, 2 # v4 := {X, X, w7, ..., w2}
+
+ vsm3c.vi $V0, $V6, 0
+ vsm3c.vi $V0, $V4, 1
+
+ # Prepare a vector with {w11, ..., w4}
+ vslidedown.vi $V4, $V4, 2 # v4 := {X, X, X, X, w7, ..., w4}
+ vslideup.vi $V4, $V8, 4 # v4 := {w11, w10, w9, w8, w7, w6, w5, w4}
+
+ vsm3c.vi $V0, $V4, 2
+ vslidedown.vi $V4, $V4, 2 # v4 := {X, X, w11, w10, w9, w8, w7, w6}
+ vsm3c.vi $V0, $V4, 3
+
+ vsm3c.vi $V0, $V8, 4
+ vslidedown.vi $V4, $V8, 2 # v4 := {X, X, w15, w14, w13, w12, w11, w10}
+ vsm3c.vi $V0, $V4, 5
+
+ vsm3me.vv $V6, $V8, $V6 # v6 := {w23, w22, w21, w20, w19, w18, w17, w16}
+
+ # Prepare a register with {w19, w18, w17, w16, w15, w14, w13, w12}
+ vslidedown.vi $V4, $V4, 2 # v4 := {X, X, X, X, w15, w14, w13, w12}
+ vslideup.vi $V4, $V6, 4 # v4 := {w19, w18, w17, w16, w15, w14, w13, w12}
+
+ vsm3c.vi $V0, $V4, 6
+ vslidedown.vi $V4, $V4, 2 # v4 := {X, X, w19, w18, w17, w16, w15, w14}
+ vsm3c.vi $V0, $V4, 7
+
+ vsm3c.vi $V0, $V6, 8
+ vslidedown.vi $V4, $V6, 2 # v4 := {X, X, w23, w22, w21, w20, w19, w18}
+ vsm3c.vi $V0, $V4, 9
+
+ vsm3me.vv $V8, $V6, $V8 # v8 := {w31, w30, w29, w28, w27, w26, w25, w24}
+
+ # Prepare a register with {w27, w26, w25, w24, w23, w22, w21, w20}
+ vslidedown.vi $V4, $V4, 2 # v4 := {X, X, X, X, w23, w22, w21, w20}
+ vslideup.vi $V4, $V8, 4 # v4 := {w27, w26, w25, w24, w23, w22, w21, w20}
+
+ vsm3c.vi $V0, $V4, 10
+ vslidedown.vi $V4, $V4, 2 # v4 := {X, X, w27, w26, w25, w24, w23, w22}
+ vsm3c.vi $V0, $V4, 11
+
+ vsm3c.vi $V0, $V8, 12
+ vslidedown.vi $V4, $V8, 2 # v4 := {X, X, w31, w30, w29, w28, w27, w26}
+ vsm3c.vi $V0, $V4, 13
+
+ vsm3me.vv $V6, $V8, $V6 # v6 := {w39, w38, w37, w36, w35, w34, w33, w32}
+
+ # Prepare a register with {w35, w34, w33, w32, w31, w30, w29, w28}
+ vslidedown.vi $V4, $V4, 2 # v4 := {X, X, X, X, w31, w30, w29, w28}
+ vslideup.vi $V4, $V6, 4 # v4 := {w35, w34, w33, w32, w31, w30, w29, w28}
+
+ vsm3c.vi $V0, $V4, 14
+ vslidedown.vi $V4, $V4, 2 # v4 := {X, X, w35, w34, w33, w32, w31, w30}
+ vsm3c.vi $V0, $V4, 15
+
+ vsm3c.vi $V0, $V6, 16
+ vslidedown.vi $V4, $V6, 2 # v4 := {X, X, w39, w38, w37, w36, w35, w34}
+ vsm3c.vi $V0, $V4, 17
+
+ vsm3me.vv $V8, $V6, $V8 # v8 := {w47, w46, w45, w44, w43, w42, w41, w40}
+
+ # Prepare a register with {w43, w42, w41, w40, w39, w38, w37, w36}
+ vslidedown.vi $V4, $V4, 2 # v4 := {X, X, X, X, w39, w38, w37, w36}
+ vslideup.vi $V4, $V8, 4 # v4 := {w43, w42, w41, w40, w39, w38, w37, w36}
+
+ vsm3c.vi $V0, $V4, 18
+ vslidedown.vi $V4, $V4, 2 # v4 := {X, X, w43, w42, w41, w40, w39, w38}
+ vsm3c.vi $V0, $V4, 19
+
+ vsm3c.vi $V0, $V8, 20
+ vslidedown.vi $V4, $V8, 2 # v4 := {X, X, w47, w46, w45, w44, w43, w42}
+ vsm3c.vi $V0, $V4, 21
+
+ vsm3me.vv $V6, $V8, $V6 # v6 := {w55, w54, w53, w52, w51, w50, w49, w48}
+
+ # Prepare a register with {w51, w50, w49, w48, w47, w46, w45, w44}
+ vslidedown.vi $V4, $V4, 2 # v4 := {X, X, X, X, w47, w46, w45, w44}
+ vslideup.vi $V4, $V6, 4 # v4 := {w51, w50, w49, w48, w47, w46, w45, w44}
+
+ vsm3c.vi $V0, $V4, 22
+ vslidedown.vi $V4, $V4, 2 # v4 := {X, X, w51, w50, w49, w48, w47, w46}
+ vsm3c.vi $V0, $V4, 23
+
+ vsm3c.vi $V0, $V6, 24
+ vslidedown.vi $V4, $V6, 2 # v4 := {X, X, w55, w54, w53, w52, w51, w50}
+ vsm3c.vi $V0, $V4, 25
+
+ vsm3me.vv $V8, $V6, $V8 # v8 := {w63, w62, w61, w60, w59, w58, w57, w56}
+
+ # Prepare a register with {w59, w58, w57, w56, w55, w54, w53, w52}
+ vslidedown.vi $V4, $V4, 2 # v4 := {X, X, X, X, w55, w54, w53, w52}
+ vslideup.vi $V4, $V8, 4 # v4 := {w59, w58, w57, w56, w55, w54, w53, w52}
+
+ vsm3c.vi $V0, $V4, 26
+ vslidedown.vi $V4, $V4, 2 # v4 := {X, X, w59, w58, w57, w56, w55, w54}
+ vsm3c.vi $V0, $V4, 27
+
+ vsm3c.vi $V0, $V8, 28
+ vslidedown.vi $V4, $V8, 2 # v4 := {X, X, w63, w62, w61, w60, w59, w58}
+ vsm3c.vi $V0, $V4, 29
+
+ vsm3me.vv $V6, $V8, $V6 # v6 := {w71, w70, w69, w68, w67, w66, w65, w64}
+
+ # Prepare a register with {w67, w66, w65, w64, w63, w62, w61, w60}
+ vslidedown.vi $V4, $V4, 2 # v4 := {X, X, X, X, w63, w62, w61, w60}
+ vslideup.vi $V4, $V6, 4 # v4 := {w67, w66, w65, w64, w63, w62, w61, w60}
+
+ vsm3c.vi $V0, $V4, 30
+ vslidedown.vi $V4, $V4, 2 # v4 := {X, X, w67, w66, w65, w64, w63, w62}
+ vsm3c.vi $V0, $V4, 31
+
+ # XOR in the previous state.
+ vxor.vv $V0, $V0, $V2
+
+ bnez $NUM, L_sm3_loop # Check if there are any more blocks to process
+L_sm3_end:
+ vrev8.v $V0, $V0
+ vse32.v $V0, ($CTX)
+ ret
+SYM_FUNC_END(ossl_hwsm3_block_data_order_zvksh)
+___
+}
+
+print $code;
+
+close STDOUT or die "error closing STDOUT: $!";
--
2.28.0
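As a closing note, the hashes in this series register through the regular
shash API, so callers never reference the drivers directly. A minimal sketch
of requesting the accelerated "sm3" from another kernel module
(sm3_digest_example() is a hypothetical function, error handling trimmed):

	#include <crypto/hash.h>
	#include <crypto/sm3.h>
	#include <linux/err.h>

	static int sm3_digest_example(const u8 *data, unsigned int len,
				      u8 out[SM3_DIGEST_SIZE])
	{
		struct crypto_shash *tfm;
		int ret;

		/* The higher cra_priority (150) lets the zvksh driver win over sm3-generic. */
		tfm = crypto_alloc_shash("sm3", 0, 0);
		if (IS_ERR(tfm))
			return PTR_ERR(tfm);

		ret = crypto_shash_tfm_digest(tfm, data, len, out);
		crypto_free_shash(tfm);

		return ret;
	}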