2022-05-21 06:59:12

by Nathan Huckleberry

Subject: [PATCH v9 0/9] crypto: HCTR2 support

HCTR2 is a length-preserving encryption mode that is efficient on
processors with instructions to accelerate AES and carryless
multiplication, e.g. x86 processors with AES-NI and CLMUL, and ARM
processors with the ARMv8 Crypto Extensions.

HCTR2 is specified in "Length-preserving encryption with HCTR2"
(https://ia.cr/2021/1441), which proves that HCTR2 is secure when it is
instantiated with a secure block cipher such as AES. Reference code and
test vectors are at https://github.com/google/hctr2.

As a length-preserving encryption mode, HCTR2 is suitable for applications
such as storage encryption where ciphertext expansion is not possible, and
thus authenticated encryption cannot be used. Currently, such applications
usually use XTS, or in some cases Adiantum. XTS has the disadvantage that
it is a narrow-block mode: a bitflip will only change 16 bytes in the
resulting ciphertext or plaintext. This reveals more information to an
attacker than necessary.

HCTR2 is a wide-block mode, so it provides a stronger security property: a
bitflip will change the entire message. HCTR2 is somewhat similar to
Adiantum, which is also a wide-block mode. However, HCTR2 is designed to
take advantage of existing crypto instructions, while Adiantum targets
devices without such hardware support. Adiantum is also designed with
longer messages in mind, while HCTR2 is designed to be efficient even on
short messages.

The first intended use of this mode in the kernel is for the encryption of
filenames, where for efficiency reasons encryption must be fully
deterministic (only one ciphertext for each plaintext) and the existing CBC
solution reveals when filenames share a common prefix: blocks before the
first difference encrypt identically.

HCTR2 uses two passes of an ε-almost-∆-universal hash function called
POLYVAL and one pass of a block cipher mode called XCTR. POLYVAL is a
polynomial hash designed for efficiency on modern processors and was
originally specified for use in AES-GCM-SIV (RFC 8452). XCTR mode is a
variant of CTR mode that is more efficient on little-endian machines.
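
To make the XCTR difference concrete, the i-th keystream block is
E_K(IV ^ i) with the counter XORed in as a little-endian integer, whereas
CTR increments a big-endian counter. A minimal C sketch of the keystream
derivation follows (hypothetical helper names; the real implementation is
this series' crypto/xctr.c):

  /* Assumed primitive: one 16-byte AES encryption with 'key'. */
  void aes_encrypt_block(const void *key, u8 out[16], const u8 in[16]);

  /* XCTR keystream block i (1-based): E_K(IV ^ i), i little-endian.
   * A 32-bit counter is shown; the spec permits wider counters. */
  static void xctr_keystream_block(const void *key, const u8 iv[16],
                                   u32 i, u8 out[16])
  {
          u8 blk[16];
          int b;

          memcpy(blk, iv, 16);
          for (b = 0; b < 4; b++)
                  blk[b] ^= (u8)(i >> (8 * b));
          aes_encrypt_block(key, out, blk);
  }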

This patchset adds HCTR2 to Linux's crypto API, including generic
implementations of XCTR and POLYVAL, hardware accelerated implementations
of XCTR and POLYVAL for both x86-64 and ARM64, a templated implementation
of HCTR2, and an fscrypt policy for using HCTR2 for filename encryption.
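
Once applied, HCTR2 instances are reachable through the standard skcipher
API; a minimal usage sketch (error handling elided; the algorithm name
matches this series' testmgr entry):

  struct crypto_skcipher *tfm;

  tfm = crypto_alloc_skcipher("hctr2(aes)", 0, 0);
  /* Plain AES key sizes; the 32-byte tweak is passed as the IV. */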

Changes in v9:
* Fix redefinition error

Changes in v8:
* Fix incorrect x86 POLYVAL comment
* Add additional comments to ARM64 XCTR/CTR implementation

Changes in v7:
* Added/modified some comments in ARM64 XCTR/CTR implementation
* Various small style fixes

Changes in v6:
* Split ARM64 XCTR/CTR refactoring into separate patch
* Allow simd POLYVAL implementations to be preempted
* Fix uninitialized bug in HCTR2
* Fix streamcipher name handling bug in HCTR2
* Various small style fixes

Changes in v5:
* Refactor HCTR2 tweak hashing
* Remove non-AVX x86-64 XCTR implementation
* Combine arm64 CTR and XCTR modes
* Comment and alias CTR and XCTR modes
* Move generic fallback code for simd POLYVAL into polyval-generic.c
* Various small style fixes

Changes in v4:
* Small style fixes in generic POLYVAL and XCTR
* Move HCTR2 hash exporting/importing to helper functions
* Rewrite montgomery reduction for x86-64 POLYVAL
* Rewrite partial block handling for x86-64 POLYVAL
* Optimize x86-64 POLYVAL loop handling
* Remove ahash wrapper from x86-64 POLYVAL
* Add simd-unavailable handling to x86-64 POLYVAL
* Rewrite montgomery reduction for ARM64 POLYVAL
* Rewrite partial block handling for ARM64 POLYVAL
* Optimize ARM64 POLYVAL loop handling
* Remove ahash wrapper from ARM64 POLYVAL
* Add simd-unavailable handling to ARM64 POLYVAL

Changes in v3:
* Improve testvec coverage for XCTR, POLYVAL and HCTR2
* Fix endianness bug in xctr.c
* Fix alignment issues in polyval-generic.c
* Optimize hctr2.c by exporting/importing hash states
* Fix blockcipher name derivation in hctr2.c
* Move x86-64 XCTR implementation into aes_ctrby8_avx-x86_64.S
* Reuse ARM64 CTR mode tail handling in ARM64 XCTR
* Fix x86-64 POLYVAL comments
* Fix x86-64 POLYVAL key_powers type to match asm
* Fix ARM64 POLYVAL comments
* Fix ARM64 POLYVAL key_powers type to match asm
* Add XTS + HCTR2 policy to fscrypt

Nathan Huckleberry (9):
crypto: xctr - Add XCTR support
crypto: polyval - Add POLYVAL support
crypto: hctr2 - Add HCTR2 support
crypto: x86/aesni-xctr: Add accelerated implementation of XCTR
crypto: arm64/aes-xctr: Add accelerated implementation of XCTR
crypto: arm64/aes-xctr: Improve readability of XCTR and CTR modes
crypto: x86/polyval: Add PCLMULQDQ accelerated implementation of
POLYVAL
crypto: arm64/polyval: Add PMULL accelerated implementation of POLYVAL
fscrypt: Add HCTR2 support for filename encryption

Documentation/filesystems/fscrypt.rst | 22 +-
arch/arm64/crypto/Kconfig | 9 +-
arch/arm64/crypto/Makefile | 3 +
arch/arm64/crypto/aes-glue.c | 80 +-
arch/arm64/crypto/aes-modes.S | 349 +++--
arch/arm64/crypto/polyval-ce-core.S | 361 ++++++
arch/arm64/crypto/polyval-ce-glue.c | 191 +++
arch/x86/crypto/Makefile | 3 +
arch/x86/crypto/aes_ctrby8_avx-x86_64.S | 232 ++--
arch/x86/crypto/aesni-intel_glue.c | 114 +-
arch/x86/crypto/polyval-clmulni_asm.S | 321 +++++
arch/x86/crypto/polyval-clmulni_glue.c | 203 +++
crypto/Kconfig | 39 +-
crypto/Makefile | 3 +
crypto/hctr2.c | 581 +++++++++
crypto/polyval-generic.c | 245 ++++
crypto/tcrypt.c | 10 +
crypto/testmgr.c | 20 +
crypto/testmgr.h | 1536 +++++++++++++++++++++++
crypto/xctr.c | 191 +++
fs/crypto/fscrypt_private.h | 2 +-
fs/crypto/keysetup.c | 7 +
fs/crypto/policy.c | 14 +-
include/crypto/polyval.h | 22 +
include/uapi/linux/fscrypt.h | 3 +-
25 files changed, 4362 insertions(+), 199 deletions(-)
create mode 100644 arch/arm64/crypto/polyval-ce-core.S
create mode 100644 arch/arm64/crypto/polyval-ce-glue.c
create mode 100644 arch/x86/crypto/polyval-clmulni_asm.S
create mode 100644 arch/x86/crypto/polyval-clmulni_glue.c
create mode 100644 crypto/hctr2.c
create mode 100644 crypto/polyval-generic.c
create mode 100644 crypto/xctr.c
create mode 100644 include/crypto/polyval.h

--
2.36.1.124.g0e6072fb45-goog



2022-05-23 05:31:07

by Nathan Huckleberry

Subject: [PATCH v9 8/9] crypto: arm64/polyval: Add PMULL accelerated implementation of POLYVAL

Add hardware accelerated version of POLYVAL for ARM64 CPUs with
Crypto Extensions support.

This implementation is accelerated using PMULL instructions to perform
the finite field computations. For added efficiency, 8 blocks of the
message are processed simultaneously by precomputing the first 8
powers of the key.

Karatsuba multiplication is used instead of Schoolbook multiplication
because it was found to be slightly faster on ARM64 CPUs. Montgomery
reduction must be used instead of Barrett reduction due to the
difference in modulus between POLYVAL's field and other finite fields.
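
For reference, the Karatsuba split used here: writing X = X_1*x^64 + X_0
and Y = Y_1*x^64 + Y_0 over GF(2), the product is

  X*Y = X_1*Y_1*x^128
      + ((X_0 + X_1)*(Y_0 + Y_1) + X_1*Y_1 + X_0*Y_0)*x^64
      + X_0*Y_0

i.e. three 64x64-bit carryless multiplies per block instead of
schoolbook's four.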

More information on POLYVAL can be found in the HCTR2 paper:
"Length-preserving encryption with HCTR2":
https://eprint.iacr.org/2021/1441.pdf

Signed-off-by: Nathan Huckleberry <[email protected]>
Reviewed-by: Ard Biesheuvel <[email protected]>
Reviewed-by: Eric Biggers <[email protected]>
---
arch/arm64/crypto/Kconfig | 5 +
arch/arm64/crypto/Makefile | 3 +
arch/arm64/crypto/polyval-ce-core.S | 361 ++++++++++++++++++++++++++++
arch/arm64/crypto/polyval-ce-glue.c | 191 +++++++++++++++
4 files changed, 560 insertions(+)
create mode 100644 arch/arm64/crypto/polyval-ce-core.S
create mode 100644 arch/arm64/crypto/polyval-ce-glue.c

diff --git a/arch/arm64/crypto/Kconfig b/arch/arm64/crypto/Kconfig
index 897f9a4b5b67..06431d298a92 100644
--- a/arch/arm64/crypto/Kconfig
+++ b/arch/arm64/crypto/Kconfig
@@ -60,6 +60,11 @@ config CRYPTO_GHASH_ARM64_CE
select CRYPTO_GF128MUL
select CRYPTO_LIB_AES

+config CRYPTO_POLYVAL_ARM64_CE
+ tristate "POLYVAL using ARMv8 Crypto Extensions (for HCTR2)"
+ depends on KERNEL_MODE_NEON
+ select CRYPTO_POLYVAL
+
config CRYPTO_CRCT10DIF_ARM64_CE
tristate "CRCT10DIF digest algorithm using PMULL instructions"
depends on KERNEL_MODE_NEON && CRC_T10DIF
diff --git a/arch/arm64/crypto/Makefile b/arch/arm64/crypto/Makefile
index 09a805cc32d7..53f9af962b86 100644
--- a/arch/arm64/crypto/Makefile
+++ b/arch/arm64/crypto/Makefile
@@ -26,6 +26,9 @@ sm4-ce-y := sm4-ce-glue.o sm4-ce-core.o
obj-$(CONFIG_CRYPTO_GHASH_ARM64_CE) += ghash-ce.o
ghash-ce-y := ghash-ce-glue.o ghash-ce-core.o

+obj-$(CONFIG_CRYPTO_POLYVAL_ARM64_CE) += polyval-ce.o
+polyval-ce-y := polyval-ce-glue.o polyval-ce-core.o
+
obj-$(CONFIG_CRYPTO_CRCT10DIF_ARM64_CE) += crct10dif-ce.o
crct10dif-ce-y := crct10dif-ce-core.o crct10dif-ce-glue.o

diff --git a/arch/arm64/crypto/polyval-ce-core.S b/arch/arm64/crypto/polyval-ce-core.S
new file mode 100644
index 000000000000..b5326540d2e3
--- /dev/null
+++ b/arch/arm64/crypto/polyval-ce-core.S
@@ -0,0 +1,361 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Implementation of POLYVAL using ARMv8 Crypto Extensions.
+ *
+ * Copyright 2021 Google LLC
+ */
+/*
+ * This is an efficient implementation of POLYVAL using ARMv8 Crypto Extensions.
+ * It works on 8 blocks at a time, by precomputing the first 8 key powers h^8,
+ * ..., h^1 in the POLYVAL finite field. This precomputation allows us to split
+ * finite field multiplication into two steps.
+ *
+ * In the first step, we consider h^i, m_i as normal polynomials of degree less
+ * than 128. We then compute p(x) = h^8m_0 + ... + h^1m_7 where multiplication
+ * is simply polynomial multiplication.
+ *
+ * In the second step, we compute the reduction of p(x) modulo the finite field
+ * modulus g(x) = x^128 + x^127 + x^126 + x^121 + 1.
+ *
+ * This two step process is equivalent to computing h^8m_0 + ... + h^1m_7 where
+ * multiplication is finite field multiplication. The advantage is that the
+ * two-step process only requires 1 finite field reduction for every 8
+ * polynomial multiplications. Further parallelism is gained by interleaving the
+ * multiplications and polynomial reductions.
+ */
+
+#include <linux/linkage.h>
+#define STRIDE_BLOCKS 8
+
+KEY_POWERS .req x0
+MSG .req x1
+BLOCKS_LEFT .req x2
+ACCUMULATOR .req x3
+KEY_START .req x10
+EXTRA_BYTES .req x11
+TMP .req x13
+
+M0 .req v0
+M1 .req v1
+M2 .req v2
+M3 .req v3
+M4 .req v4
+M5 .req v5
+M6 .req v6
+M7 .req v7
+KEY8 .req v8
+KEY7 .req v9
+KEY6 .req v10
+KEY5 .req v11
+KEY4 .req v12
+KEY3 .req v13
+KEY2 .req v14
+KEY1 .req v15
+PL .req v16
+PH .req v17
+TMP_V .req v18
+LO .req v20
+MI .req v21
+HI .req v22
+SUM .req v23
+GSTAR .req v24
+
+ .text
+
+ .arch armv8-a+crypto
+ .align 4
+
+.Lgstar:
+ .quad 0xc200000000000000, 0xc200000000000000
+
+/*
+ * Computes the product of two 128-bit polynomials in X and Y and XORs the
+ * components of the 256-bit product into LO, MI, HI.
+ *
+ * Given:
+ * X = [X_1 : X_0]
+ * Y = [Y_1 : Y_0]
+ *
+ * We compute:
+ * LO += X_0 * Y_0
+ * MI += (X_0 + X_1) * (Y_0 + Y_1)
+ * HI += X_1 * Y_1
+ *
+ * Later, the 256-bit result can be extracted as:
+ * [HI_1 : HI_1 + HI_0 + MI_1 + LO_1 : HI_0 + MI_0 + LO_1 + LO_0 : LO_0]
+ * This step is done when computing the polynomial reduction for efficiency
+ * reasons.
+ *
+ * Karatsuba multiplication is used instead of Schoolbook multiplication because
+ * it was found to be slightly faster on ARM64 CPUs.
+ *
+ */
+.macro karatsuba1 X Y
+ X .req \X
+ Y .req \Y
+ ext v25.16b, X.16b, X.16b, #8
+ ext v26.16b, Y.16b, Y.16b, #8
+ eor v25.16b, v25.16b, X.16b
+ eor v26.16b, v26.16b, Y.16b
+ pmull2 v28.1q, X.2d, Y.2d
+ pmull v29.1q, X.1d, Y.1d
+ pmull v27.1q, v25.1d, v26.1d
+ eor HI.16b, HI.16b, v28.16b
+ eor LO.16b, LO.16b, v29.16b
+ eor MI.16b, MI.16b, v27.16b
+ .unreq X
+ .unreq Y
+.endm
+
+/*
+ * Same as karatsuba1, except overwrites HI, LO, MI rather than XORing into
+ * them.
+ */
+.macro karatsuba1_store X Y
+ X .req \X
+ Y .req \Y
+ ext v25.16b, X.16b, X.16b, #8
+ ext v26.16b, Y.16b, Y.16b, #8
+ eor v25.16b, v25.16b, X.16b
+ eor v26.16b, v26.16b, Y.16b
+ pmull2 HI.1q, X.2d, Y.2d
+ pmull LO.1q, X.1d, Y.1d
+ pmull MI.1q, v25.1d, v26.1d
+ .unreq X
+ .unreq Y
+.endm
+
+/*
+ * Computes the 256-bit polynomial represented by LO, HI, MI. Stores
+ * the result in PL, PH.
+ * [PH : PL] =
+ * [HI_1 : HI_1 + HI_0 + MI_1 + LO_1 : HI_0 + MI_0 + LO_1 + LO_0 : LO_0]
+ */
+.macro karatsuba2
+ // v4 = [HI_1 + MI_1 : HI_0 + MI_0]
+ eor v4.16b, HI.16b, MI.16b
+ // v4 = [HI_1 + MI_1 + LO_1 : HI_0 + MI_0 + LO_0]
+ eor v4.16b, v4.16b, LO.16b
+ // v5 = [HI_0 : LO_1]
+ ext v5.16b, LO.16b, HI.16b, #8
+ // v4 = [HI_1 + HI_0 + MI_1 + LO_1 : HI_0 + MI_0 + LO_1 + LO_0]
+ eor v4.16b, v4.16b, v5.16b
+ // HI = [HI_0 : HI_1]
+ ext HI.16b, HI.16b, HI.16b, #8
+ // LO = [LO_0 : LO_1]
+ ext LO.16b, LO.16b, LO.16b, #8
+ // PH = [HI_1 : HI_1 + HI_0 + MI_1 + LO_1]
+ ext PH.16b, v4.16b, HI.16b, #8
+ // PL = [HI_0 + MI_0 + LO_1 + LO_0 : LO_0]
+ ext PL.16b, LO.16b, v4.16b, #8
+.endm
+
+/*
+ * Computes the 128-bit reduction of PH : PL. Stores the result in dest.
+ *
+ * This macro computes p(x) mod g(x) where p(x) is in Montgomery form and g(x) =
+ * x^128 + x^127 + x^126 + x^121 + 1.
+ *
+ * We have a 256-bit polynomial PH : PL = P_3 : P_2 : P_1 : P_0 that is the
+ * product of two 128-bit polynomials in Montgomery form. We need to reduce it
+ * mod g(x). Also, since polynomials in Montgomery form have an "extra" factor
+ * of x^128, this product has two extra factors of x^128. To get it back into
+ * Montgomery form, we need to remove one of these factors by dividing by x^128.
+ *
+ * To accomplish both of these goals, we add multiples of g(x) that cancel out
+ * the low 128 bits P_1 : P_0, leaving just the high 128 bits. Since the low
+ * bits are zero, the polynomial division by x^128 can be done by right
+ * shifting.
+ *
+ * Since the only nonzero term in the low 64 bits of g(x) is the constant term,
+ * the multiple of g(x) needed to cancel out P_0 is P_0 * g(x). The CPU can
+ * only do 64x64 bit multiplications, so split P_0 * g(x) into x^128 * P_0 +
+ * x^64 * g*(x) * P_0 + P_0, where g*(x) is bits 64-127 of g(x). Adding this to
+ * the original polynomial gives P_3 : P_2 + P_0 + T_1 : P_1 + T_0 : 0, where T
+ * = T_1 : T_0 = g*(x) * P_0. Thus, bits 0-63 got "folded" into bits 64-191.
+ *
+ * Repeating this same process on the next 64 bits "folds" bits 64-127 into bits
+ * 128-255, giving the answer in bits 128-255. This time, we need to cancel P_1
+ * + T_0 in bits 64-127. The multiple of g(x) required is (P_1 + T_0) * g(x) *
+ * x^64. Adding this to our previous computation gives P_3 + P_1 + T_0 + V_1 :
+ * P_2 + P_0 + T_1 + V_0 : 0 : 0, where V = V_1 : V_0 = g*(x) * (P_1 + T_0).
+ *
+ * So our final computation is:
+ * T = T_1 : T_0 = g*(x) * P_0
+ * V = V_1 : V_0 = g*(x) * (P_1 + T_0)
+ * p(x) / x^{128} mod g(x) = P_3 + P_1 + T_0 + V_1 : P_2 + P_0 + T_1 + V_0
+ *
+ * The implementation below saves a XOR instruction by computing P_1 + T_0 : P_0
+ * + T_1 and XORing into dest, rather than separately XORing P_1 : P_0 and T_0 :
+ * T_1 into dest. This allows us to reuse P_1 + T_0 when computing V.
+ */
+.macro montgomery_reduction dest
+ DEST .req \dest
+ // TMP_V = T_1 : T_0 = P_0 * g*(x)
+ pmull TMP_V.1q, PL.1d, GSTAR.1d
+ // TMP_V = T_0 : T_1
+ ext TMP_V.16b, TMP_V.16b, TMP_V.16b, #8
+ // TMP_V = P_1 + T_0 : P_0 + T_1
+ eor TMP_V.16b, PL.16b, TMP_V.16b
+ // PH = P_3 + P_1 + T_0 : P_2 + P_0 + T_1
+ eor PH.16b, PH.16b, TMP_V.16b
+ // TMP_V = V_1 : V_0 = (P_1 + T_0) * g*(x)
+ pmull2 TMP_V.1q, TMP_V.2d, GSTAR.2d
+ eor DEST.16b, PH.16b, TMP_V.16b
+ .unreq DEST
+.endm
+
+/*
+ * Compute Polyval on 8 blocks.
+ *
+ * If reduce is set, also computes the montgomery reduction of the
+ * previous full_stride call and XORs with the first message block.
+ * (m_0 + REDUCE(PL, PH))h^8 + ... + m_7h^1.
+ * I.e., the first multiplication uses m_0 + REDUCE(PL, PH) instead of m_0.
+ *
+ * Sets PL, PH.
+ */
+.macro full_stride reduce
+ eor LO.16b, LO.16b, LO.16b
+ eor MI.16b, MI.16b, MI.16b
+ eor HI.16b, HI.16b, HI.16b
+
+ ld1 {M0.16b, M1.16b, M2.16b, M3.16b}, [MSG], #64
+ ld1 {M4.16b, M5.16b, M6.16b, M7.16b}, [MSG], #64
+
+ karatsuba1 M7 KEY1
+ .if \reduce
+ pmull TMP_V.1q, PL.1d, GSTAR.1d
+ .endif
+
+ karatsuba1 M6 KEY2
+ .if \reduce
+ ext TMP_V.16b, TMP_V.16b, TMP_V.16b, #8
+ .endif
+
+ karatsuba1 M5 KEY3
+ .if \reduce
+ eor TMP_V.16b, PL.16b, TMP_V.16b
+ .endif
+
+ karatsuba1 M4 KEY4
+ .if \reduce
+ eor PH.16b, PH.16b, TMP_V.16b
+ .endif
+
+ karatsuba1 M3 KEY5
+ .if \reduce
+ pmull2 TMP_V.1q, TMP_V.2d, GSTAR.2d
+ .endif
+
+ karatsuba1 M2 KEY6
+ .if \reduce
+ eor SUM.16b, PH.16b, TMP_V.16b
+ .endif
+
+ karatsuba1 M1 KEY7
+ eor M0.16b, M0.16b, SUM.16b
+
+ karatsuba1 M0 KEY8
+ karatsuba2
+.endm
+
+/*
+ * Handle any extra blocks after full_stride loop.
+ */
+.macro partial_stride
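+ // Point KEY_POWERS at h^{BLOCKS_LEFT}: the key powers are stored in
+ // the order h^8, ..., h^1, so h^k lives at offset (8 - k) * 16 from
+ // KEY_START.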
+ add KEY_POWERS, KEY_START, #(STRIDE_BLOCKS << 4)
+ sub KEY_POWERS, KEY_POWERS, BLOCKS_LEFT, lsl #4
+ ld1 {KEY1.16b}, [KEY_POWERS], #16
+
+ ld1 {TMP_V.16b}, [MSG], #16
+ eor SUM.16b, SUM.16b, TMP_V.16b
+ karatsuba1_store KEY1 SUM
+ sub BLOCKS_LEFT, BLOCKS_LEFT, #1
+
+ tst BLOCKS_LEFT, #4
+ beq .Lpartial4BlocksDone
+ ld1 {M0.16b, M1.16b, M2.16b, M3.16b}, [MSG], #64
+ ld1 {KEY8.16b, KEY7.16b, KEY6.16b, KEY5.16b}, [KEY_POWERS], #64
+ karatsuba1 M0 KEY8
+ karatsuba1 M1 KEY7
+ karatsuba1 M2 KEY6
+ karatsuba1 M3 KEY5
+.Lpartial4BlocksDone:
+ tst BLOCKS_LEFT, #2
+ beq .Lpartial2BlocksDone
+ ld1 {M0.16b, M1.16b}, [MSG], #32
+ ld1 {KEY8.16b, KEY7.16b}, [KEY_POWERS], #32
+ karatsuba1 M0 KEY8
+ karatsuba1 M1 KEY7
+.Lpartial2BlocksDone:
+ tst BLOCKS_LEFT, #1
+ beq .LpartialDone
+ ld1 {M0.16b}, [MSG], #16
+ ld1 {KEY8.16b}, [KEY_POWERS], #16
+ karatsuba1 M0 KEY8
+.LpartialDone:
+ karatsuba2
+ montgomery_reduction SUM
+.endm
+
+/*
+ * Perform montgomery multiplication in GF(2^128) and store result in op1.
+ *
+ * Computes op1*op2*x^{-128} mod x^128 + x^127 + x^126 + x^121 + 1
+ * If op1, op2 are in montgomery form, this computes the montgomery
+ * form of op1*op2.
+ *
+ * void pmull_polyval_mul(u8 *op1, const u8 *op2);
+ */
+SYM_FUNC_START(pmull_polyval_mul)
+ adr TMP, .Lgstar
+ ld1 {GSTAR.2d}, [TMP]
+ ld1 {v0.16b}, [x0]
+ ld1 {v1.16b}, [x1]
+ karatsuba1_store v0 v1
+ karatsuba2
+ montgomery_reduction SUM
+ st1 {SUM.16b}, [x0]
+ ret
+SYM_FUNC_END(pmull_polyval_mul)
+
+/*
+ * Perform polynomial evaluation as specified by POLYVAL. This computes:
+ * h^n * accumulator + h^n * m_0 + ... + h^1 * m_{n-1}
+ * where n=nblocks, h is the hash key, and m_i are the message blocks.
+ *
+ * x0 - pointer to precomputed key powers h^8 ... h^1
+ * x1 - pointer to message blocks
+ * x2 - number of blocks to hash
+ * x3 - pointer to accumulator
+ *
+ * void pmull_polyval_update(const struct polyval_tfm_ctx *keys, const u8 *in,
+ * size_t nblocks, u8 *accumulator);
+ */
+SYM_FUNC_START(pmull_polyval_update)
+ adr TMP, .Lgstar
+ mov KEY_START, KEY_POWERS
+ ld1 {GSTAR.2d}, [TMP]
+ ld1 {SUM.16b}, [ACCUMULATOR]
+ subs BLOCKS_LEFT, BLOCKS_LEFT, #STRIDE_BLOCKS
+ blt .LstrideLoopExit
+ ld1 {KEY8.16b, KEY7.16b, KEY6.16b, KEY5.16b}, [KEY_POWERS], #64
+ ld1 {KEY4.16b, KEY3.16b, KEY2.16b, KEY1.16b}, [KEY_POWERS], #64
+ full_stride 0
+ subs BLOCKS_LEFT, BLOCKS_LEFT, #STRIDE_BLOCKS
+ blt .LstrideLoopExitReduce
+.LstrideLoop:
+ full_stride 1
+ subs BLOCKS_LEFT, BLOCKS_LEFT, #STRIDE_BLOCKS
+ bge .LstrideLoop
+.LstrideLoopExitReduce:
+ montgomery_reduction SUM
+.LstrideLoopExit:
+ adds BLOCKS_LEFT, BLOCKS_LEFT, #STRIDE_BLOCKS
+ beq .LskipPartial
+ partial_stride
+.LskipPartial:
+ st1 {SUM.16b}, [ACCUMULATOR]
+ ret
+SYM_FUNC_END(pmull_polyval_update)
diff --git a/arch/arm64/crypto/polyval-ce-glue.c b/arch/arm64/crypto/polyval-ce-glue.c
new file mode 100644
index 000000000000..0a3b5718df85
--- /dev/null
+++ b/arch/arm64/crypto/polyval-ce-glue.c
@@ -0,0 +1,191 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Glue code for POLYVAL using ARMv8 Crypto Extensions
+ *
+ * Copyright (c) 2007 Nokia Siemens Networks - Mikko Herranen <[email protected]>
+ * Copyright (c) 2009 Intel Corp.
+ * Author: Huang Ying <[email protected]>
+ * Copyright 2021 Google LLC
+ */
+
+/*
+ * Glue code based on ghash-clmulni-intel_glue.c.
+ *
+ * This implementation of POLYVAL uses Montgomery multiplication accelerated by
+ * ARMv8 Crypto Extensions instructions to implement the finite field operations.
+ */
+
+#include <crypto/algapi.h>
+#include <crypto/internal/hash.h>
+#include <crypto/internal/simd.h>
+#include <crypto/polyval.h>
+#include <linux/crypto.h>
+#include <linux/init.h>
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/cpufeature.h>
+#include <asm/neon.h>
+#include <asm/simd.h>
+
+#define NUM_KEY_POWERS 8
+
+struct polyval_tfm_ctx {
+ /*
+ * These powers must be in the order h^8, ..., h^1.
+ */
+ u8 key_powers[NUM_KEY_POWERS][POLYVAL_BLOCK_SIZE];
+};
+
+struct polyval_desc_ctx {
+ u8 buffer[POLYVAL_BLOCK_SIZE];
+ u32 bytes;
+};
+
+asmlinkage void pmull_polyval_update(const struct polyval_tfm_ctx *keys,
+ const u8 *in, size_t nblocks, u8 *accumulator);
+asmlinkage void pmull_polyval_mul(u8 *op1, const u8 *op2);
+
+static void internal_polyval_update(const struct polyval_tfm_ctx *keys,
+ const u8 *in, size_t nblocks, u8 *accumulator)
+{
+ if (likely(crypto_simd_usable())) {
+ kernel_neon_begin();
+ pmull_polyval_update(keys, in, nblocks, accumulator);
+ kernel_neon_end();
+ } else {
+ polyval_update_non4k(keys->key_powers[NUM_KEY_POWERS-1], in,
+ nblocks, accumulator);
+ }
+}
+
+static void internal_polyval_mul(u8 *op1, const u8 *op2)
+{
+ if (likely(crypto_simd_usable())) {
+ kernel_neon_begin();
+ pmull_polyval_mul(op1, op2);
+ kernel_neon_end();
+ } else {
+ polyval_mul_non4k(op1, op2);
+ }
+}
+
+static int polyval_arm64_setkey(struct crypto_shash *tfm,
+ const u8 *key, unsigned int keylen)
+{
+ struct polyval_tfm_ctx *tctx = crypto_shash_ctx(tfm);
+ int i;
+
+ if (keylen != POLYVAL_BLOCK_SIZE)
+ return -EINVAL;
+
+ memcpy(tctx->key_powers[NUM_KEY_POWERS-1], key, POLYVAL_BLOCK_SIZE);
+
+ for (i = NUM_KEY_POWERS-2; i >= 0; i--) {
+ memcpy(tctx->key_powers[i], key, POLYVAL_BLOCK_SIZE);
+ internal_polyval_mul(tctx->key_powers[i],
+ tctx->key_powers[i+1]);
+ }
+
+ return 0;
+}
+
+static int polyval_arm64_init(struct shash_desc *desc)
+{
+ struct polyval_desc_ctx *dctx = shash_desc_ctx(desc);
+
+ memset(dctx, 0, sizeof(*dctx));
+
+ return 0;
+}
+
+static int polyval_arm64_update(struct shash_desc *desc,
+ const u8 *src, unsigned int srclen)
+{
+ struct polyval_desc_ctx *dctx = shash_desc_ctx(desc);
+ const struct polyval_tfm_ctx *tctx = crypto_shash_ctx(desc->tfm);
+ u8 *pos;
+ unsigned int nblocks;
+ unsigned int n;
+
+ if (dctx->bytes) {
+ n = min(srclen, dctx->bytes);
+ pos = dctx->buffer + POLYVAL_BLOCK_SIZE - dctx->bytes;
+
+ dctx->bytes -= n;
+ srclen -= n;
+
+ while (n--)
+ *pos++ ^= *src++;
+
+ if (!dctx->bytes)
+ internal_polyval_mul(dctx->buffer,
+ tctx->key_powers[NUM_KEY_POWERS-1]);
+ }
+
+ while (srclen >= POLYVAL_BLOCK_SIZE) {
+ /* allow rescheduling every 4K bytes */
+ nblocks = min(srclen, 4096U) / POLYVAL_BLOCK_SIZE;
+ internal_polyval_update(tctx, src, nblocks, dctx->buffer);
+ srclen -= nblocks * POLYVAL_BLOCK_SIZE;
+ src += nblocks * POLYVAL_BLOCK_SIZE;
+ }
+
+ if (srclen) {
+ dctx->bytes = POLYVAL_BLOCK_SIZE - srclen;
+ pos = dctx->buffer;
+ while (srclen--)
+ *pos++ ^= *src++;
+ }
+
+ return 0;
+}
+
+static int polyval_arm64_final(struct shash_desc *desc, u8 *dst)
+{
+ struct polyval_desc_ctx *dctx = shash_desc_ctx(desc);
+ const struct polyval_tfm_ctx *tctx = crypto_shash_ctx(desc->tfm);
+
+ if (dctx->bytes) {
+ internal_polyval_mul(dctx->buffer,
+ tctx->key_powers[NUM_KEY_POWERS-1]);
+ }
+
+ memcpy(dst, dctx->buffer, POLYVAL_BLOCK_SIZE);
+
+ return 0;
+}
+
+static struct shash_alg polyval_alg = {
+ .digestsize = POLYVAL_DIGEST_SIZE,
+ .init = polyval_arm64_init,
+ .update = polyval_arm64_update,
+ .final = polyval_arm64_final,
+ .setkey = polyval_arm64_setkey,
+ .descsize = sizeof(struct polyval_desc_ctx),
+ .base = {
+ .cra_name = "polyval",
+ .cra_driver_name = "polyval-ce",
+ .cra_priority = 200,
+ .cra_blocksize = POLYVAL_BLOCK_SIZE,
+ .cra_ctxsize = sizeof(struct polyval_tfm_ctx),
+ .cra_module = THIS_MODULE,
+ },
+};
+
+static int __init polyval_ce_mod_init(void)
+{
+ return crypto_register_shash(&polyval_alg);
+}
+
+static void __exit polyval_ce_mod_exit(void)
+{
+ crypto_unregister_shash(&polyval_alg);
+}
+
+module_cpu_feature_match(PMULL, polyval_ce_mod_init)
+module_exit(polyval_ce_mod_exit);
+
+MODULE_LICENSE("GPL");
+MODULE_DESCRIPTION("POLYVAL hash function accelerated by ARMv8 Crypto Extensions");
+MODULE_ALIAS_CRYPTO("polyval");
+MODULE_ALIAS_CRYPTO("polyval-ce");
--
2.36.1.124.g0e6072fb45-goog


2022-05-23 06:50:10

by Nathan Huckleberry

Subject: [PATCH v9 3/9] crypto: hctr2 - Add HCTR2 support

Add support for HCTR2 as a template. HCTR2 is a length-preserving
encryption mode that is efficient on processors with instructions to
accelerate AES and carryless multiplication, e.g. x86 processors with
AES-NI and CLMUL, and ARM processors with the ARMv8 Crypto Extensions.

As a length-preserving encryption mode, HCTR2 is suitable for
applications such as storage encryption where ciphertext expansion is
not possible, and thus authenticated encryption cannot be used.
Currently, such applications usually use XTS, or in some cases Adiantum.
XTS has the disadvantage that it is a narrow-block mode: a bitflip will
only change 16 bytes in the resulting ciphertext or plaintext. This
reveals more information to an attacker than necessary.

HCTR2 is a wide-block mode, so it provides a stronger security property:
a bitflip will change the entire message. HCTR2 is somewhat similar to
Adiantum, which is also a wide-block mode. However, HCTR2 is designed
to take advantage of existing crypto instructions, while Adiantum
targets devices without such hardware support. Adiantum is also
designed with longer messages in mind, while HCTR2 is designed to be
efficient even on short messages.

HCTR2 requires POLYVAL and XCTR as components. More information on
HCTR2 can be found here: "Length-preserving encryption with HCTR2":
https://eprint.iacr.org/2021/1441.pdf
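
For reference, the encryption flow implemented by this template, in the
notation used by the comments in hctr2.c (plaintext split as M || N with M
a single 16-byte block, T the tweak, H = POLYVAL, E = the block cipher, and
L a key-derived constant):

  MM = M ^ H(T || N)
  UU = E(MM)
  S  = MM ^ UU ^ L
  V  = N ^ XCTR_K(S)
  U  = UU ^ H(T || V)

The ciphertext is U || V. Decryption runs the same steps with the single
block cipher call inverted; the hashes and XCTR are unchanged.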

Signed-off-by: Nathan Huckleberry <[email protected]>
Reviewed-by: Ard Biesheuvel <[email protected]>
Reviewed-by: Eric Biggers <[email protected]>
---
crypto/Kconfig | 11 +
crypto/Makefile | 1 +
crypto/hctr2.c | 581 ++++++++++++++++++++++++++++++++++++++++
crypto/tcrypt.c | 5 +
crypto/testmgr.c | 8 +
crypto/testmgr.h | 672 +++++++++++++++++++++++++++++++++++++++++++++++
6 files changed, 1278 insertions(+)
create mode 100644 crypto/hctr2.c

diff --git a/crypto/Kconfig b/crypto/Kconfig
index 00139845d76d..0dedba74db4a 100644
--- a/crypto/Kconfig
+++ b/crypto/Kconfig
@@ -532,6 +532,17 @@ config CRYPTO_ADIANTUM

If unsure, say N.

+config CRYPTO_HCTR2
+ tristate "HCTR2 support"
+ select CRYPTO_XCTR
+ select CRYPTO_POLYVAL
+ select CRYPTO_MANAGER
+ help
+ HCTR2 is a length-preserving encryption mode for storage encryption that
+ is efficient on processors with instructions to accelerate AES and
+ carryless multiplication, e.g. x86 processors with AES-NI and CLMUL, and
+ ARM processors with the ARMv8 crypto extensions.
+
config CRYPTO_ESSIV
tristate "ESSIV support for block encryption"
select CRYPTO_AUTHENC
diff --git a/crypto/Makefile b/crypto/Makefile
index 561f901a91d4..2dca9dbdede6 100644
--- a/crypto/Makefile
+++ b/crypto/Makefile
@@ -94,6 +94,7 @@ obj-$(CONFIG_CRYPTO_LRW) += lrw.o
obj-$(CONFIG_CRYPTO_XTS) += xts.o
obj-$(CONFIG_CRYPTO_CTR) += ctr.o
obj-$(CONFIG_CRYPTO_XCTR) += xctr.o
+obj-$(CONFIG_CRYPTO_HCTR2) += hctr2.o
obj-$(CONFIG_CRYPTO_KEYWRAP) += keywrap.o
obj-$(CONFIG_CRYPTO_ADIANTUM) += adiantum.o
obj-$(CONFIG_CRYPTO_NHPOLY1305) += nhpoly1305.o
diff --git a/crypto/hctr2.c b/crypto/hctr2.c
new file mode 100644
index 000000000000..7d00a3bcb667
--- /dev/null
+++ b/crypto/hctr2.c
@@ -0,0 +1,581 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * HCTR2 length-preserving encryption mode
+ *
+ * Copyright 2021 Google LLC
+ */
+
+
+/*
+ * HCTR2 is a length-preserving encryption mode that is efficient on
+ * processors with instructions to accelerate AES and carryless
+ * multiplication, e.g. x86 processors with AES-NI and CLMUL, and ARM
+ * processors with the ARMv8 crypto extensions.
+ *
+ * For more details, see the paper: "Length-preserving encryption with HCTR2"
+ * (https://eprint.iacr.org/2021/1441.pdf)
+ */
+
+#include <crypto/internal/cipher.h>
+#include <crypto/internal/hash.h>
+#include <crypto/internal/skcipher.h>
+#include <crypto/polyval.h>
+#include <crypto/scatterwalk.h>
+#include <linux/module.h>
+
+#define BLOCKCIPHER_BLOCK_SIZE 16
+
+/*
+ * The specification allows variable-length tweaks, but Linux's crypto API
+ * currently only allows algorithms to support a single length. The "natural"
+ * tweak length for HCTR2 is 16, since that fits into one POLYVAL block for
+ * the best performance. But longer tweaks are useful for fscrypt, to avoid
+ * needing to derive per-file keys. So instead we use two blocks, or 32 bytes.
+ */
+#define TWEAK_SIZE 32
+
+struct hctr2_instance_ctx {
+ struct crypto_cipher_spawn blockcipher_spawn;
+ struct crypto_skcipher_spawn xctr_spawn;
+ struct crypto_shash_spawn polyval_spawn;
+};
+
+struct hctr2_tfm_ctx {
+ struct crypto_cipher *blockcipher;
+ struct crypto_skcipher *xctr;
+ struct crypto_shash *polyval;
+ u8 L[BLOCKCIPHER_BLOCK_SIZE];
+ int hashed_tweak_offset;
+ /*
+ * This struct is allocated with extra space for two exported hash
+ * states. Since the hash state size is not known at compile-time, we
+ * can't add these to the struct directly.
+ *
+ * hashed_tweaklen_divisible;
+ * hashed_tweaklen_remainder;
+ */
+};
+
+struct hctr2_request_ctx {
+ u8 first_block[BLOCKCIPHER_BLOCK_SIZE];
+ u8 xctr_iv[BLOCKCIPHER_BLOCK_SIZE];
+ struct scatterlist *bulk_part_dst;
+ struct scatterlist *bulk_part_src;
+ struct scatterlist sg_src[2];
+ struct scatterlist sg_dst[2];
+ /*
+ * Sub-request sizes are unknown at compile-time, so they need to go
+ * after the members with known sizes.
+ */
+ union {
+ struct shash_desc hash_desc;
+ struct skcipher_request xctr_req;
+ } u;
+ /*
+ * This struct is allocated with extra space for one exported hash
+ * state. Since the hash state size is not known at compile-time, we
+ * can't add it to the struct directly.
+ *
+ * hashed_tweak;
+ */
+};
+
+static inline u8 *hctr2_hashed_tweaklen(const struct hctr2_tfm_ctx *tctx,
+ bool has_remainder)
+{
+ u8 *p = (u8 *)tctx + sizeof(*tctx);
+
+ if (has_remainder) /* For messages not a multiple of block length */
+ p += crypto_shash_statesize(tctx->polyval);
+ return p;
+}
+
+static inline u8 *hctr2_hashed_tweak(const struct hctr2_tfm_ctx *tctx,
+ struct hctr2_request_ctx *rctx)
+{
+ return (u8 *)rctx + tctx->hashed_tweak_offset;
+}
+
+/*
+ * The input data for each HCTR2 hash step begins with a 16-byte block that
+ * contains the tweak length and a flag that indicates whether the input is
+ * evenly divisible into blocks. Since this implementation only supports one
+ * tweak length, we precompute the two hash states resulting from hashing the
+ * two possible values of this initial block. This reduces by one block the
+ * amount of data that needs to be hashed for each encryption/decryption.
+ *
+ * These precomputed hashes are stored in hctr2_tfm_ctx.
+ */
+static int hctr2_hash_tweaklen(struct hctr2_tfm_ctx *tctx, bool has_remainder)
+{
+ SHASH_DESC_ON_STACK(shash, tctx->polyval);
+ __le64 tweak_length_block[2];
+ int err;
+
+ shash->tfm = tctx->polyval;
+ memset(tweak_length_block, 0, sizeof(tweak_length_block));
+
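+ /*
+ * The first block of the hash input encodes the tweak: it is
+ * 2 * (tweak length in bits) + 2 + has_remainder, as specified by
+ * HCTR2.
+ */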
+ tweak_length_block[0] = cpu_to_le64(TWEAK_SIZE * 8 * 2 + 2 + has_remainder);
+ err = crypto_shash_init(shash);
+ if (err)
+ return err;
+ err = crypto_shash_update(shash, (u8 *)tweak_length_block,
+ POLYVAL_BLOCK_SIZE);
+ if (err)
+ return err;
+ return crypto_shash_export(shash, hctr2_hashed_tweaklen(tctx, has_remainder));
+}
+
+static int hctr2_setkey(struct crypto_skcipher *tfm, const u8 *key,
+ unsigned int keylen)
+{
+ struct hctr2_tfm_ctx *tctx = crypto_skcipher_ctx(tfm);
+ u8 hbar[BLOCKCIPHER_BLOCK_SIZE];
+ int err;
+
+ crypto_cipher_clear_flags(tctx->blockcipher, CRYPTO_TFM_REQ_MASK);
+ crypto_cipher_set_flags(tctx->blockcipher,
+ crypto_skcipher_get_flags(tfm) &
+ CRYPTO_TFM_REQ_MASK);
+ err = crypto_cipher_setkey(tctx->blockcipher, key, keylen);
+ if (err)
+ return err;
+
+ crypto_skcipher_clear_flags(tctx->xctr, CRYPTO_TFM_REQ_MASK);
+ crypto_skcipher_set_flags(tctx->xctr,
+ crypto_skcipher_get_flags(tfm) &
+ CRYPTO_TFM_REQ_MASK);
+ err = crypto_skcipher_setkey(tctx->xctr, key, keylen);
+ if (err)
+ return err;
+
+ memset(hbar, 0, sizeof(hbar));
+ crypto_cipher_encrypt_one(tctx->blockcipher, hbar, hbar);
+
+ memset(tctx->L, 0, sizeof(tctx->L));
+ tctx->L[0] = 0x01;
+ crypto_cipher_encrypt_one(tctx->blockcipher, tctx->L, tctx->L);
+
+ crypto_shash_clear_flags(tctx->polyval, CRYPTO_TFM_REQ_MASK);
+ crypto_shash_set_flags(tctx->polyval, crypto_skcipher_get_flags(tfm) &
+ CRYPTO_TFM_REQ_MASK);
+ err = crypto_shash_setkey(tctx->polyval, hbar, BLOCKCIPHER_BLOCK_SIZE);
+ if (err)
+ return err;
+ memzero_explicit(hbar, sizeof(hbar));
+
+ return hctr2_hash_tweaklen(tctx, true) ?: hctr2_hash_tweaklen(tctx, false);
+}
+
+static int hctr2_hash_tweak(struct skcipher_request *req)
+{
+ struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
+ const struct hctr2_tfm_ctx *tctx = crypto_skcipher_ctx(tfm);
+ struct hctr2_request_ctx *rctx = skcipher_request_ctx(req);
+ struct shash_desc *hash_desc = &rctx->u.hash_desc;
+ int err;
+ bool has_remainder = req->cryptlen % POLYVAL_BLOCK_SIZE;
+
+ hash_desc->tfm = tctx->polyval;
+ err = crypto_shash_import(hash_desc, hctr2_hashed_tweaklen(tctx, has_remainder));
+ if (err)
+ return err;
+ err = crypto_shash_update(hash_desc, req->iv, TWEAK_SIZE);
+ if (err)
+ return err;
+
+ // Store the hashed tweak, since we need it when computing both
+ // H(T || N) and H(T || V).
+ return crypto_shash_export(hash_desc, hctr2_hashed_tweak(tctx, rctx));
+}
+
+static int hctr2_hash_message(struct skcipher_request *req,
+ struct scatterlist *sgl,
+ u8 digest[POLYVAL_DIGEST_SIZE])
+{
+ static const u8 padding[BLOCKCIPHER_BLOCK_SIZE] = { 0x1 };
+ struct hctr2_request_ctx *rctx = skcipher_request_ctx(req);
+ struct shash_desc *hash_desc = &rctx->u.hash_desc;
+ const unsigned int bulk_len = req->cryptlen - BLOCKCIPHER_BLOCK_SIZE;
+ struct sg_mapping_iter miter;
+ unsigned int remainder = bulk_len % BLOCKCIPHER_BLOCK_SIZE;
+ int i;
+ int err = 0;
+ int n = 0;
+
+ sg_miter_start(&miter, sgl, sg_nents(sgl),
+ SG_MITER_FROM_SG | SG_MITER_ATOMIC);
+ for (i = 0; i < bulk_len; i += n) {
+ sg_miter_next(&miter);
+ n = min_t(unsigned int, miter.length, bulk_len - i);
+ err = crypto_shash_update(hash_desc, miter.addr, n);
+ if (err)
+ break;
+ }
+ sg_miter_stop(&miter);
+
+ if (err)
+ return err;
+
+ if (remainder) {
+ err = crypto_shash_update(hash_desc, padding,
+ BLOCKCIPHER_BLOCK_SIZE - remainder);
+ if (err)
+ return err;
+ }
+ return crypto_shash_final(hash_desc, digest);
+}
+
+static int hctr2_finish(struct skcipher_request *req)
+{
+ struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
+ const struct hctr2_tfm_ctx *tctx = crypto_skcipher_ctx(tfm);
+ struct hctr2_request_ctx *rctx = skcipher_request_ctx(req);
+ u8 digest[POLYVAL_DIGEST_SIZE];
+ struct shash_desc *hash_desc = &rctx->u.hash_desc;
+ int err;
+
+ // U = UU ^ H(T || V)
+ // or M = MM ^ H(T || N)
+ hash_desc->tfm = tctx->polyval;
+ err = crypto_shash_import(hash_desc, hctr2_hashed_tweak(tctx, rctx));
+ if (err)
+ return err;
+ err = hctr2_hash_message(req, rctx->bulk_part_dst, digest);
+ if (err)
+ return err;
+ crypto_xor(rctx->first_block, digest, BLOCKCIPHER_BLOCK_SIZE);
+
+ // Copy U (or M) into dst scatterlist
+ scatterwalk_map_and_copy(rctx->first_block, req->dst,
+ 0, BLOCKCIPHER_BLOCK_SIZE, 1);
+ return 0;
+}
+
+static void hctr2_xctr_done(struct crypto_async_request *areq,
+ int err)
+{
+ struct skcipher_request *req = areq->data;
+
+ if (!err)
+ err = hctr2_finish(req);
+
+ skcipher_request_complete(req, err);
+}
+
+static int hctr2_crypt(struct skcipher_request *req, bool enc)
+{
+ struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
+ const struct hctr2_tfm_ctx *tctx = crypto_skcipher_ctx(tfm);
+ struct hctr2_request_ctx *rctx = skcipher_request_ctx(req);
+ u8 digest[POLYVAL_DIGEST_SIZE];
+ int bulk_len = req->cryptlen - BLOCKCIPHER_BLOCK_SIZE;
+ int err;
+
+ // Requests must be at least one block
+ if (req->cryptlen < BLOCKCIPHER_BLOCK_SIZE)
+ return -EINVAL;
+
+ // Copy M (or U) into a temporary buffer
+ scatterwalk_map_and_copy(rctx->first_block, req->src,
+ 0, BLOCKCIPHER_BLOCK_SIZE, 0);
+
+ // Create scatterlists for N and V
+ rctx->bulk_part_src = scatterwalk_ffwd(rctx->sg_src, req->src,
+ BLOCKCIPHER_BLOCK_SIZE);
+ rctx->bulk_part_dst = scatterwalk_ffwd(rctx->sg_dst, req->dst,
+ BLOCKCIPHER_BLOCK_SIZE);
+
+ // MM = M ^ H(T || N)
+ // or UU = U ^ H(T || V)
+ err = hctr2_hash_tweak(req);
+ if (err)
+ return err;
+ err = hctr2_hash_message(req, rctx->bulk_part_src, digest);
+ if (err)
+ return err;
+ crypto_xor(digest, rctx->first_block, BLOCKCIPHER_BLOCK_SIZE);
+
+ // UU = E(MM)
+ // or MM = D(UU)
+ if (enc)
+ crypto_cipher_encrypt_one(tctx->blockcipher, rctx->first_block,
+ digest);
+ else
+ crypto_cipher_decrypt_one(tctx->blockcipher, rctx->first_block,
+ digest);
+
+ // S = MM ^ UU ^ L
+ crypto_xor(digest, rctx->first_block, BLOCKCIPHER_BLOCK_SIZE);
+ crypto_xor_cpy(rctx->xctr_iv, digest, tctx->L, BLOCKCIPHER_BLOCK_SIZE);
+
+ // V = XCTR(S, N)
+ // or N = XCTR(S, V)
+ skcipher_request_set_tfm(&rctx->u.xctr_req, tctx->xctr);
+ skcipher_request_set_crypt(&rctx->u.xctr_req, rctx->bulk_part_src,
+ rctx->bulk_part_dst, bulk_len,
+ rctx->xctr_iv);
+ skcipher_request_set_callback(&rctx->u.xctr_req,
+ req->base.flags,
+ hctr2_xctr_done, req);
+ return crypto_skcipher_encrypt(&rctx->u.xctr_req) ?:
+ hctr2_finish(req);
+}
+
+static int hctr2_encrypt(struct skcipher_request *req)
+{
+ return hctr2_crypt(req, true);
+}
+
+static int hctr2_decrypt(struct skcipher_request *req)
+{
+ return hctr2_crypt(req, false);
+}
+
+static int hctr2_init_tfm(struct crypto_skcipher *tfm)
+{
+ struct skcipher_instance *inst = skcipher_alg_instance(tfm);
+ struct hctr2_instance_ctx *ictx = skcipher_instance_ctx(inst);
+ struct hctr2_tfm_ctx *tctx = crypto_skcipher_ctx(tfm);
+ struct crypto_skcipher *xctr;
+ struct crypto_cipher *blockcipher;
+ struct crypto_shash *polyval;
+ unsigned int subreq_size;
+ int err;
+
+ xctr = crypto_spawn_skcipher(&ictx->xctr_spawn);
+ if (IS_ERR(xctr))
+ return PTR_ERR(xctr);
+
+ blockcipher = crypto_spawn_cipher(&ictx->blockcipher_spawn);
+ if (IS_ERR(blockcipher)) {
+ err = PTR_ERR(blockcipher);
+ goto err_free_xctr;
+ }
+
+ polyval = crypto_spawn_shash(&ictx->polyval_spawn);
+ if (IS_ERR(polyval)) {
+ err = PTR_ERR(polyval);
+ goto err_free_blockcipher;
+ }
+
+ tctx->xctr = xctr;
+ tctx->blockcipher = blockcipher;
+ tctx->polyval = polyval;
+
+ BUILD_BUG_ON(offsetofend(struct hctr2_request_ctx, u) !=
+ sizeof(struct hctr2_request_ctx));
+ subreq_size = max(sizeof_field(struct hctr2_request_ctx, u.hash_desc) +
+ crypto_shash_descsize(polyval),
+ sizeof_field(struct hctr2_request_ctx, u.xctr_req) +
+ crypto_skcipher_reqsize(xctr));
+
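+ /*
+ * Request context layout: fixed fields, then the sub-request union,
+ * then the exported hashed_tweak state at hashed_tweak_offset (see
+ * struct hctr2_request_ctx).
+ */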
+ tctx->hashed_tweak_offset = offsetof(struct hctr2_request_ctx, u) +
+ subreq_size;
+ crypto_skcipher_set_reqsize(tfm, tctx->hashed_tweak_offset +
+ crypto_shash_statesize(polyval));
+ return 0;
+
+err_free_blockcipher:
+ crypto_free_cipher(blockcipher);
+err_free_xctr:
+ crypto_free_skcipher(xctr);
+ return err;
+}
+
+static void hctr2_exit_tfm(struct crypto_skcipher *tfm)
+{
+ struct hctr2_tfm_ctx *tctx = crypto_skcipher_ctx(tfm);
+
+ crypto_free_cipher(tctx->blockcipher);
+ crypto_free_skcipher(tctx->xctr);
+ crypto_free_shash(tctx->polyval);
+}
+
+static void hctr2_free_instance(struct skcipher_instance *inst)
+{
+ struct hctr2_instance_ctx *ictx = skcipher_instance_ctx(inst);
+
+ crypto_drop_cipher(&ictx->blockcipher_spawn);
+ crypto_drop_skcipher(&ictx->xctr_spawn);
+ crypto_drop_shash(&ictx->polyval_spawn);
+ kfree(inst);
+}
+
+static int hctr2_create_common(struct crypto_template *tmpl,
+ struct rtattr **tb,
+ const char *xctr_name,
+ const char *polyval_name)
+{
+ u32 mask;
+ struct skcipher_instance *inst;
+ struct hctr2_instance_ctx *ictx;
+ struct skcipher_alg *xctr_alg;
+ struct crypto_alg *blockcipher_alg;
+ struct shash_alg *polyval_alg;
+ char blockcipher_name[CRYPTO_MAX_ALG_NAME];
+ int len;
+ int err;
+
+ err = crypto_check_attr_type(tb, CRYPTO_ALG_TYPE_SKCIPHER, &mask);
+ if (err)
+ return err;
+
+ inst = kzalloc(sizeof(*inst) + sizeof(*ictx), GFP_KERNEL);
+ if (!inst)
+ return -ENOMEM;
+ ictx = skcipher_instance_ctx(inst);
+
+ /* Stream cipher, xctr(block_cipher) */
+ err = crypto_grab_skcipher(&ictx->xctr_spawn,
+ skcipher_crypto_instance(inst),
+ xctr_name, 0, mask);
+ if (err)
+ goto err_free_inst;
+ xctr_alg = crypto_spawn_skcipher_alg(&ictx->xctr_spawn);
+
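+ /* Derive the block cipher name, e.g. "xctr(aes)" -> "aes". */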
+ err = -EINVAL;
+ if (strncmp(xctr_alg->base.cra_name, "xctr(", 5))
+ goto err_free_inst;
+ len = strscpy(blockcipher_name, xctr_alg->base.cra_name + 5,
+ sizeof(blockcipher_name));
+ if (len < 1)
+ goto err_free_inst;
+ if (blockcipher_name[len - 1] != ')')
+ goto err_free_inst;
+ blockcipher_name[len - 1] = 0;
+
+ /* Block cipher, e.g. "aes" */
+ err = crypto_grab_cipher(&ictx->blockcipher_spawn,
+ skcipher_crypto_instance(inst),
+ blockcipher_name, 0, mask);
+ if (err)
+ goto err_free_inst;
+ blockcipher_alg = crypto_spawn_cipher_alg(&ictx->blockcipher_spawn);
+
+ /* Require blocksize of 16 bytes */
+ err = -EINVAL;
+ if (blockcipher_alg->cra_blocksize != BLOCKCIPHER_BLOCK_SIZE)
+ goto err_free_inst;
+
+ /* Polyval ε-∆U hash function */
+ err = crypto_grab_shash(&ictx->polyval_spawn,
+ skcipher_crypto_instance(inst),
+ polyval_name, 0, mask);
+ if (err)
+ goto err_free_inst;
+ polyval_alg = crypto_spawn_shash_alg(&ictx->polyval_spawn);
+
+ /* Ensure Polyval is being used */
+ err = -EINVAL;
+ if (strcmp(polyval_alg->base.cra_name, "polyval") != 0)
+ goto err_free_inst;
+
+ /* Instance fields */
+
+ err = -ENAMETOOLONG;
+ if (snprintf(inst->alg.base.cra_name, CRYPTO_MAX_ALG_NAME, "hctr2(%s)",
+ blockcipher_alg->cra_name) >= CRYPTO_MAX_ALG_NAME)
+ goto err_free_inst;
+ if (snprintf(inst->alg.base.cra_driver_name, CRYPTO_MAX_ALG_NAME,
+ "hctr2_base(%s,%s)",
+ xctr_alg->base.cra_driver_name,
+ polyval_alg->base.cra_driver_name) >= CRYPTO_MAX_ALG_NAME)
+ goto err_free_inst;
+
+ inst->alg.base.cra_blocksize = BLOCKCIPHER_BLOCK_SIZE;
+ inst->alg.base.cra_ctxsize = sizeof(struct hctr2_tfm_ctx) +
+ polyval_alg->statesize * 2;
+ inst->alg.base.cra_alignmask = xctr_alg->base.cra_alignmask |
+ polyval_alg->base.cra_alignmask;
+ /*
+ * The hash function is called twice, so it is weighted higher than the
+ * xctr and blockcipher.
+ */
+ inst->alg.base.cra_priority = (2 * xctr_alg->base.cra_priority +
+ 4 * polyval_alg->base.cra_priority +
+ blockcipher_alg->cra_priority) / 7;
+
+ inst->alg.setkey = hctr2_setkey;
+ inst->alg.encrypt = hctr2_encrypt;
+ inst->alg.decrypt = hctr2_decrypt;
+ inst->alg.init = hctr2_init_tfm;
+ inst->alg.exit = hctr2_exit_tfm;
+ inst->alg.min_keysize = crypto_skcipher_alg_min_keysize(xctr_alg);
+ inst->alg.max_keysize = crypto_skcipher_alg_max_keysize(xctr_alg);
+ inst->alg.ivsize = TWEAK_SIZE;
+
+ inst->free = hctr2_free_instance;
+
+ err = skcipher_register_instance(tmpl, inst);
+ if (err) {
+err_free_inst:
+ hctr2_free_instance(inst);
+ }
+ return err;
+}
+
+static int hctr2_create_base(struct crypto_template *tmpl, struct rtattr **tb)
+{
+ const char *xctr_name;
+ const char *polyval_name;
+
+ xctr_name = crypto_attr_alg_name(tb[1]);
+ if (IS_ERR(xctr_name))
+ return PTR_ERR(xctr_name);
+
+ polyval_name = crypto_attr_alg_name(tb[2]);
+ if (IS_ERR(polyval_name))
+ return PTR_ERR(polyval_name);
+
+ return hctr2_create_common(tmpl, tb, xctr_name, polyval_name);
+}
+
+static int hctr2_create(struct crypto_template *tmpl, struct rtattr **tb)
+{
+ const char *blockcipher_name;
+ char xctr_name[CRYPTO_MAX_ALG_NAME];
+
+ blockcipher_name = crypto_attr_alg_name(tb[1]);
+ if (IS_ERR(blockcipher_name))
+ return PTR_ERR(blockcipher_name);
+
+ if (snprintf(xctr_name, CRYPTO_MAX_ALG_NAME, "xctr(%s)",
+ blockcipher_name) >= CRYPTO_MAX_ALG_NAME)
+ return -ENAMETOOLONG;
+
+ return hctr2_create_common(tmpl, tb, xctr_name, "polyval");
+}
+
+static struct crypto_template hctr2_tmpls[] = {
+ {
+ /* hctr2_base(xctr_name, polyval_name) */
+ .name = "hctr2_base",
+ .create = hctr2_create_base,
+ .module = THIS_MODULE,
+ }, {
+ /* hctr2(blockcipher_name) */
+ .name = "hctr2",
+ .create = hctr2_create,
+ .module = THIS_MODULE,
+ }
+};
+
+static int __init hctr2_module_init(void)
+{
+ return crypto_register_templates(hctr2_tmpls, ARRAY_SIZE(hctr2_tmpls));
+}
+
+static void __exit hctr2_module_exit(void)
+{
+ return crypto_unregister_templates(hctr2_tmpls,
+ ARRAY_SIZE(hctr2_tmpls));
+}
+
+subsys_initcall(hctr2_module_init);
+module_exit(hctr2_module_exit);
+
+MODULE_DESCRIPTION("HCTR2 length-preserving encryption mode");
+MODULE_LICENSE("GPL v2");
+MODULE_ALIAS_CRYPTO("hctr2");
+MODULE_IMPORT_NS(CRYPTO_INTERNAL);
diff --git a/crypto/tcrypt.c b/crypto/tcrypt.c
index dd9cf216029b..336598da8eac 100644
--- a/crypto/tcrypt.c
+++ b/crypto/tcrypt.c
@@ -2191,6 +2191,11 @@ static int do_test(const char *alg, u32 type, u32 mask, int m, u32 num_mb)
16, 16, aead_speed_template_19, num_mb);
break;

+ case 226:
+ test_cipher_speed("hctr2(aes)", ENCRYPT, sec, NULL,
+ 0, speed_template_32);
+ break;
+
case 300:
if (alg) {
test_hash_speed(alg, sec, generic_hash_speed_template);
diff --git a/crypto/testmgr.c b/crypto/testmgr.c
index d807b200edf6..3244b7e5aa7e 100644
--- a/crypto/testmgr.c
+++ b/crypto/testmgr.c
@@ -5030,6 +5030,14 @@ static const struct alg_test_desc alg_test_descs[] = {
.suite = {
.hash = __VECS(ghash_tv_template)
}
+ }, {
+ .alg = "hctr2(aes)",
+ .generic_driver =
+ "hctr2_base(xctr(aes-generic),polyval-generic)",
+ .test = alg_test_skcipher,
+ .suite = {
+ .cipher = __VECS(aes_hctr2_tv_template)
+ }
}, {
.alg = "hmac(md5)",
.test = alg_test_hash,
diff --git a/crypto/testmgr.h b/crypto/testmgr.h
index c581e5405916..13c70dde2f89 100644
--- a/crypto/testmgr.h
+++ b/crypto/testmgr.h
@@ -35100,4 +35100,676 @@ static const struct hash_testvec polyval_tv_template[] = {

};

+/*
+ * Test vectors generated using https://github.com/google/hctr2
+ */
+static const struct cipher_testvec aes_hctr2_tv_template[] = {
+ {
+ .key = "\xe1\x15\x66\x3c\x8d\xc6\x3a\xff"
+ "\xef\x41\xd7\x47\xa2\xcc\x8a\xba",
+ .iv = "\xc3\xbe\x2a\xcb\xb5\x39\x86\xf1"
+ "\x91\xad\x6c\xf4\xde\x74\x45\x63"
+ "\x5c\x7a\xd5\xcc\x8b\x76\xef\x0e"
+ "\xcf\x2c\x60\x69\x37\xfd\x07\x96",
+ .ptext = "\x65\x75\xae\xd3\xe2\xbc\x43\x5c"
+ "\xb3\x1a\xd8\x05\xc3\xd0\x56\x29",
+ .ctext = "\x11\x91\xea\x74\x58\xcc\xd5\xa2"
+ "\xd0\x55\x9e\x3d\xfe\x7f\xc8\xfe",
+ .klen = 16,
+ .len = 16,
+ },
+ {
+ .key = "\xe7\xd1\x77\x48\x76\x0b\xcd\x34"
+ "\x2a\x2d\xe7\x74\xca\x11\x9c\xae",
+ .iv = "\x71\x1c\x49\x62\xd9\x5b\x50\x5e"
+ "\x68\x87\xbc\xf6\x89\xff\xed\x30"
+ "\xe4\xe5\xbd\xb6\x10\x4f\x9f\x66"
+ "\x28\x06\x5a\xf4\x27\x35\xcd\xe5",
+ .ptext = "\x87\x03\x8f\x06\xa8\x61\x54\xda"
+ "\x01\x45\xd4\x01\xef\x4a\x22\xcf"
+ "\x78\x15\x9f\xbd\x64\xbd\x2c\xb9"
+ "\x40\x1d\x72\xae\x53\x63\xa5",
+ .ctext = "\x4e\xa1\x05\x27\xb8\x45\xe4\xa1"
+ "\xbb\x30\xb4\xa6\x12\x74\x63\xd6"
+ "\x17\xc9\xcc\x2f\x18\x64\xe0\x06"
+ "\x0a\xa0\xff\x72\x10\x7b\x22",
+ .klen = 16,
+ .len = 31,
+ },
+ {
+ .key = "\x59\x65\x3b\x1d\x43\x5e\xc0\xae"
+ "\xb8\x9d\x9b\xdd\x22\x03\xbf\xca",
+ .iv = "\xec\x95\xfa\x5a\xcf\x5e\xd2\x93"
+ "\xa3\xb5\xe5\xbe\xf3\x01\x7b\x01"
+ "\xd1\xca\x6c\x06\x82\xf0\xbd\x67"
+ "\xd9\x6c\xa4\xdc\xb4\x38\x0f\x74",
+ .ptext = "\x45\xdf\x75\x87\xbc\x72\xce\x55"
+ "\xc9\xfa\xcb\xfc\x9f\x40\x82\x2b"
+ "\xc6\x4f\x4f\x5b\x8b\x3b\x6d\x67"
+ "\xa6\x93\x62\x89\x8c\x19\xf4\xe3"
+ "\x08\x92\x9c\xc9\x47\x2c\x6e\xd0"
+ "\xa3\x02\x2b\xdb\x2c\xf2\x8d\x46"
+ "\xcd\xb0\x9d\x26\x63\x4c\x40\x6b"
+ "\x79\x43\xe5\xce\x42\xa8\xec\x3b"
+ "\x5b\xd0\xea\xa4\xe6\xdb\x66\x55"
+ "\x7a\x76\xec\xab\x7d\x2a\x2b\xbd"
+ "\xa9\xab\x22\x64\x1a\xa1\xae\x84"
+ "\x86\x79\x67\xe9\xb2\x50\xbe\x12"
+ "\x2f\xb2\x14\xf0\xdb\x71\xd8\xa7"
+ "\x41\x8a\x88\xa0\x6a\x6e\x9d\x2a"
+ "\xfa\x11\x37\x40\x32\x09\x4c\x47"
+ "\x41\x07\x31\x85\x3d\xa8\xf7\x64",
+ .ctext = "\x2d\x4b\x9f\x93\xca\x5a\x48\x26"
+ "\x01\xcc\x54\xe4\x31\x50\x12\xf0"
+ "\x49\xff\x59\x42\x68\xbd\x87\x8f"
+ "\x9e\x62\x96\xcd\xb9\x24\x57\xa4"
+ "\x0b\x7b\xf5\x2e\x0e\xa8\x65\x07"
+ "\xab\x05\xd5\xca\xe7\x9c\x6c\x34"
+ "\x5d\x42\x34\xa4\x62\xe9\x75\x48"
+ "\x3d\x9e\x8f\xfa\x42\xe9\x75\x08"
+ "\x4e\x54\x91\x2b\xbd\x11\x0f\x8e"
+ "\xf0\x82\xf5\x24\xf1\xc4\xfc\xae"
+ "\x42\x54\x7f\xce\x15\xa8\xb2\x33"
+ "\xc0\x86\xb6\x2b\xe8\x44\xce\x1f"
+ "\x68\x57\x66\x94\x6e\xad\xeb\xf3"
+ "\x30\xf8\x11\xbd\x60\x00\xc6\xd5"
+ "\x4c\x81\xf1\x20\x2b\x4a\x5b\x99"
+ "\x79\x3b\xc9\x5c\x74\x23\xe6\x5d",
+ .klen = 16,
+ .len = 128,
+ },
+ {
+ .key = "\x3e\x08\x5d\x64\x6c\x98\xec\xec"
+ "\x70\x0e\x0d\xa1\x41\x20\x99\x82",
+ .iv = "\x11\xb7\x77\x91\x0d\x99\xd9\x8d"
+ "\x35\x3a\xf7\x14\x6b\x09\x37\xe5"
+ "\xad\x51\xf6\xc3\x96\x4b\x64\x56"
+ "\xa8\xbd\x81\xcc\xbe\x94\xaf\xe4",
+ .ptext = "\xff\x8d\xb9\xc0\xe3\x69\xb3\xb2"
+ "\x8b\x11\x26\xb3\x11\xec\xfb\xb9"
+ "\x9c\xc1\x71\xd6\xe3\x26\x0e\xe0"
+ "\x68\x40\x60\xb9\x3a\x63\x56\x8a"
+ "\x9e\xc1\xf0\x10\xb1\x64\x32\x70"
+ "\xf8\xcd\xc6\xc4\x49\x4c\xe1\xce"
+ "\xf3\xe1\x03\xf8\x35\xae\xe0\x5e"
+ "\xef\x5f\xbc\x41\x75\x26\x13\xcc"
+ "\x37\x85\xdf\xc0\x5d\xa6\x47\x98"
+ "\xf1\x97\x52\x58\x04\xe6\xb5\x01"
+ "\xc0\xb8\x17\x6d\x74\xbd\x9a\xdf"
+ "\xa4\x37\x94\x86\xb0\x13\x83\x28"
+ "\xc9\xa2\x07\x3f\xb5\xb2\x72\x40"
+ "\x0e\x60\xdf\x57\x07\xb7\x2c\x66"
+ "\x10\x3f\x8d\xdd\x30\x0a\x47\xd5"
+ "\xe8\x9d\xfb\xa1\xaf\x53\xd7\x05"
+ "\xc7\xd2\xba\xe7\x2c\xa0\xbf\xb8"
+ "\xd1\x93\xe7\x41\x82\xa3\x41\x3a"
+ "\xaf\x12\xd6\xf8\x34\xda\x92\x46"
+ "\xad\xa2\x2f\xf6\x7e\x46\x96\xd8"
+ "\x03\xf3\x49\x64\xde\xd8\x06\x8b"
+ "\xa0\xbc\x63\x35\x38\xb6\x6b\xda"
+ "\x5b\x50\x3f\x13\xa5\x84\x1b\x1b"
+ "\x66\x89\x95\xb7\xc2\x16\x3c\xe9"
+ "\x24\xb0\x8c\x6f\x49\xef\xf7\x28"
+ "\x6a\x24\xfd\xbe\x25\xe2\xb4\x90"
+ "\x77\x44\x08\xb8\xda\xd2\xde\x2c"
+ "\xa0\x57\x45\x57\x29\x47\x6b\x89"
+ "\x4a\xf6\xa7\x2a\xc3\x9e\x7b\xc8"
+ "\xfd\x9f\x89\xab\xee\x6d\xa3\xb4"
+ "\x23\x90\x7a\xe9\x89\xa0\xc7\xb3"
+ "\x17\x41\x87\x91\xfc\x97\x42",
+ .ctext = "\xfc\x9b\x96\x66\xc4\x82\x2a\x4a"
+ "\xb1\x24\xba\xc7\x78\x5f\x79\xc1"
+ "\x57\x2e\x47\x29\x4d\x7b\xd2\x9a"
+ "\xbd\xc6\xc1\x26\x7b\x8e\x3f\x5d"
+ "\xd4\xb4\x9f\x6a\x02\x24\x4a\xad"
+ "\x0c\x00\x1b\xdf\x92\xc5\x8a\xe1"
+ "\x77\x79\xcc\xd5\x20\xbf\x83\xf4"
+ "\x4b\xad\x11\xbf\xdb\x47\x65\x70"
+ "\x43\xf3\x65\xdf\xb7\xdc\xb2\xb9"
+ "\xaa\x3f\xb3\xdf\x79\x69\x0d\xa0"
+ "\x86\x1c\xba\x48\x0b\x01\xc1\x88"
+ "\xdf\x03\xb1\x06\x3c\x1d\x56\xa1"
+ "\x8e\x98\xc1\xa6\x95\xa2\x5b\x72"
+ "\x76\x59\xd2\x26\x25\xcd\xef\x7c"
+ "\xc9\x60\xea\x43\xd1\x12\x8a\x8a"
+ "\x63\x12\x78\xcb\x2f\x88\x1e\x88"
+ "\x78\x59\xde\xba\x4d\x2c\x78\x61"
+ "\x75\x37\x54\xfd\x80\xc7\x5e\x98"
+ "\xcf\x14\x62\x8e\xfb\x72\xee\x4d"
+ "\x9f\xaf\x8b\x09\xe5\x21\x0a\x91"
+ "\x8f\x88\x87\xd5\xb1\x84\xab\x18"
+ "\x08\x57\xed\x72\x35\xa6\x0e\xc6"
+ "\xff\xcb\xfe\x2c\x48\x39\x14\x44"
+ "\xba\x59\x32\x3a\x2d\xc4\x5f\xcb"
+ "\xbe\x68\x8e\x7b\xee\x21\xa4\x32"
+ "\x11\xa0\x99\xfd\x90\xde\x59\x43"
+ "\xeb\xed\xd5\x87\x68\x46\xc6\xde"
+ "\x0b\x07\x17\x59\x6a\xab\xca\x15"
+ "\x65\x02\x01\xb6\x71\x8c\x3b\xaa"
+ "\x18\x3b\x30\xae\x38\x5b\x2c\x74"
+ "\xd4\xee\x4a\xfc\xf7\x1b\x09\xd4"
+ "\xda\x8b\x1d\x5d\x6f\x21\x6c",
+ .klen = 16,
+ .len = 255,
+ },
+ {
+ .key = "\x24\xf6\xe1\x62\xe5\xaf\x99\xda"
+ "\x84\xec\x41\xb0\xa3\x0b\xd5\xa8"
+ "\xa0\x3e\x7b\xa6\xdd\x6c\x8f\xa8",
+ .iv = "\x7f\x80\x24\x62\x32\xdd\xab\x66"
+ "\xf2\x87\x29\x24\xec\xd2\x4b\x9f"
+ "\x0c\x33\x52\xd9\xe0\xcc\x6e\xe4"
+ "\x90\x85\x43\x97\xc4\x62\x14\x33",
+ .ptext = "\xef\x58\xe7\x7f\xa9\xd9\xb8\xd7"
+ "\xa2\x91\x97\x07\x27\x9e\xba\xe8"
+ "\xaa",
+ .ctext = "\xd7\xc3\x81\x91\xf2\x40\x17\x73"
+ "\x3e\x3b\x1c\x2a\x8e\x11\x9c\x17"
+ "\xf1",
+ .klen = 24,
+ .len = 17,
+ },
+ {
+ .key = "\xbf\xaf\xd7\x67\x8c\x47\xcf\x21"
+ "\x8a\xa5\xdd\x32\x25\x47\xbe\x4f"
+ "\xf1\x3a\x0b\xa6\xaa\x2d\xcf\x09",
+ .iv = "\xd9\xe8\xf0\x92\x4e\xfc\x1d\xf2"
+ "\x81\x37\x7c\x8f\xf1\x59\x09\x20"
+ "\xf4\x46\x51\x86\x4f\x54\x8b\x32"
+ "\x58\xd1\x99\x8b\x8c\x03\xeb\x5d",
+ .ptext = "\xcd\x64\x90\xf9\x7c\xe5\x0e\x5a"
+ "\x75\xe7\x8e\x39\x86\xec\x20\x43"
+ "\x8a\x49\x09\x15\x47\xf4\x3c\x89"
+ "\x21\xeb\xcf\x4e\xcf\x91\xb5\x40"
+ "\xcd\xe5\x4d\x5c\x6f\xf2\xd2\x80"
+ "\xfa\xab\xb3\x76\x9f\x7f\x84\x0a",
+ .ctext = "\x44\x98\x64\x15\xb7\x0b\x80\xa3"
+ "\xb9\xca\x23\xff\x3b\x0b\x68\x74"
+ "\xbb\x3e\x20\x19\x9f\x28\x71\x2a"
+ "\x48\x3c\x7c\xe2\xef\xb5\x10\xac"
+ "\x82\x9f\xcd\x08\x8f\x6b\x16\x6f"
+ "\xc3\xbb\x07\xfb\x3c\xb0\x1b\x27",
+ .klen = 24,
+ .len = 48,
+ },
+ {
+ .key = "\xb8\x35\xa2\x5f\x86\xbb\x82\x99"
+ "\x27\xeb\x01\x3f\x92\xaf\x80\x24"
+ "\x4c\x66\xa2\x89\xff\x2e\xa2\x25",
+ .iv = "\x0a\x1d\x96\xd3\xe0\xe8\x0c\x9b"
+ "\x9d\x6f\x21\x97\xc2\x17\xdb\x39"
+ "\x3f\xd8\x64\x48\x80\x04\xee\x43"
+ "\x02\xce\x88\xe2\x81\x81\x5f\x81",
+ .ptext = "\xb8\xf9\x16\x8b\x25\x68\xd0\x9c"
+ "\xd2\x28\xac\xa8\x79\xc2\x30\xc1"
+ "\x31\xde\x1c\x37\x1b\xa2\xb5\xe6"
+ "\xf0\xd0\xf8\x9c\x7f\xc6\x46\x07"
+ "\x5c\xc3\x06\xe4\xf0\x02\xec\xf8"
+ "\x59\x7c\xc2\x5d\xf8\x0c\x21\xae"
+ "\x9e\x82\xb1\x1a\x5f\x78\x44\x15"
+ "\x00\xa7\x2e\x52\xc5\x98\x98\x35"
+ "\x03\xae\xd0\x8e\x07\x57\xe2\x5a"
+ "\x17\xbf\x52\x40\x54\x5b\x74\xe5"
+ "\x2d\x35\xaf\x9e\x37\xf7\x7e\x4a"
+ "\x8c\x9e\xa1\xdc\x40\xb4\x5b\x36"
+ "\xdc\x3a\x68\xe6\xb7\x35\x0b\x8a"
+ "\x90\xec\x74\x8f\x09\x9a\x7f\x02"
+ "\x4d\x03\x46\x35\x62\xb1\xbd\x08"
+ "\x3f\x54\x2a\x10\x0b\xdc\x69\xaf"
+ "\x25\x3a\x0c\x5f\xe0\x51\xe7\x11"
+ "\xb7\x00\xab\xbb\x9a\xb0\xdc\x4d"
+ "\xc3\x7d\x1a\x6e\xd1\x09\x52\xbd"
+ "\x6b\x43\x55\x22\x3a\x78\x14\x7d"
+ "\x79\xfd\x8d\xfc\x9b\x1d\x0f\xa2"
+ "\xc7\xb9\xf8\x87\xd5\x96\x50\x61"
+ "\xa7\x5e\x1e\x57\x97\xe0\xad\x2f"
+ "\x93\xe6\xe8\x83\xec\x85\x26\x5e"
+ "\xd9\x2a\x15\xe0\xe9\x09\x25\xa1"
+ "\x77\x2b\x88\xdc\xa4\xa5\x48\xb6"
+ "\xf7\xcc\xa6\xa9\xba\xf3\x42\x5c"
+ "\x70\x9d\xe9\x29\xc1\xf1\x33\xdd"
+ "\x56\x48\x17\x86\x14\x51\x5c\x10"
+ "\xab\xfd\xd3\x26\x8c\x21\xf5\x93"
+ "\x1b\xeb\x47\x97\x73\xbb\x88\x10"
+ "\xf3\xfe\xf5\xde\xf3\x2e\x05\x46"
+ "\x1c\x0d\xa3\x10\x48\x9c\x71\x16"
+ "\x78\x33\x4d\x0a\x74\x3b\xe9\x34"
+ "\x0b\xa7\x0e\x9e\x61\xe9\xe9\xfd"
+ "\x85\xa0\xcb\x19\xfd\x7c\x33\xe3"
+ "\x0e\xce\xc2\x6f\x9d\xa4\x2d\x77"
+ "\xfd\xad\xee\x5e\x08\x3e\xd7\xf5"
+ "\xfb\xc3\xd7\x93\x96\x08\x96\xca"
+ "\x58\x81\x16\x9b\x98\x0a\xe2\xef"
+ "\x7f\xda\x40\xe4\x1f\x46\x9e\x67"
+ "\x2b\x84\xcb\x42\xc4\xd6\x6a\xcf"
+ "\x2d\xb2\x33\xc0\x56\xb3\x35\x6f"
+ "\x29\x36\x8f\x6a\x5b\xec\xd5\x4f"
+ "\xa0\x70\xff\xb6\x5b\xde\x6a\x93"
+ "\x20\x3c\xe2\x76\x7a\xef\x3c\x79"
+ "\x31\x65\xce\x3a\x0e\xd0\xbe\xa8"
+ "\x21\x95\xc7\x2b\x62\x8e\x67\xdd"
+ "\x20\x79\xe4\xe5\x01\x15\xc0\xec"
+ "\x0f\xd9\x23\xc8\xca\xdf\xd4\x7d"
+ "\x1d\xf8\x64\x4f\x56\xb1\x83\xa7"
+ "\x43\xbe\xfc\xcf\xc2\x8c\x33\xda"
+ "\x36\xd0\x52\xef\x9e\x9e\x88\xf4"
+ "\xa8\x21\x0f\xaa\xee\x8d\xa0\x24"
+ "\x4d\xcb\xb1\x72\x07\xf0\xc2\x06"
+ "\x60\x65\x85\x84\x2c\x60\xcf\x61"
+ "\xe7\x56\x43\x5b\x2b\x50\x74\xfa"
+ "\xdb\x4e\xea\x88\xd4\xb3\x83\x8f"
+ "\x6f\x97\x4b\x57\x7a\x64\x64\xae"
+ "\x0a\x37\x66\xc5\x03\xad\xb5\xf9"
+ "\x08\xb0\x3a\x74\xde\x97\x51\xff"
+ "\x48\x4f\x5c\xa4\xf8\x7a\xb4\x05"
+ "\x27\x70\x52\x86\x1b\x78\xfc\x18"
+ "\x06\x27\xa9\x62\xf7\xda\xd2\x8e",
+ .ctext = "\x3b\xe1\xdb\xb3\xc5\x9a\xde\x69"
+ "\x58\x05\xcc\xeb\x02\x51\x78\x4a"
+ "\xac\x28\xe9\xed\xd1\xc9\x15\x7d"
+ "\x33\x7d\xc1\x47\x12\x41\x11\xf8"
+ "\x4a\x2c\xb7\xa3\x41\xbe\x59\xf7"
+ "\x22\xdb\x2c\xda\x9c\x00\x61\x9b"
+ "\x73\xb3\x0b\x84\x2b\xc1\xf3\x80"
+ "\x84\xeb\x19\x60\x80\x09\xe1\xcd"
+ "\x16\x3a\x20\x23\xc4\x82\x4f\xba"
+ "\x3b\x8e\x55\xd7\xa9\x0b\x75\xd0"
+ "\xda\xce\xd2\xee\x7e\x4b\x7f\x65"
+ "\x4d\x28\xc5\xd3\x15\x2c\x40\x96"
+ "\x52\xd4\x18\x61\x2b\xe7\x83\xec"
+ "\x89\x62\x9c\x4c\x50\xe6\xe2\xbb"
+ "\x25\xa1\x0f\xa7\xb0\xb4\xb2\xde"
+ "\x54\x20\xae\xa3\x56\xa5\x26\x4c"
+ "\xd5\xcc\xe5\xcb\x28\x44\xb1\xef"
+ "\x67\x2e\x93\x6d\x00\x88\x83\x9a"
+ "\xf2\x1c\x48\x38\xec\x1a\x24\x90"
+ "\x73\x0a\xdb\xe8\xce\x95\x7a\x2c"
+ "\x8c\xe9\xb7\x07\x1d\xb3\xa3\x20"
+ "\xbe\xad\x61\x84\xac\xde\x76\xb5"
+ "\xa6\x28\x29\x47\x63\xc4\xfc\x13"
+ "\x3f\x71\xfb\x58\x37\x34\x82\xed"
+ "\x9e\x05\x19\x1f\xc1\x67\xc1\xab"
+ "\xf5\xfd\x7c\xea\xfa\xa4\xf8\x0a"
+ "\xac\x4c\x92\xdf\x65\x73\xd7\xdb"
+ "\xed\x2c\xe0\x84\x5f\x57\x8c\x76"
+ "\x3e\x05\xc0\xc3\x68\x96\x95\x0b"
+ "\x88\x97\xfe\x2e\x99\xd5\xc2\xb9"
+ "\x53\x9f\xf3\x32\x10\x1f\x1f\x5d"
+ "\xdf\x21\x95\x70\x91\xe8\xa1\x3e"
+ "\x19\x3e\xb6\x0b\xa8\xdb\xf8\xd4"
+ "\x54\x27\xb8\xab\x5d\x78\x0c\xe6"
+ "\xb7\x08\xee\xa4\xb6\x6b\xeb\x5a"
+ "\x89\x69\x2b\xbd\xd4\x21\x5b\xbf"
+ "\x79\xbb\x0f\xff\xdb\x23\x9a\xeb"
+ "\x8d\xf2\xc4\x39\xb4\x90\x77\x6f"
+ "\x68\xe2\xb8\xf3\xf1\x65\x4f\xd5"
+ "\x24\x80\x06\xaf\x7c\x8d\x15\x0c"
+ "\xfd\x56\xe5\xe3\x01\xa5\xf7\x1c"
+ "\x31\xd6\xa2\x01\x1e\x59\xf9\xa9"
+ "\x42\xd5\xc2\x34\xda\x25\xde\xc6"
+ "\x5d\x38\xef\xd1\x4c\xc1\xd9\x1b"
+ "\x98\xfd\xcd\x57\x6f\xfd\x46\x91"
+ "\x90\x3d\x52\x2b\x2c\x7d\xcf\x71"
+ "\xcf\xd1\x77\x23\x71\x36\xb1\xce"
+ "\xc7\x5d\xf0\x5b\x44\x3d\x43\x71"
+ "\xac\xb8\xa0\x6a\xea\x89\x5c\xff"
+ "\x81\x73\xd4\x83\xd1\xc9\xe9\xe2"
+ "\xa8\xa6\x0f\x36\xe6\xaa\x57\xd4"
+ "\x27\xd2\xc9\xda\x94\x02\x1f\xfb"
+ "\xe1\xa1\x07\xbe\xe1\x1b\x15\x94"
+ "\x1e\xac\x2f\x57\xbb\x41\x22\xaf"
+ "\x60\x5e\xcc\x66\xcb\x16\x62\xab"
+ "\xb8\x7c\x99\xf4\x84\x93\x0c\xc2"
+ "\xa2\x49\xe4\xfd\x17\x55\xe1\xa6"
+ "\x8d\x5b\xc6\x1b\xc8\xac\xec\x11"
+ "\x33\xcf\xb0\xe8\xc7\x28\x4f\xb2"
+ "\x5c\xa6\xe2\x71\xab\x80\x0a\xa7"
+ "\x5c\x59\x50\x9f\x7a\x32\xb7\xe5"
+ "\x24\x9a\x8e\x25\x21\x2e\xb7\x18"
+ "\xd0\xf2\xe7\x27\x6f\xda\xc1\x00"
+ "\xd9\xa6\x03\x59\xac\x4b\xcb\xba",
+ .klen = 24,
+ .len = 512,
+ },
+ {
+ .key = "\x9e\xeb\xb2\x49\x3c\x1c\xf5\xf4"
+ "\x6a\x99\xc2\xc4\xdf\xb1\xf4\xdd"
+ "\x75\x20\x57\xea\x2c\x4f\xcd\xb2"
+ "\xa5\x3d\x7b\x49\x1e\xab\xfd\x0f",
+ .iv = "\xdf\x63\xd4\xab\xd2\x49\xf3\xd8"
+ "\x33\x81\x37\x60\x7d\xfa\x73\x08"
+ "\xd8\x49\x6d\x80\xe8\x2f\x62\x54"
+ "\xeb\x0e\xa9\x39\x5b\x45\x7f\x8a",
+ .ptext = "\x67\xc9\xf2\x30\x84\x41\x8e\x43"
+ "\xfb\xf3\xb3\x3e\x79\x36\x7f\xe8",
+ .ctext = "\x27\x38\x78\x47\x16\xd9\x71\x35"
+ "\x2e\x7e\xdd\x7e\x43\x3c\xb8\x40",
+ .klen = 32,
+ .len = 16,
+ },
+ {
+ .key = "\x93\xfa\x7e\xe2\x0e\x67\xc4\x39"
+ "\xe7\xca\x47\x95\x68\x9d\x5e\x5a"
+ "\x7c\x26\x19\xab\xc6\xca\x6a\x4c"
+ "\x45\xa6\x96\x42\xae\x6c\xff\xe7",
+ .iv = "\xea\x82\x47\x95\x3b\x22\xa1\x3a"
+ "\x6a\xca\x24\x4c\x50\x7e\x23\xcd"
+ "\x0e\x50\xe5\x41\xb6\x65\x29\xd8"
+ "\x30\x23\x00\xd2\x54\xa7\xd6\x56",
+ .ptext = "\xdb\x1f\x1f\xec\xad\x83\x6e\x5d"
+ "\x19\xa5\xf6\x3b\xb4\x93\x5a\x57"
+ "\x6f",
+ .ctext = "\xf1\x46\x6e\x9d\xb3\x01\xf0\x6b"
+ "\xc2\xac\x57\x88\x48\x6d\x40\x72"
+ "\x68",
+ .klen = 32,
+ .len = 17,
+ },
+ {
+ .key = "\x36\x2b\x57\x97\xf8\x5d\xcd\x99"
+ "\x5f\x1a\x5a\x44\x1d\x92\x0f\x27"
+ "\xcc\x16\xd7\x2b\x85\x63\x99\xd3"
+ "\xba\x96\xa1\xdb\xd2\x60\x68\xda",
+ .iv = "\xef\x58\x69\xb1\x2c\x5e\x9a\x47"
+ "\x24\xc1\xb1\x69\xe1\x12\x93\x8f"
+ "\x43\x3d\x6d\x00\xdb\x5e\xd8\xd9"
+ "\x12\x9a\xfe\xd9\xff\x2d\xaa\xc4",
+ .ptext = "\x5e\xa8\x68\x19\x85\x98\x12\x23"
+ "\x26\x0a\xcc\xdb\x0a\x04\xb9\xdf"
+ "\x4d\xb3\x48\x7b\xb0\xe3\xc8\x19"
+ "\x43\x5a\x46\x06\x94\x2d\xf2",
+ .ctext = "\xdb\xfd\xc8\x03\xd0\xec\xc1\xfe"
+ "\xbd\x64\x37\xb8\x82\x43\x62\x4e"
+ "\x7e\x54\xa3\xe2\x24\xa7\x27\xe8"
+ "\xa4\xd5\xb3\x6c\xb2\x26\xb4",
+ .klen = 32,
+ .len = 31,
+ },
+ {
+ .key = "\x03\x65\x03\x6e\x4d\xe6\xe8\x4e"
+ "\x8b\xbe\x22\x19\x48\x31\xee\xd9"
+ "\xa0\x91\x21\xbe\x62\x89\xde\x78"
+ "\xd9\xb0\x36\xa3\x3c\xce\x43\xd5",
+ .iv = "\xa9\xc3\x4b\xe7\x0f\xfc\x6d\xbf"
+ "\x56\x27\x21\x1c\xfc\xd6\x04\x10"
+ "\x5f\x43\xe2\x30\x35\x29\x6c\x10"
+ "\x90\xf1\xbf\x61\xed\x0f\x8a\x91",
+ .ptext = "\x07\xaa\x02\x26\xb4\x98\x11\x5e"
+ "\x33\x41\x21\x51\x51\x63\x2c\x72"
+ "\x00\xab\x32\xa7\x1c\xc8\x3c\x9c"
+ "\x25\x0e\x8b\x9a\xdf\x85\xed\x2d"
+ "\xf4\xf2\xbc\x55\xca\x92\x6d\x22"
+ "\xfd\x22\x3b\x42\x4c\x0b\x74\xec",
+ .ctext = "\x7b\xb1\x43\x6d\xd8\x72\x6c\xf6"
+ "\x67\x6a\x00\xc4\xf1\xf0\xf5\xa4"
+ "\xfc\x60\x91\xab\x46\x0b\x15\xfc"
+ "\xd7\xc1\x28\x15\xa1\xfc\xf7\x68"
+ "\x8e\xcc\x27\x62\x00\x64\x56\x72"
+ "\xa6\x17\xd7\x3f\x67\x80\x10\x58",
+ .klen = 32,
+ .len = 48,
+ },
+ {
+ .key = "\xa5\x28\x24\x34\x1a\x3c\xd8\xf7"
+ "\x05\x91\x8f\xee\x85\x1f\x35\x7f"
+ "\x80\x3d\xfc\x9b\x94\xf6\xfc\x9e"
+ "\x19\x09\x00\xa9\x04\x31\x4f\x11",
+ .iv = "\xa1\xba\x49\x95\xff\x34\x6d\xb8"
+ "\xcd\x87\x5d\x5e\xfd\xea\x85\xdb"
+ "\x8a\x7b\x5e\xb2\x5d\x57\xdd\x62"
+ "\xac\xa9\x8c\x41\x42\x94\x75\xb7",
+ .ptext = "\x69\xb4\xe8\x8c\x37\xe8\x67\x82"
+ "\xf1\xec\x5d\x04\xe5\x14\x91\x13"
+ "\xdf\xf2\x87\x1b\x69\x81\x1d\x71"
+ "\x70\x9e\x9c\x3b\xde\x49\x70\x11"
+ "\xa0\xa3\xdb\x0d\x54\x4f\x66\x69"
+ "\xd7\xdb\x80\xa7\x70\x92\x68\xce"
+ "\x81\x04\x2c\xc6\xab\xae\xe5\x60"
+ "\x15\xe9\x6f\xef\xaa\x8f\xa7\xa7"
+ "\x63\x8f\xf2\xf0\x77\xf1\xa8\xea"
+ "\xe1\xb7\x1f\x9e\xab\x9e\x4b\x3f"
+ "\x07\x87\x5b\x6f\xcd\xa8\xaf\xb9"
+ "\xfa\x70\x0b\x52\xb8\xa8\xa7\x9e"
+ "\x07\x5f\xa6\x0e\xb3\x9b\x79\x13"
+ "\x79\xc3\x3e\x8d\x1c\x2c\x68\xc8"
+ "\x51\x1d\x3c\x7b\x7d\x79\x77\x2a"
+ "\x56\x65\xc5\x54\x23\x28\xb0\x03",
+ .ctext = "\xeb\xf9\x98\x86\x3c\x40\x9f\x16"
+ "\x84\x01\xf9\x06\x0f\xeb\x3c\xa9"
+ "\x4c\xa4\x8e\x5d\xc3\x8d\xe5\xd3"
+ "\xae\xa6\xe6\xcc\xd6\x2d\x37\x4f"
+ "\x99\xc8\xa3\x21\x46\xb8\x69\xf2"
+ "\xe3\x14\x89\xd7\xb9\xf5\x9e\x4e"
+ "\x07\x93\x6f\x78\x8e\x6b\xea\x8f"
+ "\xfb\x43\xb8\x3e\x9b\x4c\x1d\x7e"
+ "\x20\x9a\xc5\x87\xee\xaf\xf6\xf9"
+ "\x46\xc5\x18\x8a\xe8\x69\xe7\x96"
+ "\x52\x55\x5f\x00\x1e\x1a\xdc\xcc"
+ "\x13\xa5\xee\xff\x4b\x27\xca\xdc"
+ "\x10\xa6\x48\x76\x98\x43\x94\xa3"
+ "\xc7\xe2\xc9\x65\x9b\x08\x14\x26"
+ "\x1d\x68\xfb\x15\x0a\x33\x49\x84"
+ "\x84\x33\x5a\x1b\x24\x46\x31\x92",
+ .klen = 32,
+ .len = 128,
+ },
+ {
+ .key = "\x36\x45\x11\xa2\x98\x5f\x96\x7c"
+ "\xc6\xb4\x94\x31\x0a\x67\x09\x32"
+ "\x6c\x6f\x6f\x00\xf0\x17\xcb\xac"
+ "\xa5\xa9\x47\x9e\x2e\x85\x2f\xfa",
+ .iv = "\x28\x88\xaa\x9b\x59\x3b\x1e\x97"
+ "\x82\xe5\x5c\x9e\x6d\x14\x11\x19"
+ "\x6e\x38\x8f\xd5\x40\x2b\xca\xf9"
+ "\x7b\x4c\xe4\xa3\xd0\xd2\x8a\x13",
+ .ptext = "\x95\xd2\xf7\x71\x1b\xca\xa5\x86"
+ "\xd9\x48\x01\x93\x2f\x79\x55\x29"
+ "\x71\x13\x15\x0e\xe6\x12\xbc\x4d"
+ "\x8a\x31\xe3\x40\x2a\xc6\x5e\x0d"
+ "\x68\xbb\x4a\x62\x8d\xc7\x45\x77"
+ "\xd2\xb8\xc7\x1d\xf1\xd2\x5d\x97"
+ "\xcf\xac\x52\xe5\x32\x77\xb6\xda"
+ "\x30\x85\xcf\x2b\x98\xe9\xaa\x34"
+ "\x62\xb5\x23\x9e\xb7\xa6\xd4\xe0"
+ "\xb4\x58\x18\x8c\x4d\xde\x4d\x01"
+ "\x83\x89\x24\xca\xfb\x11\xd4\x82"
+ "\x30\x7a\x81\x35\xa0\xb4\xd4\xb6"
+ "\x84\xea\x47\x91\x8c\x19\x86\x25"
+ "\xa6\x06\x8d\x78\xe6\xed\x87\xeb"
+ "\xda\xea\x73\x7c\xbf\x66\xb8\x72"
+ "\xe3\x0a\xb8\x0c\xcb\x1a\x73\xf1"
+ "\xa7\xca\x0a\xde\x57\x2b\xbd\x2b"
+ "\xeb\x8b\x24\x38\x22\xd3\x0e\x1f"
+ "\x17\xa0\x84\x98\x31\x77\xfd\x34"
+ "\x6a\x4e\x3d\x84\x4c\x0e\xfb\xed"
+ "\xc8\x2a\x51\xfa\xd8\x73\x21\x8a"
+ "\xdb\xb5\xfe\x1f\xee\xc4\xe8\x65"
+ "\x54\x84\xdd\x96\x6d\xfd\xd3\x31"
+ "\x77\x36\x52\x6b\x80\x4f\x9e\xb4"
+ "\xa2\x55\xbf\x66\x41\x49\x4e\x87"
+ "\xa7\x0c\xca\xe7\xa5\xc5\xf6\x6f"
+ "\x27\x56\xe2\x48\x22\xdd\x5f\x59"
+ "\x3c\xf1\x9f\x83\xe5\x2d\xfb\x71"
+ "\xad\xd1\xae\x1b\x20\x5c\x47\xb7"
+ "\x3b\xd3\x14\xce\x81\x42\xb1\x0a"
+ "\xf0\x49\xfa\xc2\xe7\x86\xbf\xcd"
+ "\xb0\x95\x9f\x8f\x79\x41\x54",
+ .ctext = "\xf6\x57\x51\xc4\x25\x61\x2d\xfa"
+ "\xd6\xd9\x3f\x9a\x81\x51\xdd\x8e"
+ "\x3d\xe7\xaa\x2d\xb1\xda\xc8\xa6"
+ "\x9d\xaa\x3c\xab\x62\xf2\x80\xc3"
+ "\x2c\xe7\x58\x72\x1d\x44\xc5\x28"
+ "\x7f\xb4\xf9\xbc\x9c\xb2\xab\x8e"
+ "\xfa\xd1\x4d\x72\xd9\x79\xf5\xa0"
+ "\x24\x3e\x90\x25\x31\x14\x38\x45"
+ "\x59\xc8\xf6\xe2\xc6\xf6\xc1\xa7"
+ "\xb2\xf8\xa7\xa9\x2b\x6f\x12\x3a"
+ "\xb0\x81\xa4\x08\x57\x59\xb1\x56"
+ "\x4c\x8f\x18\x55\x33\x5f\xd6\x6a"
+ "\xc6\xa0\x4b\xd6\x6b\x64\x3e\x9e"
+ "\xfd\x66\x16\xe2\xdb\xeb\x5f\xb3"
+ "\x50\x50\x3e\xde\x8d\x72\x76\x01"
+ "\xbe\xcc\xc9\x52\x09\x2d\x8d\xe7"
+ "\xd6\xc3\x66\xdb\x36\x08\xd1\x77"
+ "\xc8\x73\x46\x26\x24\x29\xbf\x68"
+ "\x2d\x2a\x99\x43\x56\x55\xe4\x93"
+ "\xaf\xae\x4d\xe7\x55\x4a\xc0\x45"
+ "\x26\xeb\x3b\x12\x90\x7c\xdc\xd1"
+ "\xd5\x6f\x0a\xd0\xa9\xd7\x4b\x89"
+ "\x0b\x07\xd8\x86\xad\xa1\xc4\x69"
+ "\x1f\x5e\x8b\xc4\x9e\x91\x41\x25"
+ "\x56\x98\x69\x78\x3a\x9e\xae\x91"
+ "\xd8\xd9\xfa\xfb\xff\x81\x25\x09"
+ "\xfc\xed\x2d\x87\xbc\x04\x62\x97"
+ "\x35\xe1\x26\xc2\x46\x1c\xcf\xd7"
+ "\x14\xed\x02\x09\xa5\xb2\xb6\xaa"
+ "\x27\x4e\x61\xb3\x71\x6b\x47\x16"
+ "\xb7\xe8\xd4\xaf\x52\xeb\x6a\x6b"
+ "\xdb\x4c\x65\x21\x9e\x1c\x36",
+ .klen = 32,
+ .len = 255,
+ },
+ {
+ .key = "\xd3\x81\x72\x18\x23\xff\x6f\x4a"
+ "\x25\x74\x29\x0d\x51\x8a\x0e\x13"
+ "\xc1\x53\x5d\x30\x8d\xee\x75\x0d"
+ "\x14\xd6\x69\xc9\x15\xa9\x0c\x60",
+ .iv = "\x65\x9b\xd4\xa8\x7d\x29\x1d\xf4"
+ "\xc4\xd6\x9b\x6a\x28\xab\x64\xe2"
+ "\x62\x81\x97\xc5\x81\xaa\xf9\x44"
+ "\xc1\x72\x59\x82\xaf\x16\xc8\x2c",
+ .ptext = "\xc7\x6b\x52\x6a\x10\xf0\xcc\x09"
+ "\xc1\x12\x1d\x6d\x21\xa6\x78\xf5"
+ "\x05\xa3\x69\x60\x91\x36\x98\x57"
+ "\xba\x0c\x14\xcc\xf3\x2d\x73\x03"
+ "\xc6\xb2\x5f\xc8\x16\x27\x37\x5d"
+ "\xd0\x0b\x87\xb2\x50\x94\x7b\x58"
+ "\x04\xf4\xe0\x7f\x6e\x57\x8e\xc9"
+ "\x41\x84\xc1\xb1\x7e\x4b\x91\x12"
+ "\x3a\x8b\x5d\x50\x82\x7b\xcb\xd9"
+ "\x9a\xd9\x4e\x18\x06\x23\x9e\xd4"
+ "\xa5\x20\x98\xef\xb5\xda\xe5\xc0"
+ "\x8a\x6a\x83\x77\x15\x84\x1e\xae"
+ "\x78\x94\x9d\xdf\xb7\xd1\xea\x67"
+ "\xaa\xb0\x14\x15\xfa\x67\x21\x84"
+ "\xd3\x41\x2a\xce\xba\x4b\x4a\xe8"
+ "\x95\x62\xa9\x55\xf0\x80\xad\xbd"
+ "\xab\xaf\xdd\x4f\xa5\x7c\x13\x36"
+ "\xed\x5e\x4f\x72\xad\x4b\xf1\xd0"
+ "\x88\x4e\xec\x2c\x88\x10\x5e\xea"
+ "\x12\xc0\x16\x01\x29\xa3\xa0\x55"
+ "\xaa\x68\xf3\xe9\x9d\x3b\x0d\x3b"
+ "\x6d\xec\xf8\xa0\x2d\xf0\x90\x8d"
+ "\x1c\xe2\x88\xd4\x24\x71\xf9\xb3"
+ "\xc1\x9f\xc5\xd6\x76\x70\xc5\x2e"
+ "\x9c\xac\xdb\x90\xbd\x83\x72\xba"
+ "\x6e\xb5\xa5\x53\x83\xa9\xa5\xbf"
+ "\x7d\x06\x0e\x3c\x2a\xd2\x04\xb5"
+ "\x1e\x19\x38\x09\x16\xd2\x82\x1f"
+ "\x75\x18\x56\xb8\x96\x0b\xa6\xf9"
+ "\xcf\x62\xd9\x32\x5d\xa9\xd7\x1d"
+ "\xec\xe4\xdf\x1b\xbe\xf1\x36\xee"
+ "\xe3\x7b\xb5\x2f\xee\xf8\x53\x3d"
+ "\x6a\xb7\x70\xa9\xfc\x9c\x57\x25"
+ "\xf2\x89\x10\xd3\xb8\xa8\x8c\x30"
+ "\xae\x23\x4f\x0e\x13\x66\x4f\xe1"
+ "\xb6\xc0\xe4\xf8\xef\x93\xbd\x6e"
+ "\x15\x85\x6b\xe3\x60\x81\x1d\x68"
+ "\xd7\x31\x87\x89\x09\xab\xd5\x96"
+ "\x1d\xf3\x6d\x67\x80\xca\x07\x31"
+ "\x5d\xa7\xe4\xfb\x3e\xf2\x9b\x33"
+ "\x52\x18\xc8\x30\xfe\x2d\xca\x1e"
+ "\x79\x92\x7a\x60\x5c\xb6\x58\x87"
+ "\xa4\x36\xa2\x67\x92\x8b\xa4\xb7"
+ "\xf1\x86\xdf\xdc\xc0\x7e\x8f\x63"
+ "\xd2\xa2\xdc\x78\xeb\x4f\xd8\x96"
+ "\x47\xca\xb8\x91\xf9\xf7\x94\x21"
+ "\x5f\x9a\x9f\x5b\xb8\x40\x41\x4b"
+ "\x66\x69\x6a\x72\xd0\xcb\x70\xb7"
+ "\x93\xb5\x37\x96\x05\x37\x4f\xe5"
+ "\x8c\xa7\x5a\x4e\x8b\xb7\x84\xea"
+ "\xc7\xfc\x19\x6e\x1f\x5a\xa1\xac"
+ "\x18\x7d\x52\x3b\xb3\x34\x62\x99"
+ "\xe4\x9e\x31\x04\x3f\xc0\x8d\x84"
+ "\x17\x7c\x25\x48\x52\x67\x11\x27"
+ "\x67\xbb\x5a\x85\xca\x56\xb2\x5c"
+ "\xe6\xec\xd5\x96\x3d\x15\xfc\xfb"
+ "\x22\x25\xf4\x13\xe5\x93\x4b\x9a"
+ "\x77\xf1\x52\x18\xfa\x16\x5e\x49"
+ "\x03\x45\xa8\x08\xfa\xb3\x41\x92"
+ "\x79\x50\x33\xca\xd0\xd7\x42\x55"
+ "\xc3\x9a\x0c\x4e\xd9\xa4\x3c\x86"
+ "\x80\x9f\x53\xd1\xa4\x2e\xd1\xbc"
+ "\xf1\x54\x6e\x93\xa4\x65\x99\x8e"
+ "\xdf\x29\xc0\x64\x63\x07\xbb\xea",
+ .ctext = "\x9f\x72\x87\xc7\x17\xfb\x20\x15"
+ "\x65\xb3\x55\xa8\x1c\x8e\x52\x32"
+ "\xb1\x82\x8d\xbf\xb5\x9f\x10\x0a"
+ "\xe8\x0c\x70\x62\xef\x89\xb6\x1f"
+ "\x73\xcc\xe4\xcc\x7a\x3a\x75\x4a"
+ "\x26\xe7\xf5\xd7\x7b\x17\x39\x2d"
+ "\xd2\x27\x6e\xf9\x2f\x9e\xe2\xf6"
+ "\xfa\x16\xc2\xf2\x49\x26\xa7\x5b"
+ "\xe7\xca\x25\x0e\x45\xa0\x34\xc2"
+ "\x9a\x37\x79\x7e\x7c\x58\x18\x94"
+ "\x10\xa8\x7c\x48\xa9\xd7\x63\x89"
+ "\x9e\x61\x4d\x26\x34\xd9\xf0\xb1"
+ "\x2d\x17\x2c\x6f\x7c\x35\x0e\xbe"
+ "\x77\x71\x7c\x17\x5b\xab\x70\xdb"
+ "\x2f\x54\x0f\xa9\xc8\xf4\xf5\xab"
+ "\x52\x04\x3a\xb8\x03\xa7\xfd\x57"
+ "\x45\x5e\xbc\x77\xe1\xee\x79\x8c"
+ "\x58\x7b\x1f\xf7\x75\xde\x68\x17"
+ "\x98\x85\x8a\x18\x5c\xd2\x39\x78"
+ "\x7a\x6f\x26\x6e\xe1\x13\x91\xdd"
+ "\xdf\x0e\x6e\x67\xcc\x51\x53\xd8"
+ "\x17\x5e\xce\xa7\xe4\xaf\xfa\xf3"
+ "\x4f\x9f\x01\x9b\x04\xe7\xfc\xf9"
+ "\x6a\xdc\x1d\x0c\x9a\xaa\x3a\x7a"
+ "\x73\x03\xdf\xbf\x3b\x82\xbe\xb0"
+ "\xb4\xa4\xcf\x07\xd7\xde\x71\x25"
+ "\xc5\x10\xee\x0a\x15\x96\x8b\x4f"
+ "\xfe\xb8\x28\xbd\x4a\xcd\xeb\x9f"
+ "\x5d\x00\xc1\xee\xe8\x16\x44\xec"
+ "\xe9\x7b\xd6\x85\x17\x29\xcf\x58"
+ "\x20\xab\xf7\xce\x6b\xe7\x71\x7d"
+ "\x4f\xa8\xb0\xe9\x7d\x70\xd6\x0b"
+ "\x2e\x20\xb1\x1a\x63\x37\xaa\x2c"
+ "\x94\xee\xd5\xf6\x58\x2a\xf4\x7a"
+ "\x4c\xba\xf5\xe9\x3c\x6f\x95\x13"
+ "\x5f\x96\x81\x5b\xb5\x62\xf2\xd7"
+ "\x8d\xbe\xa1\x31\x51\xe6\xfe\xc9"
+ "\x07\x7d\x0f\x00\x3a\x66\x8c\x4b"
+ "\x94\xaa\xe5\x56\xde\xcd\x74\xa7"
+ "\x48\x67\x6f\xed\xc9\x6a\xef\xaf"
+ "\x9a\xb7\xae\x60\xfa\xc0\x37\x39"
+ "\xa5\x25\xe5\x22\xea\x82\x55\x68"
+ "\x3e\x30\xc3\x5a\xb6\x29\x73\x7a"
+ "\xb6\xfb\x34\xee\x51\x7c\x54\xe5"
+ "\x01\x4d\x72\x25\x32\x4a\xa3\x68"
+ "\x80\x9a\x89\xc5\x11\x66\x4c\x8c"
+ "\x44\x50\xbe\xd7\xa0\xee\xa6\xbb"
+ "\x92\x0c\xe6\xd7\x83\x51\xb1\x69"
+ "\x63\x40\xf3\xf4\x92\x84\xc4\x38"
+ "\x29\xfb\xb4\x84\xa0\x19\x75\x16"
+ "\x60\xbf\x0a\x9c\x89\xee\xad\xb4"
+ "\x43\xf9\x71\x39\x45\x7c\x24\x83"
+ "\x30\xbb\xee\x28\xb0\x86\x7b\xec"
+ "\x93\xc1\xbf\xb9\x97\x1b\x96\xef"
+ "\xee\x58\x35\x61\x12\x19\xda\x25"
+ "\x77\xe5\x80\x1a\x31\x27\x9b\xe4"
+ "\xda\x8b\x7e\x51\x4d\xcb\x01\x19"
+ "\x4f\xdc\x92\x1a\x17\xd5\x6b\xf4"
+ "\x50\xe3\x06\xe4\x76\x9f\x65\x00"
+ "\xbd\x7a\xe2\x64\x26\xf2\xe4\x7e"
+ "\x40\xf2\x80\xab\x62\xd5\xef\x23"
+ "\x8b\xfb\x6f\x24\x6e\x9b\x66\x0e"
+ "\xf4\x1c\x24\x1e\x1d\x26\x95\x09"
+ "\x94\x3c\xb2\xb6\x02\xa7\xd9\x9a",
+ .klen = 32,
+ .len = 512,
+ },
+
+};
+
#endif /* _CRYPTO_TESTMGR_H */
--
2.36.1.124.g0e6072fb45-goog


2022-05-23 07:36:28

by Nathan Huckleberry

[permalink] [raw]
Subject: [PATCH v9 7/9] crypto: x86/polyval: Add PCLMULQDQ accelerated implementation of POLYVAL

Add hardware accelerated version of POLYVAL for x86-64 CPUs with
PCLMULQDQ support.

This implementation is accelerated using PCLMULQDQ instructions to
perform the finite field computations. For added efficiency, 8 blocks
of the message are processed simultaneously by precomputing the first
8 powers of the key.

Schoolbook multiplication is used instead of Karatsuba multiplication
because it was found to be slightly faster on x86-64 machines.
Montgomery reduction must be used instead of Barrett reduction due to
the difference in modulus between POLYVAL's field and other finite
fields.
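
As a reading aid, here is a minimal, standalone C model of the reduction
strategy (not kernel code, and far slower than the real thing): clmul64()
emulates a single PCLMULQDQ bit by bit, and montgomery_reduce() performs the
same two "folds" as the montgomery_reduction macro in the patch below. All
names here are illustrative only.

#include <stdint.h>

struct u128 { uint64_t lo, hi; };

/* Emulate one PCLMULQDQ: 64x64 -> 128-bit carryless multiplication. */
static struct u128 clmul64(uint64_t a, uint64_t b)
{
	struct u128 r = { 0, 0 };
	int i;

	for (i = 0; i < 64; i++) {
		if (b & (1ULL << i)) {
			r.lo ^= a << i;
			if (i)
				r.hi ^= a >> (64 - i);
		}
	}
	return r;
}

/*
 * Compute p(x) * x^(-128) mod g(x) for the 256-bit product p3:p2:p1:p0
 * by folding the low 128 bits into the high 128 bits twice, mirroring
 * the steps described in the montgomery_reduction macro's comment.
 */
static struct u128 montgomery_reduce(uint64_t p3, uint64_t p2,
				     uint64_t p1, uint64_t p0)
{
	/* g*(x): bits 64-127 of g(x) = x^128 + x^127 + x^126 + x^121 + 1 */
	const uint64_t gstar = 0xc200000000000000ULL;
	struct u128 t = clmul64(p0, gstar);        /* T = g*(x) * P_0 */
	struct u128 v = clmul64(p1 ^ t.lo, gstar); /* V = g*(x) * (P_1 + T_0) */
	struct u128 r;

	r.lo = p2 ^ p0 ^ t.hi ^ v.lo;
	r.hi = p3 ^ p1 ^ t.lo ^ v.hi;
	return r;
}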

More information on POLYVAL can be found in the HCTR2 paper:
"Length-preserving encryption with HCTR2":
https://eprint.iacr.org/2021/1441.pdf

Signed-off-by: Nathan Huckleberry <[email protected]>
Reviewed-by: Ard Biesheuvel <[email protected]>
Reviewed-by: Eric Biggers <[email protected]>
---
arch/x86/crypto/Makefile | 3 +
arch/x86/crypto/polyval-clmulni_asm.S | 321 +++++++++++++++++++++++++
arch/x86/crypto/polyval-clmulni_glue.c | 203 ++++++++++++++++
crypto/Kconfig | 9 +
crypto/polyval-generic.c | 40 +++
include/crypto/polyval.h | 5 +
6 files changed, 581 insertions(+)
create mode 100644 arch/x86/crypto/polyval-clmulni_asm.S
create mode 100644 arch/x86/crypto/polyval-clmulni_glue.c

diff --git a/arch/x86/crypto/Makefile b/arch/x86/crypto/Makefile
index 2831685adf6f..b9847152acd8 100644
--- a/arch/x86/crypto/Makefile
+++ b/arch/x86/crypto/Makefile
@@ -69,6 +69,9 @@ libblake2s-x86_64-y := blake2s-core.o blake2s-glue.o
obj-$(CONFIG_CRYPTO_GHASH_CLMUL_NI_INTEL) += ghash-clmulni-intel.o
ghash-clmulni-intel-y := ghash-clmulni-intel_asm.o ghash-clmulni-intel_glue.o

+obj-$(CONFIG_CRYPTO_POLYVAL_CLMUL_NI) += polyval-clmulni.o
+polyval-clmulni-y := polyval-clmulni_asm.o polyval-clmulni_glue.o
+
obj-$(CONFIG_CRYPTO_CRC32C_INTEL) += crc32c-intel.o
crc32c-intel-y := crc32c-intel_glue.o
crc32c-intel-$(CONFIG_64BIT) += crc32c-pcl-intel-asm_64.o
diff --git a/arch/x86/crypto/polyval-clmulni_asm.S b/arch/x86/crypto/polyval-clmulni_asm.S
new file mode 100644
index 000000000000..a6ebe4e7dd2b
--- /dev/null
+++ b/arch/x86/crypto/polyval-clmulni_asm.S
@@ -0,0 +1,321 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright 2021 Google LLC
+ */
+/*
+ * This is an efficient implementation of POLYVAL using Intel PCLMULQDQ-NI
+ * instructions. It works on 8 blocks at a time by precomputing the first 8
+ * key powers h^8, ..., h^1 in the POLYVAL finite field. This precomputation
+ * allows us to split finite field multiplication into two steps.
+ *
+ * In the first step, we consider h^i, m_i as normal polynomials of degree less
+ * than 128. We then compute p(x) = h^8m_0 + ... + h^1m_7 where multiplication
+ * is simply polynomial multiplication.
+ *
+ * In the second step, we compute the reduction of p(x) modulo the finite field
+ * modulus g(x) = x^128 + x^127 + x^126 + x^121 + 1.
+ *
+ * This two-step process is equivalent to computing h^8m_0 + ... + h^1m_7 where
+ * multiplication is finite field multiplication. The advantage is that the
+ * two-step process only requires 1 finite field reduction for every 8
+ * polynomial multiplications. Further parallelism is gained by interleaving the
+ * multiplications and polynomial reductions.
+ */
+
+#include <linux/linkage.h>
+#include <asm/frame.h>
+
+#define STRIDE_BLOCKS 8
+
+#define GSTAR %xmm7
+#define PL %xmm8
+#define PH %xmm9
+#define TMP_XMM %xmm11
+#define LO %xmm12
+#define HI %xmm13
+#define MI %xmm14
+#define SUM %xmm15
+
+#define KEY_POWERS %rdi
+#define MSG %rsi
+#define BLOCKS_LEFT %rdx
+#define ACCUMULATOR %rcx
+#define TMP %rax
+
+.section .rodata.cst16.gstar, "aM", @progbits, 16
+.align 16
+
+.Lgstar:
+ .quad 0xc200000000000000, 0xc200000000000000
+
+.text
+
+/*
+ * Performs schoolbook1_iteration on each of the \count pairs of 128-bit
+ * polynomials pointed to by MSG and KEY_POWERS.
+ */
+.macro schoolbook1 count
+ .set i, 0
+ .rept (\count)
+ schoolbook1_iteration i 0
+	.set i, (i + 1)
+ .endr
+.endm
+
+/*
+ * Computes the product of two 128-bit polynomials at the memory locations
+ * specified by (MSG + 16*i) and (KEY_POWERS + 16*i) and XORs the components of
+ * the 256-bit product into LO, MI, HI.
+ *
+ * Given:
+ * X = [X_1 : X_0]
+ * Y = [Y_1 : Y_0]
+ *
+ * We compute:
+ * LO += X_0 * Y_0
+ * MI += X_0 * Y_1 + X_1 * Y_0
+ * HI += X_1 * Y_1
+ *
+ * Later, the 256-bit result can be extracted as:
+ * [HI_1 : HI_0 + MI_1 : LO_1 + MI_0 : LO_0]
+ * This step is done when computing the polynomial reduction for efficiency
+ * reasons.
+ *
+ * If xor_sum == 1, then also XOR the value of SUM into m_0. This avoids an
+ * extra multiplication of SUM and h^8.
+ */
+.macro schoolbook1_iteration i xor_sum
+ movups (16*\i)(MSG), %xmm0
+ .if (\i == 0 && \xor_sum == 1)
+ pxor SUM, %xmm0
+ .endif
+ vpclmulqdq $0x01, (16*\i)(KEY_POWERS), %xmm0, %xmm2
+ vpclmulqdq $0x00, (16*\i)(KEY_POWERS), %xmm0, %xmm1
+ vpclmulqdq $0x10, (16*\i)(KEY_POWERS), %xmm0, %xmm3
+ vpclmulqdq $0x11, (16*\i)(KEY_POWERS), %xmm0, %xmm4
+ vpxor %xmm2, MI, MI
+ vpxor %xmm1, LO, LO
+ vpxor %xmm4, HI, HI
+ vpxor %xmm3, MI, MI
+.endm
+
+/*
+ * Performs the same computation as schoolbook1_iteration, except we expect the
+ * arguments to already be loaded into xmm0 and xmm1 and we set the result
+ * registers LO, MI, and HI directly rather than XOR'ing into them.
+ */
+.macro schoolbook1_noload
+ vpclmulqdq $0x01, %xmm0, %xmm1, MI
+ vpclmulqdq $0x10, %xmm0, %xmm1, %xmm2
+ vpclmulqdq $0x00, %xmm0, %xmm1, LO
+ vpclmulqdq $0x11, %xmm0, %xmm1, HI
+ vpxor %xmm2, MI, MI
+.endm
+
+/*
+ * Computes the 256-bit polynomial represented by LO, HI, MI. Stores
+ * the result in PL, PH.
+ * [PH : PL] = [HI_1 : HI_0 + MI_1 : LO_1 + MI_0 : LO_0]
+ */
+.macro schoolbook2
+ vpslldq $8, MI, PL
+ vpsrldq $8, MI, PH
+ pxor LO, PL
+ pxor HI, PH
+.endm
+
+/*
+ * Computes the 128-bit reduction of PH : PL. Stores the result in dest.
+ *
+ * This macro computes p(x) mod g(x) where p(x) is in Montgomery form and g(x) =
+ * x^128 + x^127 + x^126 + x^121 + 1.
+ *
+ * We have a 256-bit polynomial PH : PL = P_3 : P_2 : P_1 : P_0 that is the
+ * product of two 128-bit polynomials in Montgomery form. We need to reduce it
+ * mod g(x). Also, since polynomials in Montgomery form have an "extra" factor
+ * of x^128, this product has two extra factors of x^128. To get it back into
+ * Montgomery form, we need to remove one of these factors by dividing by x^128.
+ *
+ * To accomplish both of these goals, we add multiples of g(x) that cancel out
+ * the low 128 bits P_1 : P_0, leaving just the high 128 bits. Since the low
+ * bits are zero, the polynomial division by x^128 can be done by right shifting.
+ *
+ * Since the only nonzero term in the low 64 bits of g(x) is the constant term,
+ * the multiple of g(x) needed to cancel out P_0 is P_0 * g(x). The CPU can
+ * only do 64x64 bit multiplications, so split P_0 * g(x) into x^128 * P_0 +
+ * x^64 * g*(x) * P_0 + P_0, where g*(x) is bits 64-127 of g(x). Adding this to
+ * the original polynomial gives P_3 : P_2 + P_0 + T_1 : P_1 + T_0 : 0, where T
+ * = T_1 : T_0 = g*(x) * P_0. Thus, bits 0-63 got "folded" into bits 64-191.
+ *
+ * Repeating this same process on the next 64 bits "folds" bits 64-127 into bits
+ * 128-255, giving the answer in bits 128-255. This time, we need to cancel P_1
+ * + T_0 in bits 64-127. The multiple of g(x) required is (P_1 + T_0) * g(x) *
+ * x^64. Adding this to our previous computation gives P_3 + P_1 + T_0 + V_1 :
+ * P_2 + P_0 + T_1 + V_0 : 0 : 0, where V = V_1 : V_0 = g*(x) * (P_1 + T_0).
+ *
+ * So our final computation is:
+ * T = T_1 : T_0 = g*(x) * P_0
+ * V = V_1 : V_0 = g*(x) * (P_1 + T_0)
+ * p(x) / x^{128} mod g(x) = P_3 + P_1 + T_0 + V_1 : P_2 + P_0 + T_1 + V_0
+ *
+ * The implementation below saves an XOR instruction by computing P_1 + T_0 : P_0
+ * + T_1 and XORing into dest, rather than separately XORing P_1 : P_0 and T_0 :
+ * T_1 into dest. This allows us to reuse P_1 + T_0 when computing V.
+ */
+.macro montgomery_reduction dest
+ vpclmulqdq $0x00, PL, GSTAR, TMP_XMM # TMP_XMM = T_1 : T_0 = P_0 * g*(x)
+ pshufd $0b01001110, TMP_XMM, TMP_XMM # TMP_XMM = T_0 : T_1
+ pxor PL, TMP_XMM # TMP_XMM = P_1 + T_0 : P_0 + T_1
+ pxor TMP_XMM, PH # PH = P_3 + P_1 + T_0 : P_2 + P_0 + T_1
+ pclmulqdq $0x11, GSTAR, TMP_XMM # TMP_XMM = V_1 : V_0 = V = [(P_1 + T_0) * g*(x)]
+ vpxor TMP_XMM, PH, \dest
+.endm
+
+/*
+ * Compute schoolbook multiplication for 8 blocks
+ * m_0h^8 + ... + m_7h^1
+ *
+ * If reduce is set, also computes the Montgomery reduction of the
+ * previous full_stride call and XORs with the first message block.
+ * (m_0 + REDUCE(PL, PH))h^8 + ... + m_7h^1.
+ * I.e., the first multiplication uses m_0 + REDUCE(PL, PH) instead of m_0.
+ */
+.macro full_stride reduce
+ pxor LO, LO
+ pxor HI, HI
+ pxor MI, MI
+
+ schoolbook1_iteration 7 0
+ .if \reduce
+ vpclmulqdq $0x00, PL, GSTAR, TMP_XMM
+ .endif
+
+ schoolbook1_iteration 6 0
+ .if \reduce
+ pshufd $0b01001110, TMP_XMM, TMP_XMM
+ .endif
+
+ schoolbook1_iteration 5 0
+ .if \reduce
+ pxor PL, TMP_XMM
+ .endif
+
+ schoolbook1_iteration 4 0
+ .if \reduce
+ pxor TMP_XMM, PH
+ .endif
+
+ schoolbook1_iteration 3 0
+ .if \reduce
+ pclmulqdq $0x11, GSTAR, TMP_XMM
+ .endif
+
+ schoolbook1_iteration 2 0
+ .if \reduce
+ vpxor TMP_XMM, PH, SUM
+ .endif
+
+ schoolbook1_iteration 1 0
+
+ schoolbook1_iteration 0 1
+
+ addq $(8*16), MSG
+ schoolbook2
+.endm
+
+/*
+ * Process BLOCKS_LEFT blocks, where 0 < BLOCKS_LEFT < STRIDE_BLOCKS
+ */
+.macro partial_stride
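+	/*
+	 * Advance KEY_POWERS to point at h^BLOCKS_LEFT so that the
+	 * remaining blocks are multiplied by h^BLOCKS_LEFT, ..., h^1.
+	 */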
+ mov BLOCKS_LEFT, TMP
+ shlq $4, TMP
+ addq $(16*STRIDE_BLOCKS), KEY_POWERS
+ subq TMP, KEY_POWERS
+
+ movups (MSG), %xmm0
+ pxor SUM, %xmm0
+ movaps (KEY_POWERS), %xmm1
+ schoolbook1_noload
+ dec BLOCKS_LEFT
+ addq $16, MSG
+ addq $16, KEY_POWERS
+
+ test $4, BLOCKS_LEFT
+ jz .Lpartial4BlocksDone
+ schoolbook1 4
+ addq $(4*16), MSG
+ addq $(4*16), KEY_POWERS
+.Lpartial4BlocksDone:
+ test $2, BLOCKS_LEFT
+ jz .Lpartial2BlocksDone
+ schoolbook1 2
+ addq $(2*16), MSG
+ addq $(2*16), KEY_POWERS
+.Lpartial2BlocksDone:
+ test $1, BLOCKS_LEFT
+ jz .LpartialDone
+ schoolbook1 1
+.LpartialDone:
+ schoolbook2
+ montgomery_reduction SUM
+.endm
+
+/*
+ * Perform Montgomery multiplication in GF(2^128) and store the result in op1.
+ *
+ * Computes op1*op2*x^{-128} mod x^128 + x^127 + x^126 + x^121 + 1.
+ * If op1 and op2 are in Montgomery form, this computes the Montgomery
+ * form of op1*op2.
+ *
+ * void clmul_polyval_mul(u8 *op1, const u8 *op2);
+ */
+SYM_FUNC_START(clmul_polyval_mul)
+ FRAME_BEGIN
+ vmovdqa .Lgstar(%rip), GSTAR
+ movups (%rdi), %xmm0
+ movups (%rsi), %xmm1
+ schoolbook1_noload
+ schoolbook2
+ montgomery_reduction SUM
+ movups SUM, (%rdi)
+ FRAME_END
+ RET
+SYM_FUNC_END(clmul_polyval_mul)
+
+/*
+ * Perform polynomial evaluation as specified by POLYVAL. This computes:
+ * h^n * accumulator + h^n * m_0 + ... + h^1 * m_{n-1}
+ * where n=nblocks, h is the hash key, and m_i are the message blocks.
+ *
+ * rdi - pointer to precomputed key powers h^8 ... h^1
+ * rsi - pointer to message blocks
+ * rdx - number of blocks to hash
+ * rcx - pointer to the accumulator
+ *
+ * void clmul_polyval_update(const struct polyval_tfm_ctx *keys,
+ * const u8 *in, size_t nblocks, u8 *accumulator);
+ */
+SYM_FUNC_START(clmul_polyval_update)
+ FRAME_BEGIN
+ vmovdqa .Lgstar(%rip), GSTAR
+ movups (ACCUMULATOR), SUM
+ subq $STRIDE_BLOCKS, BLOCKS_LEFT
+ js .LstrideLoopExit
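+	/*
+	 * Peel off the first stride with reduce=0: there is no previous
+	 * stride whose unreduced product needs folding in yet.
+	 */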
+ full_stride 0
+ subq $STRIDE_BLOCKS, BLOCKS_LEFT
+ js .LstrideLoopExitReduce
+.LstrideLoop:
+ full_stride 1
+ subq $STRIDE_BLOCKS, BLOCKS_LEFT
+ jns .LstrideLoop
+.LstrideLoopExitReduce:
+ montgomery_reduction SUM
+.LstrideLoopExit:
+ add $STRIDE_BLOCKS, BLOCKS_LEFT
+ jz .LskipPartial
+ partial_stride
+.LskipPartial:
+ movups SUM, (ACCUMULATOR)
+ FRAME_END
+ RET
+SYM_FUNC_END(clmul_polyval_update)
diff --git a/arch/x86/crypto/polyval-clmulni_glue.c b/arch/x86/crypto/polyval-clmulni_glue.c
new file mode 100644
index 000000000000..b7664d018851
--- /dev/null
+++ b/arch/x86/crypto/polyval-clmulni_glue.c
@@ -0,0 +1,203 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Glue code for POLYVAL using PCLMULQDQ-NI
+ *
+ * Copyright (c) 2007 Nokia Siemens Networks - Mikko Herranen <[email protected]>
+ * Copyright (c) 2009 Intel Corp.
+ * Author: Huang Ying <[email protected]>
+ * Copyright 2021 Google LLC
+ */
+
+/*
+ * Glue code based on ghash-clmulni-intel_glue.c.
+ *
+ * This implementation of POLYVAL uses Montgomery multiplication
+ * accelerated by PCLMULQDQ-NI to implement the finite field
+ * operations.
+ */
+
+#include <crypto/algapi.h>
+#include <crypto/internal/hash.h>
+#include <crypto/internal/simd.h>
+#include <crypto/polyval.h>
+#include <linux/crypto.h>
+#include <linux/init.h>
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <asm/cpu_device_id.h>
+#include <asm/simd.h>
+
+#define NUM_KEY_POWERS 8
+
+struct polyval_tfm_ctx {
+ /*
+ * These powers must be in the order h^8, ..., h^1.
+ */
+ u8 key_powers[NUM_KEY_POWERS][POLYVAL_BLOCK_SIZE];
+};
+
+struct polyval_desc_ctx {
+ u8 buffer[POLYVAL_BLOCK_SIZE];
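+	/* Bytes still needed to complete the buffered block. */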
+ u32 bytes;
+};
+
+asmlinkage void clmul_polyval_update(const struct polyval_tfm_ctx *keys,
+ const u8 *in, size_t nblocks, u8 *accumulator);
+asmlinkage void clmul_polyval_mul(u8 *op1, const u8 *op2);
+
+static void internal_polyval_update(const struct polyval_tfm_ctx *keys,
+ const u8 *in, size_t nblocks, u8 *accumulator)
+{
+ if (likely(crypto_simd_usable())) {
+ kernel_fpu_begin();
+ clmul_polyval_update(keys, in, nblocks, accumulator);
+ kernel_fpu_end();
+ } else {
+ polyval_update_non4k(keys->key_powers[NUM_KEY_POWERS-1], in,
+ nblocks, accumulator);
+ }
+}
+
+static void internal_polyval_mul(u8 *op1, const u8 *op2)
+{
+ if (likely(crypto_simd_usable())) {
+ kernel_fpu_begin();
+ clmul_polyval_mul(op1, op2);
+ kernel_fpu_end();
+ } else {
+ polyval_mul_non4k(op1, op2);
+ }
+}
+
+static int polyval_x86_setkey(struct crypto_shash *tfm,
+ const u8 *key, unsigned int keylen)
+{
+ struct polyval_tfm_ctx *tctx = crypto_shash_ctx(tfm);
+ int i;
+
+ if (keylen != POLYVAL_BLOCK_SIZE)
+ return -EINVAL;
+
+ memcpy(tctx->key_powers[NUM_KEY_POWERS-1], key, POLYVAL_BLOCK_SIZE);
+
+ for (i = NUM_KEY_POWERS-2; i >= 0; i--) {
+ memcpy(tctx->key_powers[i], key, POLYVAL_BLOCK_SIZE);
+ internal_polyval_mul(tctx->key_powers[i],
+ tctx->key_powers[i+1]);
+ }
+
+ return 0;
+}
+
+static int polyval_x86_init(struct shash_desc *desc)
+{
+ struct polyval_desc_ctx *dctx = shash_desc_ctx(desc);
+
+ memset(dctx, 0, sizeof(*dctx));
+
+ return 0;
+}
+
+static int polyval_x86_update(struct shash_desc *desc,
+ const u8 *src, unsigned int srclen)
+{
+ struct polyval_desc_ctx *dctx = shash_desc_ctx(desc);
+ const struct polyval_tfm_ctx *tctx = crypto_shash_ctx(desc->tfm);
+ u8 *pos;
+ unsigned int nblocks;
+ unsigned int n;
+
+ if (dctx->bytes) {
+ n = min(srclen, dctx->bytes);
+ pos = dctx->buffer + POLYVAL_BLOCK_SIZE - dctx->bytes;
+
+ dctx->bytes -= n;
+ srclen -= n;
+
+ while (n--)
+ *pos++ ^= *src++;
+
+ if (!dctx->bytes)
+ internal_polyval_mul(dctx->buffer,
+ tctx->key_powers[NUM_KEY_POWERS-1]);
+ }
+
+ while (srclen >= POLYVAL_BLOCK_SIZE) {
+ /* Allow rescheduling every 4K bytes. */
+ nblocks = min(srclen, 4096U) / POLYVAL_BLOCK_SIZE;
+ internal_polyval_update(tctx, src, nblocks, dctx->buffer);
+ srclen -= nblocks * POLYVAL_BLOCK_SIZE;
+ src += nblocks * POLYVAL_BLOCK_SIZE;
+ }
+
+ if (srclen) {
+ dctx->bytes = POLYVAL_BLOCK_SIZE - srclen;
+ pos = dctx->buffer;
+ while (srclen--)
+ *pos++ ^= *src++;
+ }
+
+ return 0;
+}
+
+static int polyval_x86_final(struct shash_desc *desc, u8 *dst)
+{
+ struct polyval_desc_ctx *dctx = shash_desc_ctx(desc);
+ const struct polyval_tfm_ctx *tctx = crypto_shash_ctx(desc->tfm);
+
+ if (dctx->bytes) {
+ internal_polyval_mul(dctx->buffer,
+ tctx->key_powers[NUM_KEY_POWERS-1]);
+ }
+
+ memcpy(dst, dctx->buffer, POLYVAL_BLOCK_SIZE);
+
+ return 0;
+}
+
+static struct shash_alg polyval_alg = {
+ .digestsize = POLYVAL_DIGEST_SIZE,
+ .init = polyval_x86_init,
+ .update = polyval_x86_update,
+ .final = polyval_x86_final,
+ .setkey = polyval_x86_setkey,
+ .descsize = sizeof(struct polyval_desc_ctx),
+ .base = {
+ .cra_name = "polyval",
+ .cra_driver_name = "polyval-clmulni",
+ .cra_priority = 200,
+ .cra_blocksize = POLYVAL_BLOCK_SIZE,
+ .cra_ctxsize = sizeof(struct polyval_tfm_ctx),
+ .cra_module = THIS_MODULE,
+ },
+};
+
+__maybe_unused static const struct x86_cpu_id pcmul_cpu_id[] = {
+ X86_MATCH_FEATURE(X86_FEATURE_PCLMULQDQ, NULL),
+ {}
+};
+MODULE_DEVICE_TABLE(x86cpu, pcmul_cpu_id);
+
+static int __init polyval_clmulni_mod_init(void)
+{
+ if (!x86_match_cpu(pcmul_cpu_id))
+ return -ENODEV;
+
+ if (!boot_cpu_has(X86_FEATURE_AVX))
+ return -ENODEV;
+
+ return crypto_register_shash(&polyval_alg);
+}
+
+static void __exit polyval_clmulni_mod_exit(void)
+{
+ crypto_unregister_shash(&polyval_alg);
+}
+
+module_init(polyval_clmulni_mod_init);
+module_exit(polyval_clmulni_mod_exit);
+
+MODULE_LICENSE("GPL");
+MODULE_DESCRIPTION("POLYVAL hash function accelerated by PCLMULQDQ-NI");
+MODULE_ALIAS_CRYPTO("polyval");
+MODULE_ALIAS_CRYPTO("polyval-clmulni");
diff --git a/crypto/Kconfig b/crypto/Kconfig
index aa06af0e0ebe..e5ccc43b6775 100644
--- a/crypto/Kconfig
+++ b/crypto/Kconfig
@@ -787,6 +787,15 @@ config CRYPTO_POLYVAL
POLYVAL is the hash function used in HCTR2. It is not a general-purpose
cryptographic hash function.

+config CRYPTO_POLYVAL_CLMUL_NI
+ tristate "POLYVAL hash function (CLMUL-NI accelerated)"
+ depends on X86 && 64BIT
+ select CRYPTO_POLYVAL
+ help
+ This is the x86_64 CLMUL-NI accelerated implementation of POLYVAL. It is
+ used to efficiently implement HCTR2 on x86-64 processors that support
+ carry-less multiplication instructions.
+
config CRYPTO_POLY1305
tristate "Poly1305 authenticator algorithm"
select CRYPTO_HASH
diff --git a/crypto/polyval-generic.c b/crypto/polyval-generic.c
index bf2b03b7bfc0..16bfa6925b31 100644
--- a/crypto/polyval-generic.c
+++ b/crypto/polyval-generic.c
@@ -76,6 +76,46 @@ static void copy_and_reverse(u8 dst[POLYVAL_BLOCK_SIZE],
put_unaligned(swab64(b), (u64 *)&dst[0]);
}

+/*
+ * Performs multiplication in the POLYVAL field using the GHASH field as a
+ * subroutine. This function is used as a fallback for hardware accelerated
+ * implementations when simd registers are unavailable.
+ *
+ * Note: This function is not used for polyval-generic; instead, we use the
+ * 4k lookup table implementation for finite field multiplication.
+ */
+void polyval_mul_non4k(u8 *op1, const u8 *op2)
+{
+ be128 a, b;
+
+ // Assume one argument is in Montgomery form and one is not.
+ copy_and_reverse((u8 *)&a, op1);
+ copy_and_reverse((u8 *)&b, op2);
+ gf128mul_x_lle(&a, &a);
+ gf128mul_lle(&a, &b);
+ copy_and_reverse(op1, (u8 *)&a);
+}
+EXPORT_SYMBOL_GPL(polyval_mul_non4k);
+
+/*
+ * Perform a POLYVAL update using non4k multiplication. This function is used
+ * as a fallback for hardware accelerated implementations when simd registers
+ * are unavailable.
+ *
+ * Note: This function is not used for polyval-generic; instead, we use the
+ * 4k lookup table implementation of finite field multiplication.
+ */
+void polyval_update_non4k(const u8 *key, const u8 *in,
+ size_t nblocks, u8 *accumulator)
+{
+ while (nblocks--) {
+ crypto_xor(accumulator, in, POLYVAL_BLOCK_SIZE);
+ polyval_mul_non4k(accumulator, key);
+ in += POLYVAL_BLOCK_SIZE;
+ }
+}
+EXPORT_SYMBOL_GPL(polyval_update_non4k);
+
static int polyval_setkey(struct crypto_shash *tfm,
const u8 *key, unsigned int keylen)
{
diff --git a/include/crypto/polyval.h b/include/crypto/polyval.h
index b14c38aa9166..1d630f371f77 100644
--- a/include/crypto/polyval.h
+++ b/include/crypto/polyval.h
@@ -14,4 +14,9 @@
#define POLYVAL_BLOCK_SIZE 16
#define POLYVAL_DIGEST_SIZE 16

+void polyval_mul_non4k(u8 *op1, const u8 *op2);
+
+void polyval_update_non4k(const u8 *key, const u8 *in,
+ size_t nblocks, u8 *accumulator);
+
#endif
--
2.36.1.124.g0e6072fb45-goog


2022-05-23 07:46:15

by Nathan Huckleberry

[permalink] [raw]
Subject: [PATCH v9 2/9] crypto: polyval - Add POLYVAL support

Add support for POLYVAL, an ε-Δ-universal hash function similar to
GHASH. This patch only uses POLYVAL as a component to implement HCTR2
mode. It should be noted that POLYVAL was originally specified for use
in AES-GCM-SIV (RFC 8452), but the kernel does not currently support
this mode.

POLYVAL is implemented as an shash algorithm. The implementation is
modified from ghash-generic.c.
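
For context, here is a rough sketch of how a kernel user could exercise the
new "polyval" algorithm through the shash API. This is illustrative only and
not part of the patch; the helper name is made up and error handling is
abbreviated:

#include <linux/err.h>
#include <crypto/hash.h>
#include <crypto/polyval.h>

static int polyval_digest_example(const u8 key[POLYVAL_BLOCK_SIZE],
				  const u8 *msg, unsigned int len,
				  u8 out[POLYVAL_DIGEST_SIZE])
{
	struct crypto_shash *tfm;
	int err;

	tfm = crypto_alloc_shash("polyval", 0, 0);
	if (IS_ERR(tfm))
		return PTR_ERR(tfm);

	err = crypto_shash_setkey(tfm, key, POLYVAL_BLOCK_SIZE);
	if (!err) {
		SHASH_DESC_ON_STACK(desc, tfm);

		desc->tfm = tfm;
		err = crypto_shash_digest(desc, msg, len, out);
	}
	crypto_free_shash(tfm);
	return err;
}

As the comments in polyval-generic.c stress, POLYVAL is only meant to be
consumed by modes such as HCTR2, not used as a general-purpose hash.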

For more information on POLYVAL see:
Length-preserving encryption with HCTR2:
https://eprint.iacr.org/2021/1441.pdf
AES-GCM-SIV: Nonce Misuse-Resistant Authenticated Encryption:
https://datatracker.ietf.org/doc/html/rfc8452

Signed-off-by: Nathan Huckleberry <[email protected]>
Reviewed-by: Eric Biggers <[email protected]>
Reviewed-by: Ard Biesheuvel <[email protected]>
---
crypto/Kconfig | 8 ++
crypto/Makefile | 1 +
crypto/polyval-generic.c | 205 +++++++++++++++++++++++++++++++++++++++
crypto/tcrypt.c | 4 +
crypto/testmgr.c | 6 ++
crypto/testmgr.h | 171 ++++++++++++++++++++++++++++++++
include/crypto/polyval.h | 17 ++++
7 files changed, 412 insertions(+)
create mode 100644 crypto/polyval-generic.c
create mode 100644 include/crypto/polyval.h

diff --git a/crypto/Kconfig b/crypto/Kconfig
index 47752aaa16ff..00139845d76d 100644
--- a/crypto/Kconfig
+++ b/crypto/Kconfig
@@ -768,6 +768,14 @@ config CRYPTO_GHASH
GHASH is the hash function used in GCM (Galois/Counter Mode).
It is not a general-purpose cryptographic hash function.

+config CRYPTO_POLYVAL
+ tristate
+ select CRYPTO_GF128MUL
+ select CRYPTO_HASH
+ help
+ POLYVAL is the hash function used in HCTR2. It is not a general-purpose
+ cryptographic hash function.
+
config CRYPTO_POLY1305
tristate "Poly1305 authenticator algorithm"
select CRYPTO_HASH
diff --git a/crypto/Makefile b/crypto/Makefile
index 6b3fe3df1489..561f901a91d4 100644
--- a/crypto/Makefile
+++ b/crypto/Makefile
@@ -169,6 +169,7 @@ UBSAN_SANITIZE_jitterentropy.o = n
jitterentropy_rng-y := jitterentropy.o jitterentropy-kcapi.o
obj-$(CONFIG_CRYPTO_TEST) += tcrypt.o
obj-$(CONFIG_CRYPTO_GHASH) += ghash-generic.o
+obj-$(CONFIG_CRYPTO_POLYVAL) += polyval-generic.o
obj-$(CONFIG_CRYPTO_USER_API) += af_alg.o
obj-$(CONFIG_CRYPTO_USER_API_HASH) += algif_hash.o
obj-$(CONFIG_CRYPTO_USER_API_SKCIPHER) += algif_skcipher.o
diff --git a/crypto/polyval-generic.c b/crypto/polyval-generic.c
new file mode 100644
index 000000000000..bf2b03b7bfc0
--- /dev/null
+++ b/crypto/polyval-generic.c
@@ -0,0 +1,205 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * POLYVAL: hash function for HCTR2.
+ *
+ * Copyright (c) 2007 Nokia Siemens Networks - Mikko Herranen <[email protected]>
+ * Copyright (c) 2009 Intel Corp.
+ * Author: Huang Ying <[email protected]>
+ * Copyright 2021 Google LLC
+ */
+
+/*
+ * Code based on crypto/ghash-generic.c
+ *
+ * POLYVAL is a keyed hash function similar to GHASH. POLYVAL uses a different
+ * modulus for finite field multiplication, which makes hardware accelerated
+ * implementations on little-endian machines faster. POLYVAL is used in the
+ * kernel to implement HCTR2, but was originally specified for AES-GCM-SIV
+ * (RFC 8452).
+ *
+ * For more information see:
+ * Length-preserving encryption with HCTR2:
+ * https://eprint.iacr.org/2021/1441.pdf
+ * AES-GCM-SIV: Nonce Misuse-Resistant Authenticated Encryption:
+ * https://datatracker.ietf.org/doc/html/rfc8452
+ *
+ * Like GHASH, POLYVAL is not a cryptographic hash function and should
+ * not be used outside of crypto modes explicitly designed to use POLYVAL.
+ *
+ * This implementation uses a convenient trick involving the GHASH and POLYVAL
+ * fields. This trick allows multiplication in the POLYVAL field to be
+ * implemented by using multiplication in the GHASH field as a subroutine. An
+ * element of the POLYVAL field can be converted to an element of the GHASH
+ * field by computing x*REVERSE(a), where REVERSE reverses the byte-ordering of
+ * a. Similarly, an element of the GHASH field can be converted back to the
+ * POLYVAL field by computing REVERSE(x^{-1}*a). For more information, see:
+ * https://datatracker.ietf.org/doc/html/rfc8452#appendix-A
+ *
+ * By using this trick, we do not need to implement the POLYVAL field for the
+ * generic implementation.
+ *
+ * Warning: this generic implementation is not intended to be used in practice
+ * and is not constant time. For practical use, a hardware accelerated
+ * implementation of POLYVAL should be used instead.
+ *
+ */
+#include <asm/unaligned.h>
+#include <crypto/algapi.h>
+#include <crypto/gf128mul.h>
+#include <crypto/polyval.h>
+#include <crypto/internal/hash.h>
+#include <linux/crypto.h>
+#include <linux/init.h>
+#include <linux/kernel.h>
+#include <linux/module.h>
+
+struct polyval_tfm_ctx {
+ struct gf128mul_4k *gf128;
+};
+
+struct polyval_desc_ctx {
+ union {
+ u8 buffer[POLYVAL_BLOCK_SIZE];
+ be128 buffer128;
+ };
+ u32 bytes;
+};
+
+static void copy_and_reverse(u8 dst[POLYVAL_BLOCK_SIZE],
+ const u8 src[POLYVAL_BLOCK_SIZE])
+{
+ u64 a = get_unaligned((const u64 *)&src[0]);
+ u64 b = get_unaligned((const u64 *)&src[8]);
+
+ put_unaligned(swab64(a), (u64 *)&dst[8]);
+ put_unaligned(swab64(b), (u64 *)&dst[0]);
+}
+
+static int polyval_setkey(struct crypto_shash *tfm,
+ const u8 *key, unsigned int keylen)
+{
+ struct polyval_tfm_ctx *ctx = crypto_shash_ctx(tfm);
+ be128 k;
+
+ if (keylen != POLYVAL_BLOCK_SIZE)
+ return -EINVAL;
+
+ gf128mul_free_4k(ctx->gf128);
+
+ BUILD_BUG_ON(sizeof(k) != POLYVAL_BLOCK_SIZE);
+ copy_and_reverse((u8 *)&k, key);
+ gf128mul_x_lle(&k, &k);
+
+ ctx->gf128 = gf128mul_init_4k_lle(&k);
+ memzero_explicit(&k, POLYVAL_BLOCK_SIZE);
+
+ if (!ctx->gf128)
+ return -ENOMEM;
+
+ return 0;
+}
+
+static int polyval_init(struct shash_desc *desc)
+{
+ struct polyval_desc_ctx *dctx = shash_desc_ctx(desc);
+
+ memset(dctx, 0, sizeof(*dctx));
+
+ return 0;
+}
+
+static int polyval_update(struct shash_desc *desc,
+ const u8 *src, unsigned int srclen)
+{
+ struct polyval_desc_ctx *dctx = shash_desc_ctx(desc);
+ const struct polyval_tfm_ctx *ctx = crypto_shash_ctx(desc->tfm);
+ u8 *pos;
+ u8 tmp[POLYVAL_BLOCK_SIZE];
+ int n;
+
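+	/*
+	 * dctx->buffer holds the pending block in byte-reversed (GHASH)
+	 * order, so partial input bytes are XORed in from back to front.
+	 */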
+ if (dctx->bytes) {
+ n = min(srclen, dctx->bytes);
+ pos = dctx->buffer + dctx->bytes - 1;
+
+ dctx->bytes -= n;
+ srclen -= n;
+
+ while (n--)
+ *pos-- ^= *src++;
+
+ if (!dctx->bytes)
+ gf128mul_4k_lle(&dctx->buffer128, ctx->gf128);
+ }
+
+ while (srclen >= POLYVAL_BLOCK_SIZE) {
+ copy_and_reverse(tmp, src);
+ crypto_xor(dctx->buffer, tmp, POLYVAL_BLOCK_SIZE);
+ gf128mul_4k_lle(&dctx->buffer128, ctx->gf128);
+ src += POLYVAL_BLOCK_SIZE;
+ srclen -= POLYVAL_BLOCK_SIZE;
+ }
+
+ if (srclen) {
+ dctx->bytes = POLYVAL_BLOCK_SIZE - srclen;
+ pos = dctx->buffer + POLYVAL_BLOCK_SIZE - 1;
+ while (srclen--)
+ *pos-- ^= *src++;
+ }
+
+ return 0;
+}
+
+static int polyval_final(struct shash_desc *desc, u8 *dst)
+{
+ struct polyval_desc_ctx *dctx = shash_desc_ctx(desc);
+ const struct polyval_tfm_ctx *ctx = crypto_shash_ctx(desc->tfm);
+
+ if (dctx->bytes)
+ gf128mul_4k_lle(&dctx->buffer128, ctx->gf128);
+ copy_and_reverse(dst, dctx->buffer);
+ return 0;
+}
+
+static void polyval_exit_tfm(struct crypto_tfm *tfm)
+{
+ struct polyval_tfm_ctx *ctx = crypto_tfm_ctx(tfm);
+
+ gf128mul_free_4k(ctx->gf128);
+}
+
+static struct shash_alg polyval_alg = {
+ .digestsize = POLYVAL_DIGEST_SIZE,
+ .init = polyval_init,
+ .update = polyval_update,
+ .final = polyval_final,
+ .setkey = polyval_setkey,
+ .descsize = sizeof(struct polyval_desc_ctx),
+ .base = {
+ .cra_name = "polyval",
+ .cra_driver_name = "polyval-generic",
+ .cra_priority = 100,
+ .cra_blocksize = POLYVAL_BLOCK_SIZE,
+ .cra_ctxsize = sizeof(struct polyval_tfm_ctx),
+ .cra_module = THIS_MODULE,
+ .cra_exit = polyval_exit_tfm,
+ },
+};
+
+static int __init polyval_mod_init(void)
+{
+ return crypto_register_shash(&polyval_alg);
+}
+
+static void __exit polyval_mod_exit(void)
+{
+ crypto_unregister_shash(&polyval_alg);
+}
+
+subsys_initcall(polyval_mod_init);
+module_exit(polyval_mod_exit);
+
+MODULE_LICENSE("GPL");
+MODULE_DESCRIPTION("POLYVAL hash function");
+MODULE_ALIAS_CRYPTO("polyval");
+MODULE_ALIAS_CRYPTO("polyval-generic");
diff --git a/crypto/tcrypt.c b/crypto/tcrypt.c
index fd671d0e2012..dd9cf216029b 100644
--- a/crypto/tcrypt.c
+++ b/crypto/tcrypt.c
@@ -1730,6 +1730,10 @@ static int do_test(const char *alg, u32 type, u32 mask, int m, u32 num_mb)
ret += tcrypt_test("ccm(sm4)");
break;

+ case 57:
+ ret += tcrypt_test("polyval");
+ break;
+
case 100:
ret += tcrypt_test("hmac(md5)");
break;
diff --git a/crypto/testmgr.c b/crypto/testmgr.c
index fbb12d7d78af..d807b200edf6 100644
--- a/crypto/testmgr.c
+++ b/crypto/testmgr.c
@@ -5284,6 +5284,12 @@ static const struct alg_test_desc alg_test_descs[] = {
.suite = {
.hash = __VECS(poly1305_tv_template)
}
+ }, {
+ .alg = "polyval",
+ .test = alg_test_hash,
+ .suite = {
+ .hash = __VECS(polyval_tv_template)
+ }
}, {
.alg = "rfc3686(ctr(aes))",
.test = alg_test_skcipher,
diff --git a/crypto/testmgr.h b/crypto/testmgr.h
index bf4ff97eeb37..c581e5405916 100644
--- a/crypto/testmgr.h
+++ b/crypto/testmgr.h
@@ -34929,4 +34929,175 @@ static const struct cipher_testvec aes_xctr_tv_template[] = {

};

+/*
+ * Test vectors generated using https://github.com/google/hctr2
+ *
+ * To ensure compatibility with RFC 8452, some tests were sourced from
+ * https://datatracker.ietf.org/doc/html/rfc8452
+ */
+static const struct hash_testvec polyval_tv_template[] = {
+ { // From RFC 8452
+ .key = "\x31\x07\x28\xd9\x91\x1f\x1f\x38"
+ "\x37\xb2\x43\x16\xc3\xfa\xb9\xa0",
+ .plaintext = "\x65\x78\x61\x6d\x70\x6c\x65\x00"
+ "\x00\x00\x00\x00\x00\x00\x00\x00"
+ "\x48\x65\x6c\x6c\x6f\x20\x77\x6f"
+ "\x72\x6c\x64\x00\x00\x00\x00\x00"
+ "\x38\x00\x00\x00\x00\x00\x00\x00"
+ "\x58\x00\x00\x00\x00\x00\x00\x00",
+ .digest = "\xad\x7f\xcf\x0b\x51\x69\x85\x16"
+ "\x62\x67\x2f\x3c\x5f\x95\x13\x8f",
+ .psize = 48,
+ .ksize = 16,
+ },
+ { // From RFC 8452
+ .key = "\xd9\xb3\x60\x27\x96\x94\x94\x1a"
+ "\xc5\xdb\xc6\x98\x7a\xda\x73\x77",
+ .plaintext = "\x00\x00\x00\x00\x00\x00\x00\x00"
+ "\x00\x00\x00\x00\x00\x00\x00\x00",
+ .digest = "\x00\x00\x00\x00\x00\x00\x00\x00"
+ "\x00\x00\x00\x00\x00\x00\x00\x00",
+ .psize = 16,
+ .ksize = 16,
+ },
+ { // From RFC 8452
+ .key = "\xd9\xb3\x60\x27\x96\x94\x94\x1a"
+ "\xc5\xdb\xc6\x98\x7a\xda\x73\x77",
+ .plaintext = "\x01\x00\x00\x00\x00\x00\x00\x00"
+ "\x00\x00\x00\x00\x00\x00\x00\x00"
+ "\x00\x00\x00\x00\x00\x00\x00\x00"
+ "\x40\x00\x00\x00\x00\x00\x00\x00",
+ .digest = "\xeb\x93\xb7\x74\x09\x62\xc5\xe4"
+ "\x9d\x2a\x90\xa7\xdc\x5c\xec\x74",
+ .psize = 32,
+ .ksize = 16,
+ },
+ { // From RFC 8452
+ .key = "\xd9\xb3\x60\x27\x96\x94\x94\x1a"
+ "\xc5\xdb\xc6\x98\x7a\xda\x73\x77",
+ .plaintext = "\x01\x00\x00\x00\x00\x00\x00\x00"
+ "\x00\x00\x00\x00\x00\x00\x00\x00"
+ "\x02\x00\x00\x00\x00\x00\x00\x00"
+ "\x00\x00\x00\x00\x00\x00\x00\x00"
+ "\x03\x00\x00\x00\x00\x00\x00\x00"
+ "\x00\x00\x00\x00\x00\x00\x00\x00"
+ "\x00\x00\x00\x00\x00\x00\x00\x00"
+ "\x80\x01\x00\x00\x00\x00\x00\x00",
+ .digest = "\x81\x38\x87\x46\xbc\x22\xd2\x6b"
+ "\x2a\xbc\x3d\xcb\x15\x75\x42\x22",
+ .psize = 64,
+ .ksize = 16,
+ },
+ { // From RFC 8452
+ .key = "\xd9\xb3\x60\x27\x96\x94\x94\x1a"
+ "\xc5\xdb\xc6\x98\x7a\xda\x73\x77",
+ .plaintext = "\x01\x00\x00\x00\x00\x00\x00\x00"
+ "\x00\x00\x00\x00\x00\x00\x00\x00"
+ "\x02\x00\x00\x00\x00\x00\x00\x00"
+ "\x00\x00\x00\x00\x00\x00\x00\x00"
+ "\x03\x00\x00\x00\x00\x00\x00\x00"
+ "\x00\x00\x00\x00\x00\x00\x00\x00"
+ "\x04\x00\x00\x00\x00\x00\x00\x00"
+ "\x00\x00\x00\x00\x00\x00\x00\x00"
+ "\x00\x00\x00\x00\x00\x00\x00\x00"
+ "\x00\x02\x00\x00\x00\x00\x00\x00",
+ .digest = "\x1e\x39\xb6\xd3\x34\x4d\x34\x8f"
+ "\x60\x44\xf8\x99\x35\xd1\xcf\x78",
+ .psize = 80,
+ .ksize = 16,
+ },
+ { // From RFC 8452
+ .key = "\xd9\xb3\x60\x27\x96\x94\x94\x1a"
+ "\xc5\xdb\xc6\x98\x7a\xda\x73\x77",
+ .plaintext = "\x01\x00\x00\x00\x00\x00\x00\x00"
+ "\x00\x00\x00\x00\x00\x00\x00\x00"
+ "\x02\x00\x00\x00\x00\x00\x00\x00"
+ "\x00\x00\x00\x00\x00\x00\x00\x00"
+ "\x03\x00\x00\x00\x00\x00\x00\x00"
+ "\x00\x00\x00\x00\x00\x00\x00\x00"
+ "\x04\x00\x00\x00\x00\x00\x00\x00"
+ "\x00\x00\x00\x00\x00\x00\x00\x00"
+ "\x05\x00\x00\x00\x00\x00\x00\x00"
+ "\x00\x00\x00\x00\x00\x00\x00\x00"
+ "\x08\x00\x00\x00\x00\x00\x00\x00"
+ "\x00\x02\x00\x00\x00\x00\x00\x00",
+ .digest = "\xff\xcd\x05\xd5\x77\x0f\x34\xad"
+ "\x92\x67\xf0\xa5\x99\x94\xb1\x5a",
+ .psize = 96,
+ .ksize = 16,
+ },
+ { // Random ( 1)
+ .key = "\x90\xcc\xac\xee\xba\xd7\xd4\x68"
+ "\x98\xa6\x79\x70\xdf\x66\x15\x6c",
+ .plaintext = "",
+ .digest = "\x00\x00\x00\x00\x00\x00\x00\x00"
+ "\x00\x00\x00\x00\x00\x00\x00\x00",
+ .psize = 0,
+ .ksize = 16,
+ },
+ { // Random ( 1)
+ .key = "\xc1\x45\x71\xf0\x30\x07\x94\xe7"
+ "\x3a\xdd\xe4\xc6\x19\x2d\x02\xa2",
+ .plaintext = "\xc1\x5d\x47\xc7\x4c\x7c\x5e\x07"
+ "\x85\x14\x8f\x79\xcc\x73\x83\xf7"
+ "\x35\xb8\xcb\x73\x61\xf0\x53\x31"
+ "\xbf\x84\xde\xb6\xde\xaf\xb0\xb8"
+ "\xb7\xd9\x11\x91\x89\xfd\x1e\x4c"
+ "\x84\x4a\x1f\x2a\x87\xa4\xaf\x62"
+ "\x8d\x7d\x58\xf6\x43\x35\xfc\x53"
+ "\x8f\x1a\xf6\x12\xe1\x13\x3f\x66"
+ "\x91\x4b\x13\xd6\x45\xfb\xb0\x7a"
+ "\xe0\x8b\x8e\x99\xf7\x86\x46\x37"
+ "\xd1\x22\x9e\x52\xf3\x3f\xd9\x75"
+ "\x2c\x2c\xc6\xbb\x0e\x08\x14\x29"
+ "\xe8\x50\x2f\xd8\xbe\xf4\xe9\x69"
+ "\x4a\xee\xf7\xae\x15\x65\x35\x1e",
+ .digest = "\x00\x4f\x5d\xe9\x3b\xc0\xd6\x50"
+ "\x3e\x38\x73\x86\xc6\xda\xca\x7f",
+ .psize = 112,
+ .ksize = 16,
+ },
+ { // Random ( 1)
+ .key = "\x37\xbe\x68\x16\x50\xb9\x4e\xb0"
+ "\x47\xde\xe2\xbd\xde\xe4\x48\x09",
+ .plaintext = "\x87\xfc\x68\x9f\xff\xf2\x4a\x1e"
+ "\x82\x3b\x73\x8f\xc1\xb2\x1b\x7a"
+ "\x6c\x4f\x81\xbc\x88\x9b\x6c\xa3"
+ "\x9c\xc2\xa5\xbc\x14\x70\x4c\x9b"
+ "\x0c\x9f\x59\x92\x16\x4b\x91\x3d"
+ "\x18\x55\x22\x68\x12\x8c\x63\xb2"
+ "\x51\xcb\x85\x4b\xd2\xae\x0b\x1c"
+ "\x5d\x28\x9d\x1d\xb1\xc8\xf0\x77"
+ "\xe9\xb5\x07\x4e\x06\xc8\xee\xf8"
+ "\x1b\xed\x72\x2a\x55\x7d\x16\xc9"
+ "\xf2\x54\xe7\xe9\xe0\x44\x5b\x33"
+ "\xb1\x49\xee\xff\x43\xfb\x82\xcd"
+ "\x4a\x70\x78\x81\xa4\x34\x36\xe8"
+ "\x4c\x28\x54\xa6\x6c\xc3\x6b\x78"
+ "\xe7\xc0\x5d\xc6\x5d\x81\xab\x70"
+ "\x08\x86\xa1\xfd\xf4\x77\x55\xfd"
+ "\xa3\xe9\xe2\x1b\xdf\x99\xb7\x80"
+ "\xf9\x0a\x4f\x72\x4a\xd3\xaf\xbb"
+ "\xb3\x3b\xeb\x08\x58\x0f\x79\xce"
+ "\xa5\x99\x05\x12\x34\xd4\xf4\x86"
+ "\x37\x23\x1d\xc8\x49\xc0\x92\xae"
+ "\xa6\xac\x9b\x31\x55\xed\x15\xc6"
+ "\x05\x17\x37\x8d\x90\x42\xe4\x87"
+ "\x89\x62\x88\x69\x1c\x6a\xfd\xe3"
+ "\x00\x2b\x47\x1a\x73\xc1\x51\xc2"
+ "\xc0\x62\x74\x6a\x9e\xb2\xe5\x21"
+ "\xbe\x90\xb5\xb0\x50\xca\x88\x68"
+ "\xe1\x9d\x7a\xdf\x6c\xb7\xb9\x98"
+ "\xee\x28\x62\x61\x8b\xd1\x47\xf9"
+ "\x04\x7a\x0b\x5d\xcd\x2b\x65\xf5"
+ "\x12\xa3\xfe\x1a\xaa\x2c\x78\x42"
+ "\xb8\xbe\x7d\x74\xeb\x59\xba\xba",
+ .digest = "\xae\x11\xd4\x60\x2a\x5f\x9e\x42"
+ "\x89\x04\xc2\x34\x8d\x55\x94\x0a",
+ .psize = 256,
+ .ksize = 16,
+ },
+
+};
+
#endif /* _CRYPTO_TESTMGR_H */
diff --git a/include/crypto/polyval.h b/include/crypto/polyval.h
new file mode 100644
index 000000000000..b14c38aa9166
--- /dev/null
+++ b/include/crypto/polyval.h
@@ -0,0 +1,17 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Common values for the Polyval hash algorithm
+ *
+ * Copyright 2021 Google LLC
+ */
+
+#ifndef _CRYPTO_POLYVAL_H
+#define _CRYPTO_POLYVAL_H
+
+#include <linux/types.h>
+#include <linux/crypto.h>
+
+#define POLYVAL_BLOCK_SIZE 16
+#define POLYVAL_DIGEST_SIZE 16
+
+#endif
--
2.36.1.124.g0e6072fb45-goog


2022-06-10 09:17:20

by Herbert Xu

[permalink] [raw]
Subject: Re: [PATCH v9 0/9] crypto: HCTR2 support

On Fri, May 20, 2022 at 06:14:52PM +0000, Nathan Huckleberry wrote:
> [...]
>
> Changes in v4:
> * Small style fixes in generic POLYVAL and XCTR
> * Move HCTR2 hash exporting/importing to helper functions
> * Rewrite montgomery reduction for x86-64 POLYVAL
> * Rewrite partial block handling for x86-64 POLYVAL
> * Optimize x86-64 POLYVAL loop handling
> * Remove ahash wrapper from x86-64 POLYVAL
> * Add simd-unavailable handling to x86-64 POLYVAL
> * Rewrite montgomery reduction for ARM64 POLYVAL
> * Rewrite partial block handling for ARM64 POLYVAL
> * Optimize ARM64 POLYVAL loop handling
> * Remove ahash wrapper from ARM64 POLYVAL
> * Add simd-unavailable handling to ARM64 POLYVAL
>
> Changes in v3:
> * Improve testvec coverage for XCTR, POLYVAL and HCTR2
> * Fix endianness bug in xctr.c
> * Fix alignment issues in polyval-generic.c
> * Optimize hctr2.c by exporting/importing hash states
> * Fix blockcipher name derivation in hctr2.c
> * Move x86-64 XCTR implementation into aes_ctrby8_avx-x86_64.S
> * Reuse ARM64 CTR mode tail handling in ARM64 XCTR
> * Fix x86-64 POLYVAL comments
> * Fix x86-64 POLYVAL key_powers type to match asm
> * Fix ARM64 POLYVAL comments
> * Fix ARM64 POLYVAL key_powers type to match asm
> * Add XTS + HCTR2 policy to fscrypt
>
> Nathan Huckleberry (9):
> crypto: xctr - Add XCTR support
> crypto: polyval - Add POLYVAL support
> crypto: hctr2 - Add HCTR2 support
> crypto: x86/aesni-xctr: Add accelerated implementation of XCTR
> crypto: arm64/aes-xctr: Add accelerated implementation of XCTR
> crypto: arm64/aes-xctr: Improve readability of XCTR and CTR modes
> crypto: x86/polyval: Add PCLMULQDQ accelerated implementation of
> POLYVAL
> crypto: arm64/polyval: Add PMULL accelerated implementation of POLYVAL
> fscrypt: Add HCTR2 support for filename encryption
>
> Documentation/filesystems/fscrypt.rst | 22 +-
> arch/arm64/crypto/Kconfig | 9 +-
> arch/arm64/crypto/Makefile | 3 +
> arch/arm64/crypto/aes-glue.c | 80 +-
> arch/arm64/crypto/aes-modes.S | 349 +++--
> arch/arm64/crypto/polyval-ce-core.S | 361 ++++++
> arch/arm64/crypto/polyval-ce-glue.c | 191 +++
> arch/x86/crypto/Makefile | 3 +
> arch/x86/crypto/aes_ctrby8_avx-x86_64.S | 232 ++--
> arch/x86/crypto/aesni-intel_glue.c | 114 +-
> arch/x86/crypto/polyval-clmulni_asm.S | 321 +++++
> arch/x86/crypto/polyval-clmulni_glue.c | 203 +++
> crypto/Kconfig | 39 +-
> crypto/Makefile | 3 +
> crypto/hctr2.c | 581 +++++++++
> crypto/polyval-generic.c | 245 ++++
> crypto/tcrypt.c | 10 +
> crypto/testmgr.c | 20 +
> crypto/testmgr.h | 1536 +++++++++++++++++++++++
> crypto/xctr.c | 191 +++
> fs/crypto/fscrypt_private.h | 2 +-
> fs/crypto/keysetup.c | 7 +
> fs/crypto/policy.c | 14 +-
> include/crypto/polyval.h | 22 +
> include/uapi/linux/fscrypt.h | 3 +-
> 25 files changed, 4362 insertions(+), 199 deletions(-)
> create mode 100644 arch/arm64/crypto/polyval-ce-core.S
> create mode 100644 arch/arm64/crypto/polyval-ce-glue.c
> create mode 100644 arch/x86/crypto/polyval-clmulni_asm.S
> create mode 100644 arch/x86/crypto/polyval-clmulni_glue.c
> create mode 100644 crypto/hctr2.c
> create mode 100644 crypto/polyval-generic.c
> create mode 100644 crypto/xctr.c
> create mode 100644 include/crypto/polyval.h
>
> --
> 2.36.1.124.g0e6072fb45-goog

All applied. Thanks.
--
Email: Herbert Xu <[email protected]>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt