2018-02-08 00:10:58

by Eric Biggers

[permalink] [raw]
Subject: [PATCH 0/5] crypto: Speck support

Hello,

This series adds Speck support to the crypto API, including the Speck128
and Speck64 variants. Speck is a lightweight block cipher that can be
much faster than AES on processors that don't have AES instructions.

We are planning to offer Speck-XTS (probably Speck128/256-XTS) as an
option for dm-crypt and fscrypt on Android, for low-end mobile devices
with older CPUs such as ARMv7 which don't have the Cryptography
Extensions. Currently, such devices are unencrypted because AES is not
fast enough, even when the NEON bit-sliced implementation of AES is
used. Other AES alternatives such as Blowfish, Twofish, Camellia,
Cast6, and Serpent aren't fast enough either; it seems that only a
modern ARX cipher can provide sufficient performance on these devices.

This is a replacement for our original proposal
(https://patchwork.kernel.org/patch/10101451/) which was to offer
ChaCha20 for these devices. However, the use of a stream cipher for
disk/file encryption with no space to store nonces would have been much
more insecure than we thought initially, given that it would be used on
top of flash storage as well as potentially on top of F2FS, neither of
which is guaranteed to overwrite data in-place.

Speck has been somewhat controversial due to its origin. Nevertheless,
it has a straightforward design (it's an ARX cipher), and it appears to
be the leading software-optimized lightweight block cipher currently,
with the most cryptanalysis. It's also easy to implement without side
channels, unlike AES. Moreover, we only intend Speck to be used when
the status quo is no encryption, due to AES not being fast enough.

We've also considered a novel length-preserving encryption mode based on
ChaCha20 and Poly1305. While theoretically attractive, such a mode
would be a brand new crypto construction and would be more complicated
and difficult to implement efficiently in comparison to Speck-XTS.

Thus, patch 1 adds a generic implementation of Speck, and the following
patches add a 32-bit ARM NEON implementation of Speck-XTS. The
NEON-accelerated implementation is much faster than the generic
implementation and therefore is the implementation that would primarily
be used in practice on the devices we are targeting.

There is no AArch64 implementation added, since such CPUs are likely to
have the Cryptography Extensions, allowing the use of AES.

Eric Biggers (5):
crypto: add support for the Speck block cipher
crypto: speck - export common helpers
crypto: arm/speck: add NEON-accelerated implementation of Speck-XTS
crypto: speck - add test vectors for Speck128-XTS
crypto: speck - add test vectors for Speck64-XTS

arch/arm/crypto/Kconfig | 6 +
arch/arm/crypto/Makefile | 2 +
arch/arm/crypto/speck-neon-core.S | 431 +++++++++++
arch/arm/crypto/speck-neon-glue.c | 290 ++++++++
crypto/Kconfig | 14 +
crypto/Makefile | 1 +
crypto/speck.c | 302 ++++++++
crypto/testmgr.c | 36 +
crypto/testmgr.h | 1478 +++++++++++++++++++++++++++++++++++++
include/crypto/speck.h | 62 ++
10 files changed, 2622 insertions(+)
create mode 100644 arch/arm/crypto/speck-neon-core.S
create mode 100644 arch/arm/crypto/speck-neon-glue.c
create mode 100644 crypto/speck.c
create mode 100644 include/crypto/speck.h

--
2.16.0.rc1.238.g530d649a79-goog


2018-02-08 00:11:03

by Eric Biggers

[permalink] [raw]
Subject: [PATCH 1/5] crypto: add support for the Speck block cipher

Add a generic implementation of Speck, including the Speck128 and
Speck64 variants. Speck is a lightweight block cipher that can be much
faster than AES on processors that don't have AES instructions.

We are planning to offer Speck-XTS (probably Speck128/256-XTS) as an
option for dm-crypt and fscrypt on Android, for low-end mobile devices
with older CPUs such as ARMv7 which don't have the Cryptography
Extensions. Currently, such devices are unencrypted because AES is not
fast enough, even when the NEON bit-sliced implementation of AES is
used. Other AES alternatives such as Blowfish, Twofish, Camellia,
Cast6, and Serpent aren't fast enough either; it seems that only a
modern ARX cipher can provide sufficient performance on these devices.

This is a replacement for our original proposal
(https://patchwork.kernel.org/patch/10101451/) which was to offer
ChaCha20 for these devices. However, the use of a stream cipher for
disk/file encryption with no space to store nonces would have been much
more insecure than we thought initially, given that it would be used on
top of flash storage as well as potentially on top of F2FS, neither of
which is guaranteed to overwrite data in-place.

Speck has been somewhat controversial due to its origin. Nevertheless,
it has a straightforward design (it's an ARX cipher), and it appears to
be the leading software-optimized lightweight block cipher currently,
with the most cryptanalysis. It's also easy to implement without side
channels, unlike AES. Moreover, we only intend Speck to be used when
the status quo is no encryption, due to AES not being fast enough.

We've also considered a novel length-preserving encryption mode based on
ChaCha20 and Poly1305. While theoretically attractive, such a mode
would be a brand new crypto construction and would be more complicated
and difficult to implement efficiently in comparison to Speck-XTS.

Signed-off-by: Eric Biggers <[email protected]>
---
crypto/Kconfig | 14 +++
crypto/Makefile | 1 +
crypto/speck.c | 294 +++++++++++++++++++++++++++++++++++++++++++++++++++++++
crypto/testmgr.c | 18 ++++
crypto/testmgr.h | 120 +++++++++++++++++++++++
5 files changed, 447 insertions(+)
create mode 100644 crypto/speck.c

diff --git a/crypto/Kconfig b/crypto/Kconfig
index b75264b09a46..558eff07b799 100644
--- a/crypto/Kconfig
+++ b/crypto/Kconfig
@@ -1508,6 +1508,20 @@ config CRYPTO_SERPENT_AVX2_X86_64
See also:
<http://www.cl.cam.ac.uk/~rja14/serpent.html>

+config CRYPTO_SPECK
+ tristate "Speck cipher algorithm"
+ select CRYPTO_ALGAPI
+ help
+ Speck is a lightweight block cipher that is tuned for optimal
+ performance in software (rather than hardware).
+
+ Speck may not be as secure as AES, and should only be used on systems
+ where AES is not fast enough.
+
+ See also: <https://eprint.iacr.org/2013/404.pdf>
+
+ If unsure, say N.
+
config CRYPTO_TEA
tristate "TEA, XTEA and XETA cipher algorithms"
select CRYPTO_ALGAPI
diff --git a/crypto/Makefile b/crypto/Makefile
index cdbc03b35510..ba6019471447 100644
--- a/crypto/Makefile
+++ b/crypto/Makefile
@@ -110,6 +110,7 @@ obj-$(CONFIG_CRYPTO_TEA) += tea.o
obj-$(CONFIG_CRYPTO_KHAZAD) += khazad.o
obj-$(CONFIG_CRYPTO_ANUBIS) += anubis.o
obj-$(CONFIG_CRYPTO_SEED) += seed.o
+obj-$(CONFIG_CRYPTO_SPECK) += speck.o
obj-$(CONFIG_CRYPTO_SALSA20) += salsa20_generic.o
obj-$(CONFIG_CRYPTO_CHACHA20) += chacha20_generic.o
obj-$(CONFIG_CRYPTO_POLY1305) += poly1305_generic.o
diff --git a/crypto/speck.c b/crypto/speck.c
new file mode 100644
index 000000000000..89860688bf00
--- /dev/null
+++ b/crypto/speck.c
@@ -0,0 +1,294 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Speck: a lightweight block cipher
+ *
+ * Copyright (c) 2018 Google, Inc
+ *
+ * Speck has 10 variants, including 5 block sizes. For now we only implement
+ * the variants Speck128/128, Speck128/192, Speck128/256, Speck64/96, and
+ * Speck64/128. Speck${B}/${K} denotes the variant with a block size of B bits
+ * and a key size of K bits. The Speck128 variants are believed to be the most
+ * secure variants, and they use the same block size and key sizes as AES. The
+ * Speck64 variants are less secure, but on 32-bit processors are usually
+ * faster. The remaining variants (Speck32, Speck48, and Speck96) are even less
+ * secure and/or not as well suited for implementation on either 32-bit or
+ * 64-bit processors, so are omitted.
+ *
+ * Reference: "The Simon and Speck Families of Lightweight Block Ciphers"
+ * https://eprint.iacr.org/2013/404.pdf
+ */
+
+#include <asm/unaligned.h>
+#include <linux/bitops.h>
+#include <linux/crypto.h>
+#include <linux/init.h>
+#include <linux/module.h>
+
+/* Speck128 */
+
+#define SPECK128_BLOCK_SIZE 16
+
+#define SPECK128_128_KEY_SIZE 16
+#define SPECK128_128_NROUNDS 32
+
+#define SPECK128_192_KEY_SIZE 24
+#define SPECK128_192_NROUNDS 33
+
+#define SPECK128_256_KEY_SIZE 32
+#define SPECK128_256_NROUNDS 34
+
+struct speck128_tfm_ctx {
+ u64 round_keys[SPECK128_256_NROUNDS];
+ int nrounds;
+};
+
+static __always_inline void speck128_round(u64 *x, u64 *y, u64 k)
+{
+ *x = ror64(*x, 8);
+ *x += *y;
+ *x ^= k;
+ *y = rol64(*y, 3);
+ *y ^= *x;
+}
+
+static __always_inline void speck128_unround(u64 *x, u64 *y, u64 k)
+{
+ *y ^= *x;
+ *y = ror64(*y, 3);
+ *x ^= k;
+ *x -= *y;
+ *x = rol64(*x, 8);
+}
+
+static void speck128_encrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in)
+{
+ const struct speck128_tfm_ctx *ctx = crypto_tfm_ctx(tfm);
+ u64 x = get_unaligned_le64(in + 0);
+ u64 y = get_unaligned_le64(in + 8);
+ int i;
+
+ for (i = 0; i < ctx->nrounds; i++)
+ speck128_round(&x, &y, ctx->round_keys[i]);
+
+ put_unaligned_le64(x, out + 0);
+ put_unaligned_le64(y, out + 8);
+}
+
+static void speck128_decrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in)
+{
+ const struct speck128_tfm_ctx *ctx = crypto_tfm_ctx(tfm);
+ u64 x = get_unaligned_le64(in + 0);
+ u64 y = get_unaligned_le64(in + 8);
+ int i;
+
+ for (i = ctx->nrounds - 1; i >= 0; i--)
+ speck128_unround(&x, &y, ctx->round_keys[i]);
+
+ put_unaligned_le64(x, out + 0);
+ put_unaligned_le64(y, out + 8);
+}
+
+static int speck128_setkey(struct crypto_tfm *tfm, const u8 *key,
+ unsigned int keylen)
+{
+ struct speck128_tfm_ctx *ctx = crypto_tfm_ctx(tfm);
+ u64 l[3];
+ u64 k;
+ int i;
+
+ switch (keylen) {
+ case SPECK128_128_KEY_SIZE:
+ l[0] = get_unaligned_le64(key + 0);
+ k = get_unaligned_le64(key + 8);
+ ctx->nrounds = SPECK128_128_NROUNDS;
+ for (i = 0; i < ctx->nrounds; i++) {
+ ctx->round_keys[i] = k;
+ speck128_round(&l[0], &k, i);
+ }
+ break;
+ case SPECK128_192_KEY_SIZE:
+ l[1] = get_unaligned_le64(key + 0);
+ l[0] = get_unaligned_le64(key + 8);
+ k = get_unaligned_le64(key + 16);
+ ctx->nrounds = SPECK128_192_NROUNDS;
+ for (i = 0; i < ctx->nrounds; i++) {
+ ctx->round_keys[i] = k;
+ speck128_round(&l[i % 2], &k, i);
+ }
+ break;
+ case SPECK128_256_KEY_SIZE:
+ l[2] = get_unaligned_le64(key + 0);
+ l[1] = get_unaligned_le64(key + 8);
+ l[0] = get_unaligned_le64(key + 16);
+ k = get_unaligned_le64(key + 24);
+ ctx->nrounds = SPECK128_256_NROUNDS;
+ for (i = 0; i < ctx->nrounds; i++) {
+ ctx->round_keys[i] = k;
+ speck128_round(&l[i % 3], &k, i);
+ }
+ break;
+ default:
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+/* Speck64 */
+
+#define SPECK64_BLOCK_SIZE 8
+
+#define SPECK64_96_KEY_SIZE 12
+#define SPECK64_96_NROUNDS 26
+
+#define SPECK64_128_KEY_SIZE 16
+#define SPECK64_128_NROUNDS 27
+
+struct speck64_tfm_ctx {
+ u32 round_keys[SPECK64_128_NROUNDS];
+ int nrounds;
+};
+
+static __always_inline void speck64_round(u32 *x, u32 *y, u32 k)
+{
+ *x = ror32(*x, 8);
+ *x += *y;
+ *x ^= k;
+ *y = rol32(*y, 3);
+ *y ^= *x;
+}
+
+static __always_inline void speck64_unround(u32 *x, u32 *y, u32 k)
+{
+ *y ^= *x;
+ *y = ror32(*y, 3);
+ *x ^= k;
+ *x -= *y;
+ *x = rol32(*x, 8);
+}
+
+static void speck64_encrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in)
+{
+ const struct speck64_tfm_ctx *ctx = crypto_tfm_ctx(tfm);
+ u32 x = get_unaligned_le32(in + 0);
+ u32 y = get_unaligned_le32(in + 4);
+ int i;
+
+ for (i = 0; i < ctx->nrounds; i++)
+ speck64_round(&x, &y, ctx->round_keys[i]);
+
+ put_unaligned_le32(x, out + 0);
+ put_unaligned_le32(y, out + 4);
+}
+
+static void speck64_decrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in)
+{
+ const struct speck64_tfm_ctx *ctx = crypto_tfm_ctx(tfm);
+ u32 x = get_unaligned_le32(in + 0);
+ u32 y = get_unaligned_le32(in + 4);
+ int i;
+
+ for (i = ctx->nrounds - 1; i >= 0; i--)
+ speck64_unround(&x, &y, ctx->round_keys[i]);
+
+ put_unaligned_le32(x, out + 0);
+ put_unaligned_le32(y, out + 4);
+}
+
+static int speck64_setkey(struct crypto_tfm *tfm, const u8 *key,
+ unsigned int keylen)
+{
+ struct speck64_tfm_ctx *ctx = crypto_tfm_ctx(tfm);
+ u32 l[3];
+ u32 k;
+ int i;
+
+ switch (keylen) {
+ case SPECK64_96_KEY_SIZE:
+ l[1] = get_unaligned_le32(key + 0);
+ l[0] = get_unaligned_le32(key + 4);
+ k = get_unaligned_le32(key + 8);
+ ctx->nrounds = SPECK64_96_NROUNDS;
+ for (i = 0; i < ctx->nrounds; i++) {
+ ctx->round_keys[i] = k;
+ speck64_round(&l[i % 2], &k, i);
+ }
+ break;
+ case SPECK64_128_KEY_SIZE:
+ l[2] = get_unaligned_le32(key + 0);
+ l[1] = get_unaligned_le32(key + 4);
+ l[0] = get_unaligned_le32(key + 8);
+ k = get_unaligned_le32(key + 12);
+ ctx->nrounds = SPECK64_128_NROUNDS;
+ for (i = 0; i < ctx->nrounds; i++) {
+ ctx->round_keys[i] = k;
+ speck64_round(&l[i % 3], &k, i);
+ }
+ break;
+ default:
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+/* Algorithm definitions */
+
+static struct crypto_alg speck_algs[] = {
+ {
+ .cra_name = "speck128",
+ .cra_driver_name = "speck128-generic",
+ .cra_priority = 100,
+ .cra_flags = CRYPTO_ALG_TYPE_CIPHER,
+ .cra_blocksize = SPECK128_BLOCK_SIZE,
+ .cra_ctxsize = sizeof(struct speck128_tfm_ctx),
+ .cra_module = THIS_MODULE,
+ .cra_u = {
+ .cipher = {
+ .cia_min_keysize = SPECK128_128_KEY_SIZE,
+ .cia_max_keysize = SPECK128_256_KEY_SIZE,
+ .cia_setkey = speck128_setkey,
+ .cia_encrypt = speck128_encrypt,
+ .cia_decrypt = speck128_decrypt
+ }
+ }
+ }, {
+ .cra_name = "speck64",
+ .cra_driver_name = "speck64-generic",
+ .cra_priority = 100,
+ .cra_flags = CRYPTO_ALG_TYPE_CIPHER,
+ .cra_blocksize = SPECK64_BLOCK_SIZE,
+ .cra_ctxsize = sizeof(struct speck64_tfm_ctx),
+ .cra_module = THIS_MODULE,
+ .cra_u = {
+ .cipher = {
+ .cia_min_keysize = SPECK64_96_KEY_SIZE,
+ .cia_max_keysize = SPECK64_128_KEY_SIZE,
+ .cia_setkey = speck64_setkey,
+ .cia_encrypt = speck64_encrypt,
+ .cia_decrypt = speck64_decrypt
+ }
+ }
+ }
+};
+
+static int __init speck_module_init(void)
+{
+ return crypto_register_algs(speck_algs, ARRAY_SIZE(speck_algs));
+}
+
+static void __exit speck_module_exit(void)
+{
+ crypto_unregister_algs(speck_algs, ARRAY_SIZE(speck_algs));
+}
+
+module_init(speck_module_init);
+module_exit(speck_module_exit);
+
+MODULE_DESCRIPTION("Speck block cipher (generic)");
+MODULE_LICENSE("GPL");
+MODULE_AUTHOR("Eric Biggers <[email protected]>");
+MODULE_ALIAS_CRYPTO("speck128");
+MODULE_ALIAS_CRYPTO("speck128-generic");
+MODULE_ALIAS_CRYPTO("speck64");
+MODULE_ALIAS_CRYPTO("speck64-generic");
diff --git a/crypto/testmgr.c b/crypto/testmgr.c
index d5e23a142a04..d5be42149e29 100644
--- a/crypto/testmgr.c
+++ b/crypto/testmgr.c
@@ -3000,6 +3000,24 @@ static const struct alg_test_desc alg_test_descs[] = {
.dec = __VECS(serpent_dec_tv_template)
}
}
+ }, {
+ .alg = "ecb(speck128)",
+ .test = alg_test_skcipher,
+ .suite = {
+ .cipher = {
+ .enc = __VECS(speck128_enc_tv_template),
+ .dec = __VECS(speck128_dec_tv_template),
+ }
+ }
+ }, {
+ .alg = "ecb(speck64)",
+ .test = alg_test_skcipher,
+ .suite = {
+ .cipher = {
+ .enc = __VECS(speck64_enc_tv_template),
+ .dec = __VECS(speck64_dec_tv_template),
+ }
+ }
}, {
.alg = "ecb(tea)",
.test = alg_test_skcipher,
diff --git a/crypto/testmgr.h b/crypto/testmgr.h
index 6044f6906bd6..255de47f1d20 100644
--- a/crypto/testmgr.h
+++ b/crypto/testmgr.h
@@ -14323,6 +14323,126 @@ static const struct cipher_testvec serpent_xts_dec_tv_template[] = {
},
};

+/*
+ * Speck test vectors taken from the original paper:
+ * "The Simon and Speck Families of Lightweight Block Ciphers"
+ * https://eprint.iacr.org/2013/404.pdf
+ */
+
+static const struct cipher_testvec speck128_enc_tv_template[] = {
+ { /* Speck128/128 */
+ .key = "\x08\x09\x0a\x0b\x0c\x0d\x0e\x0f"
+ "\x00\x01\x02\x03\x04\x05\x06\x07",
+ .klen = 16,
+ .input = "\x20\x65\x71\x75\x69\x76\x61\x6c"
+ "\x20\x6d\x61\x64\x65\x20\x69\x74",
+ .ilen = 16,
+ .result = "\x65\x32\x78\x79\x51\x98\x5d\xa6"
+ "\x18\x0d\x57\x5c\xdf\xfe\x60\x78",
+ .rlen = 16,
+ }, { /* Speck128/192 */
+ .key = "\x10\x11\x12\x13\x14\x15\x16\x17"
+ "\x08\x09\x0a\x0b\x0c\x0d\x0e\x0f"
+ "\x00\x01\x02\x03\x04\x05\x06\x07",
+ .klen = 24,
+ .input = "\x68\x69\x65\x66\x20\x48\x61\x72"
+ "\x65\x6e\x74\x20\x74\x6f\x20\x43",
+ .ilen = 16,
+ .result = "\x66\x55\x13\x13\x3a\xcf\xe4\x1b"
+ "\x86\x18\x3c\xe0\x5d\x18\xbc\xf9",
+ .rlen = 16,
+ }, { /* Speck128/256 */
+ .key = "\x18\x19\x1a\x1b\x1c\x1d\x1e\x1f"
+ "\x10\x11\x12\x13\x14\x15\x16\x17"
+ "\x08\x09\x0a\x0b\x0c\x0d\x0e\x0f"
+ "\x00\x01\x02\x03\x04\x05\x06\x07",
+ .klen = 32,
+ .input = "\x49\x6e\x20\x74\x68\x6f\x73\x65"
+ "\x70\x6f\x6f\x6e\x65\x72\x2e\x20",
+ .ilen = 16,
+ .result = "\x3e\xf5\xc0\x05\x04\x01\x09\x41"
+ "\x43\x8f\x18\x9c\x8d\xb4\xee\x4e",
+ .rlen = 16,
+ },
+};
+
+static const struct cipher_testvec speck128_dec_tv_template[] = {
+ { /* Speck128/128 */
+ .key = "\x08\x09\x0a\x0b\x0c\x0d\x0e\x0f"
+ "\x00\x01\x02\x03\x04\x05\x06\x07",
+ .klen = 16,
+ .input = "\x65\x32\x78\x79\x51\x98\x5d\xa6"
+ "\x18\x0d\x57\x5c\xdf\xfe\x60\x78",
+ .ilen = 16,
+ .result = "\x20\x65\x71\x75\x69\x76\x61\x6c"
+ "\x20\x6d\x61\x64\x65\x20\x69\x74",
+ .rlen = 16,
+ }, { /* Speck128/192 */
+ .key = "\x10\x11\x12\x13\x14\x15\x16\x17"
+ "\x08\x09\x0a\x0b\x0c\x0d\x0e\x0f"
+ "\x00\x01\x02\x03\x04\x05\x06\x07",
+ .klen = 24,
+ .input = "\x66\x55\x13\x13\x3a\xcf\xe4\x1b"
+ "\x86\x18\x3c\xe0\x5d\x18\xbc\xf9",
+ .ilen = 16,
+ .result = "\x68\x69\x65\x66\x20\x48\x61\x72"
+ "\x65\x6e\x74\x20\x74\x6f\x20\x43",
+ .rlen = 16,
+ }, { /* Speck128/256 */
+ .key = "\x18\x19\x1a\x1b\x1c\x1d\x1e\x1f"
+ "\x10\x11\x12\x13\x14\x15\x16\x17"
+ "\x08\x09\x0a\x0b\x0c\x0d\x0e\x0f"
+ "\x00\x01\x02\x03\x04\x05\x06\x07",
+ .klen = 32,
+ .input = "\x3e\xf5\xc0\x05\x04\x01\x09\x41"
+ "\x43\x8f\x18\x9c\x8d\xb4\xee\x4e",
+ .ilen = 16,
+ .result = "\x49\x6e\x20\x74\x68\x6f\x73\x65"
+ "\x70\x6f\x6f\x6e\x65\x72\x2e\x20",
+ .rlen = 16,
+ },
+};
+
+static const struct cipher_testvec speck64_enc_tv_template[] = {
+ { /* Speck64/96 */
+ .key = "\x10\x11\x12\x13\x08\x09\x0a\x0b"
+ "\x00\x01\x02\x03",
+ .klen = 12,
+ .input = "\x20\x46\x61\x74\x65\x61\x6e\x73",
+ .ilen = 8,
+ .result = "\xec\x52\x79\x9f\x6c\x94\x75\x41",
+ .rlen = 8,
+ }, { /* Speck64/128 */
+ .key = "\x18\x19\x1a\x1b\x10\x11\x12\x13"
+ "\x08\x09\x0a\x0b\x00\x01\x02\x03",
+ .klen = 16,
+ .input = "\x74\x65\x72\x3b\x2d\x43\x75\x74",
+ .ilen = 8,
+ .result = "\x48\xa5\x6f\x8c\x8b\x02\x4e\x45",
+ .rlen = 8,
+ },
+};
+
+static const struct cipher_testvec speck64_dec_tv_template[] = {
+ { /* Speck64/96 */
+ .key = "\x10\x11\x12\x13\x08\x09\x0a\x0b"
+ "\x00\x01\x02\x03",
+ .klen = 12,
+ .input = "\xec\x52\x79\x9f\x6c\x94\x75\x41",
+ .ilen = 8,
+ .result = "\x20\x46\x61\x74\x65\x61\x6e\x73",
+ .rlen = 8,
+ }, { /* Speck64/128 */
+ .key = "\x18\x19\x1a\x1b\x10\x11\x12\x13"
+ "\x08\x09\x0a\x0b\x00\x01\x02\x03",
+ .klen = 16,
+ .input = "\x48\xa5\x6f\x8c\x8b\x02\x4e\x45",
+ .ilen = 8,
+ .result = "\x74\x65\x72\x3b\x2d\x43\x75\x74",
+ .rlen = 8,
+ },
+};
+
/* Cast6 test vectors from RFC 2612 */
static const struct cipher_testvec cast6_enc_tv_template[] = {
{
--
2.16.0.rc1.238.g530d649a79-goog

2018-02-08 00:11:09

by Eric Biggers

[permalink] [raw]
Subject: [PATCH 3/5] crypto: arm/speck: add NEON-accelerated implementation of Speck-XTS

Add an ARM NEON-accelerated implementation of Speck-XTS. It operates on
128-byte chunks at a time, i.e. 8 blocks for Speck128 or 16 blocks for
Speck64. Each 128-byte chunk goes through XTS preprocessing, then is
encrypted/decrypted (doing one cipher round for all the blocks, then the
next round, etc.), then goes through XTS postprocessing.

The performance depends on the processor but generally is 2-3 times
faster than the generic code. For example, on an ARMv7 processor we
observe the following performance with Speck128/128-XTS:

xts-speck128-neon: Encryption 103.2 MB/s, Decryption 103.3 MB/s
xts(speck128-generic): Encryption 28.9 MB/s, Decryption 37.1 MB/s

In comparison to AES-128-XTS without the cryptographic extensions:

xts-aes-neonbs: Encryption 50.7 MB/s, Decryption 45.4 MB/s
xts(aes-asm): Encryption 37.9 MB/s, Decryption 36.4 MB/s
xts(aes-generic): Encryption 24.7 MB/s, Decryption 24.6 MB/s

Speck64/128-XTS is even faster:

xts-speck64-neon: Encryption 125.5 MB/s, Decryption 126.1 MB/s

Note that as with the generic code, only the Speck128 and Speck64
variants are supported. Also, for now only the XTS mode of operation is
supported, to target the disk and file encryption use cases. The NEON
code also only handles the portion of the data that is evenly divisible
into 128-byte chunks, with any remainder handled by a C fallback. Of
course, other modes of operation could be added later if needed, and/or
the NEON code could be updated to handle other buffer sizes.

The XTS specification is only defined for AES which has a 128-bit block
size, so for the GF(2^64) math needed for Speck64-XTS we use the
reducing polynomial 'x^64 + x^4 + x^3 + x + 1' given by the original XEX
paper. Of course, when possible users should use Speck128-XTS, but even
that may be too slow on some processors; Speck64-XTS can be faster.

Signed-off-by: Eric Biggers <[email protected]>
---
arch/arm/crypto/Kconfig | 6 +
arch/arm/crypto/Makefile | 2 +
arch/arm/crypto/speck-neon-core.S | 431 ++++++++++++++++++++++++++++++++++++++
arch/arm/crypto/speck-neon-glue.c | 290 +++++++++++++++++++++++++
4 files changed, 729 insertions(+)
create mode 100644 arch/arm/crypto/speck-neon-core.S
create mode 100644 arch/arm/crypto/speck-neon-glue.c

diff --git a/arch/arm/crypto/Kconfig b/arch/arm/crypto/Kconfig
index b8e69fe282b8..925d1364727a 100644
--- a/arch/arm/crypto/Kconfig
+++ b/arch/arm/crypto/Kconfig
@@ -121,4 +121,10 @@ config CRYPTO_CHACHA20_NEON
select CRYPTO_BLKCIPHER
select CRYPTO_CHACHA20

+config CRYPTO_SPECK_NEON
+ tristate "NEON accelerated Speck cipher algorithms"
+ depends on KERNEL_MODE_NEON
+ select CRYPTO_BLKCIPHER
+ select CRYPTO_SPECK
+
endif
diff --git a/arch/arm/crypto/Makefile b/arch/arm/crypto/Makefile
index 30ef8e291271..a758107c5525 100644
--- a/arch/arm/crypto/Makefile
+++ b/arch/arm/crypto/Makefile
@@ -10,6 +10,7 @@ obj-$(CONFIG_CRYPTO_SHA1_ARM_NEON) += sha1-arm-neon.o
obj-$(CONFIG_CRYPTO_SHA256_ARM) += sha256-arm.o
obj-$(CONFIG_CRYPTO_SHA512_ARM) += sha512-arm.o
obj-$(CONFIG_CRYPTO_CHACHA20_NEON) += chacha20-neon.o
+obj-$(CONFIG_CRYPTO_SPECK_NEON) += speck-neon.o

ce-obj-$(CONFIG_CRYPTO_AES_ARM_CE) += aes-arm-ce.o
ce-obj-$(CONFIG_CRYPTO_SHA1_ARM_CE) += sha1-arm-ce.o
@@ -53,6 +54,7 @@ ghash-arm-ce-y := ghash-ce-core.o ghash-ce-glue.o
crct10dif-arm-ce-y := crct10dif-ce-core.o crct10dif-ce-glue.o
crc32-arm-ce-y:= crc32-ce-core.o crc32-ce-glue.o
chacha20-neon-y := chacha20-neon-core.o chacha20-neon-glue.o
+speck-neon-y := speck-neon-core.o speck-neon-glue.o

quiet_cmd_perl = PERL $@
cmd_perl = $(PERL) $(<) > $(@)
diff --git a/arch/arm/crypto/speck-neon-core.S b/arch/arm/crypto/speck-neon-core.S
new file mode 100644
index 000000000000..3c0829caa302
--- /dev/null
+++ b/arch/arm/crypto/speck-neon-core.S
@@ -0,0 +1,431 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * NEON-accelerated implementation of Speck128-XTS and Speck64-XTS
+ *
+ * Copyright (c) 2018 Google, Inc
+ *
+ * Author: Eric Biggers <[email protected]>
+ */
+
+#include <linux/linkage.h>
+
+ .text
+ .fpu neon
+
+ // arguments
+ ROUND_KEYS .req r0 // const {u64,u32} *round_keys
+ NROUNDS .req r1 // int nrounds
+ DST .req r2 // void *dst
+ SRC .req r3 // const void *src
+ NBYTES .req r4 // unsigned int nbytes
+ TWEAK .req r5 // void *tweak
+
+ // registers which hold the data being encrypted/decrypted
+ X0 .req q0
+ X0_L .req d0
+ X0_H .req d1
+ Y0 .req q1
+ Y0_L .req d2
+ X1 .req q2
+ X1_L .req d4
+ X1_H .req d5
+ Y1 .req q3
+ Y1_L .req d6
+ X2 .req q4
+ X2_L .req d8
+ X2_H .req d9
+ Y2 .req q5
+ Y2_L .req d10
+ X3 .req q6
+ X3_L .req d12
+ X3_H .req d13
+ Y3 .req q7
+ Y3_L .req d14
+
+ // the round key, duplicated in all lanes
+ ROUND_KEY .req q8
+ ROUND_KEY_L .req d16
+ ROUND_KEY_H .req d17
+
+ // index vector for vtbl-based 8-bit rotates
+ ROTATE_TABLE .req d18
+
+ // multiplication table for updating XTS tweaks
+ GF128MUL_TABLE .req d19
+ GF64MUL_TABLE .req d19
+
+ // current XTS tweak value(s)
+ TWEAKV .req q10
+ TWEAKV_L .req d20
+ TWEAKV_H .req d21
+
+ TMP0 .req q12
+ TMP0_L .req d24
+ TMP0_H .req d25
+ TMP1 .req q13
+ TMP2 .req q14
+ TMP3 .req q15
+
+ .align 4
+.Lror64_8_table:
+ .byte 1, 2, 3, 4, 5, 6, 7, 0
+.Lror32_8_table:
+ .byte 1, 2, 3, 0, 5, 6, 7, 4
+.Lrol64_8_table:
+ .byte 7, 0, 1, 2, 3, 4, 5, 6
+.Lrol32_8_table:
+ .byte 3, 0, 1, 2, 7, 4, 5, 6
+.Lgf128mul_table:
+ .byte 0, 0x87
+ .fill 14
+.Lgf64mul_table:
+ .byte 0, 0x1b, (0x1b << 1), (0x1b << 1) ^ 0x1b
+ .fill 12
+
+/*
+ * _speck_round_128bytes() - Speck encryption round on 128 bytes at a time
+ *
+ * Do one Speck encryption round on the 128 bytes (8 blocks for Speck128, 16 for
+ * Speck64) stored in X0-X3 and Y0-Y3, using the round key stored in all lanes
+ * of ROUND_KEY. 'n' is the lane size: 64 for Speck128, or 32 for Speck64.
+ *
+ * The 8-bit rotates are implemented using vtbl instead of vshr + vsli because
+ * the vtbl approach is faster on some processors and the same speed on others.
+ */
+.macro _speck_round_128bytes n
+
+ // x = ror(x, 8)
+ vtbl.8 X0_L, {X0_L}, ROTATE_TABLE
+ vtbl.8 X0_H, {X0_H}, ROTATE_TABLE
+ vtbl.8 X1_L, {X1_L}, ROTATE_TABLE
+ vtbl.8 X1_H, {X1_H}, ROTATE_TABLE
+ vtbl.8 X2_L, {X2_L}, ROTATE_TABLE
+ vtbl.8 X2_H, {X2_H}, ROTATE_TABLE
+ vtbl.8 X3_L, {X3_L}, ROTATE_TABLE
+ vtbl.8 X3_H, {X3_H}, ROTATE_TABLE
+
+ // x += y
+ vadd.u\n X0, Y0
+ vadd.u\n X1, Y1
+ vadd.u\n X2, Y2
+ vadd.u\n X3, Y3
+
+ // x ^= k
+ veor X0, ROUND_KEY
+ veor X1, ROUND_KEY
+ veor X2, ROUND_KEY
+ veor X3, ROUND_KEY
+
+ // y = rol(y, 3)
+ vshl.u\n TMP0, Y0, #3
+ vshl.u\n TMP1, Y1, #3
+ vshl.u\n TMP2, Y2, #3
+ vshl.u\n TMP3, Y3, #3
+ vsri.u\n TMP0, Y0, #(\n - 3)
+ vsri.u\n TMP1, Y1, #(\n - 3)
+ vsri.u\n TMP2, Y2, #(\n - 3)
+ vsri.u\n TMP3, Y3, #(\n - 3)
+
+ // y ^= x
+ veor Y0, TMP0, X0
+ veor Y1, TMP1, X1
+ veor Y2, TMP2, X2
+ veor Y3, TMP3, X3
+.endm
+
+/*
+ * _speck_unround_128bytes() - Speck decryption round on 128 bytes at a time
+ *
+ * This is the inverse of _speck_round_128bytes().
+ */
+.macro _speck_unround_128bytes n
+
+ // y ^= x
+ veor TMP0, Y0, X0
+ veor TMP1, Y1, X1
+ veor TMP2, Y2, X2
+ veor TMP3, Y3, X3
+
+ // y = ror(y, 3)
+ vshr.u\n Y0, TMP0, #3
+ vshr.u\n Y1, TMP1, #3
+ vshr.u\n Y2, TMP2, #3
+ vshr.u\n Y3, TMP3, #3
+ vsli.u\n Y0, TMP0, #(\n - 3)
+ vsli.u\n Y1, TMP1, #(\n - 3)
+ vsli.u\n Y2, TMP2, #(\n - 3)
+ vsli.u\n Y3, TMP3, #(\n - 3)
+
+ // x ^= k
+ veor X0, ROUND_KEY
+ veor X1, ROUND_KEY
+ veor X2, ROUND_KEY
+ veor X3, ROUND_KEY
+
+ // x -= y
+ vsub.u\n X0, Y0
+ vsub.u\n X1, Y1
+ vsub.u\n X2, Y2
+ vsub.u\n X3, Y3
+
+ // x = rol(x, 8);
+ vtbl.8 X0_L, {X0_L}, ROTATE_TABLE
+ vtbl.8 X0_H, {X0_H}, ROTATE_TABLE
+ vtbl.8 X1_L, {X1_L}, ROTATE_TABLE
+ vtbl.8 X1_H, {X1_H}, ROTATE_TABLE
+ vtbl.8 X2_L, {X2_L}, ROTATE_TABLE
+ vtbl.8 X2_H, {X2_H}, ROTATE_TABLE
+ vtbl.8 X3_L, {X3_L}, ROTATE_TABLE
+ vtbl.8 X3_H, {X3_H}, ROTATE_TABLE
+.endm
+
+.macro _xts128_precrypt_one, dst_reg, tweak_buf, tmp
+
+ // Load the next source block
+ vld1.8 {\dst_reg}, [SRC]!
+
+ // Save the current tweak in the tweak buffer
+ vst1.8 {TWEAKV}, [\tweak_buf:128]!
+
+ // XOR the next source block with the current tweak
+ veor \dst_reg, TWEAKV
+
+ /*
+ * Calculate the next tweak by multiplying the current one by x,
+ * modulo p(x) = x^128 + x^7 + x^2 + x + 1.
+ */
+ vshr.u64 \tmp, TWEAKV, #63
+ vshl.u64 TWEAKV, #1
+ veor TWEAKV_H, \tmp\()_L
+ vtbl.8 \tmp\()_H, {GF128MUL_TABLE}, \tmp\()_H
+ veor TWEAKV_L, \tmp\()_H
+.endm
+
+.macro _xts64_precrypt_two dst_reg, tweak_buf, tmp
+
+ // Load the next two source blocks
+ vld1.8 {\dst_reg}, [SRC]!
+
+ // Save the current two tweaks in the tweak buffer
+ vst1.8 {TWEAKV}, [\tweak_buf:128]!
+
+ // XOR the next two source blocks with the current two tweaks
+ veor \dst_reg, TWEAKV
+
+ /*
+ * Calculate the next two tweaks by multiplying the current ones by x^2,
+ * modulo p(x) = x^64 + x^4 + x^3 + x + 1.
+ */
+ vshr.u64 \tmp, TWEAKV, #62
+ vshl.u64 TWEAKV, #2
+ vtbl.8 \tmp\()_L, {GF64MUL_TABLE}, \tmp\()_L
+ vtbl.8 \tmp\()_H, {GF64MUL_TABLE}, \tmp\()_H
+ veor TWEAKV, \tmp
+.endm
+
+/*
+ * _speck_xts_crypt() - Speck XTS encryption/decryption
+ *
+ * Encrypt or decrypt NBYTES bytes of data from the SRC buffer to the DST buffer
+ * using Speck-XTS, specifically the variant with a block size of '2n' and round
+ * count given by NROUNDS. The expanded round keys are given in ROUND_KEYS, and
+ * the current XTS tweak value is given in TWEAK. It's assumed that NBYTES is a
+ * nonzero multiple of 128.
+ */
+.macro _speck_xts_crypt n, decrypting
+ push {r4-r7}
+ mov r7, sp
+
+ /*
+ * The first four parameters were passed in registers r0-r3. Load the
+ * additional parameters, which were passed on the stack.
+ */
+ ldr NBYTES, [sp, #16]
+ ldr TWEAK, [sp, #20]
+
+ /*
+ * If decrypting, modify the ROUND_KEYS parameter to point to the last
+ * round key rather than the first, since for decryption the round keys
+ * are used in reverse order.
+ */
+.if \decrypting
+.if \n == 64
+ add ROUND_KEYS, ROUND_KEYS, NROUNDS, lsl #3
+ sub ROUND_KEYS, #8
+.else
+ add ROUND_KEYS, ROUND_KEYS, NROUNDS, lsl #2
+ sub ROUND_KEYS, #4
+.endif
+.endif
+
+ // Load the index vector for vtbl-based 8-bit rotates
+.if \decrypting
+ ldr r12, =.Lrol\n\()_8_table
+.else
+ ldr r12, =.Lror\n\()_8_table
+.endif
+ vld1.8 {ROTATE_TABLE}, [r12:64]
+
+ // One-time XTS preparation
+
+ /*
+ * Allocate stack space to store 128 bytes worth of tweaks. For
+ * performance, this space is aligned to a 16-byte boundary so that we
+ * can use the load/store instructions that declare 16-byte alignment.
+ */
+ sub sp, #128
+ bic sp, #0xf
+
+.if \n == 64
+ // Load first tweak
+ vld1.8 {TWEAKV}, [TWEAK]
+
+ // Load GF(2^128) multiplication table
+ ldr r12, =.Lgf128mul_table
+ vld1.8 {GF128MUL_TABLE}, [r12:64]
+.else
+ // Load first tweak
+ vld1.8 {TWEAKV_L}, [TWEAK]
+
+ // Load GF(2^64) multiplication table
+ ldr r12, =.Lgf64mul_table
+ vld1.8 {GF64MUL_TABLE}, [r12:64]
+
+ // Calculate second tweak, packing it together with the first
+ vshr.u64 TMP0_L, TWEAKV_L, #63
+ vtbl.u8 TMP0_L, {GF64MUL_TABLE}, TMP0_L
+ vshl.u64 TWEAKV_H, TWEAKV_L, #1
+ veor TWEAKV_H, TMP0_L
+.endif
+
+.Lnext_128bytes_\@:
+
+ /*
+ * Load the source blocks into {X,Y}[0-3], XOR them with their XTS tweak
+ * values, and save the tweaks on the stack for later. Then
+ * de-interleave the 'x' and 'y' elements of each block, i.e. make it so
+ * that the X[0-3] registers contain only the first halves of blocks,
+ * and the Y[0-3] registers contain only the second halves of blocks.
+ */
+ mov r12, sp
+.if \n == 64
+ _xts128_precrypt_one X0, r12, TMP0
+ _xts128_precrypt_one Y0, r12, TMP0
+ _xts128_precrypt_one X1, r12, TMP0
+ _xts128_precrypt_one Y1, r12, TMP0
+ _xts128_precrypt_one X2, r12, TMP0
+ _xts128_precrypt_one Y2, r12, TMP0
+ _xts128_precrypt_one X3, r12, TMP0
+ _xts128_precrypt_one Y3, r12, TMP0
+ vswp X0_H, Y0_L
+ vswp X1_H, Y1_L
+ vswp X2_H, Y2_L
+ vswp X3_H, Y3_L
+.else
+ _xts64_precrypt_two X0, r12, TMP0
+ _xts64_precrypt_two Y0, r12, TMP0
+ _xts64_precrypt_two X1, r12, TMP0
+ _xts64_precrypt_two Y1, r12, TMP0
+ _xts64_precrypt_two X2, r12, TMP0
+ _xts64_precrypt_two Y2, r12, TMP0
+ _xts64_precrypt_two X3, r12, TMP0
+ _xts64_precrypt_two Y3, r12, TMP0
+ vuzp.32 X0, Y0
+ vuzp.32 X1, Y1
+ vuzp.32 X2, Y2
+ vuzp.32 X3, Y3
+.endif
+
+ // Do the cipher rounds
+
+ mov r12, ROUND_KEYS
+ mov r6, NROUNDS
+
+.Lnext_round_\@:
+.if \decrypting
+.if \n == 64
+ vld1.64 ROUND_KEY_L, [r12]
+ sub r12, #8
+ vmov ROUND_KEY_H, ROUND_KEY_L
+.else
+ vld1.32 {ROUND_KEY_L[],ROUND_KEY_H[]}, [r12]
+ sub r12, #4
+.endif
+ _speck_unround_128bytes \n
+.else
+.if \n == 64
+ vld1.64 ROUND_KEY_L, [r12]!
+ vmov ROUND_KEY_H, ROUND_KEY_L
+.else
+ vld1.32 {ROUND_KEY_L[],ROUND_KEY_H[]}, [r12]!
+.endif
+ _speck_round_128bytes \n
+.endif
+ subs r6, r6, #1
+ bne .Lnext_round_\@
+
+ // Re-interleave the 'x' and 'y' elements of each block
+.if \n == 64
+ vswp X0_H, Y0_L
+ vswp X1_H, Y1_L
+ vswp X2_H, Y2_L
+ vswp X3_H, Y3_L
+.else
+ vzip.32 X0, Y0
+ vzip.32 X1, Y1
+ vzip.32 X2, Y2
+ vzip.32 X3, Y3
+.endif
+
+ // XOR the encrypted/decrypted blocks with the tweaks we saved earlier
+ mov r12, sp
+ vld1.8 {TMP0, TMP1}, [r12:128]!
+ vld1.8 {TMP2, TMP3}, [r12:128]!
+ veor X0, TMP0
+ veor Y0, TMP1
+ veor X1, TMP2
+ veor Y1, TMP3
+ vld1.8 {TMP0, TMP1}, [r12:128]!
+ vld1.8 {TMP2, TMP3}, [r12:128]!
+ veor X2, TMP0
+ veor Y2, TMP1
+ veor X3, TMP2
+ veor Y3, TMP3
+
+ // Store the ciphertext in the destination buffer
+ vst1.8 {X0, Y0}, [DST]!
+ vst1.8 {X1, Y1}, [DST]!
+ vst1.8 {X2, Y2}, [DST]!
+ vst1.8 {X3, Y3}, [DST]!
+
+ // Continue if there are more 128-byte chunks remaining, else return
+ subs NBYTES, #128
+ bne .Lnext_128bytes_\@
+
+ // Store the next tweak
+.if \n == 64
+ vst1.8 {TWEAKV}, [TWEAK]
+.else
+ vst1.8 {TWEAKV_L}, [TWEAK]
+.endif
+
+ mov sp, r7
+ pop {r4-r7}
+ bx lr
+.endm
+
+ENTRY(speck128_xts_encrypt_neon)
+ _speck_xts_crypt n=64, decrypting=0
+ENDPROC(speck128_xts_encrypt_neon)
+
+ENTRY(speck128_xts_decrypt_neon)
+ _speck_xts_crypt n=64, decrypting=1
+ENDPROC(speck128_xts_decrypt_neon)
+
+ENTRY(speck64_xts_encrypt_neon)
+ _speck_xts_crypt n=32, decrypting=0
+ENDPROC(speck64_xts_encrypt_neon)
+
+ENTRY(speck64_xts_decrypt_neon)
+ _speck_xts_crypt n=32, decrypting=1
+ENDPROC(speck64_xts_decrypt_neon)
diff --git a/arch/arm/crypto/speck-neon-glue.c b/arch/arm/crypto/speck-neon-glue.c
new file mode 100644
index 000000000000..3987dd6e063e
--- /dev/null
+++ b/arch/arm/crypto/speck-neon-glue.c
@@ -0,0 +1,290 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * NEON-accelerated implementation of Speck128-XTS and Speck64-XTS
+ *
+ * Copyright (c) 2018 Google, Inc
+ *
+ * Note: the NIST recommendation for XTS only specifies a 128-bit block size,
+ * but a 64-bit version (needed for Speck64) is fairly straightforward; the math
+ * is just done in GF(2^64) instead of GF(2^128), with the reducing polynomial
+ * x^64 + x^4 + x^3 + x + 1 from the original XEX paper (Rogaway, 2004:
+ * "Efficient Instantiations of Tweakable Blockciphers and Refinements to Modes
+ * OCB and PMAC"), represented as 0x1B.
+ */
+
+#include <asm/hwcap.h>
+#include <asm/neon.h>
+#include <asm/simd.h>
+#include <crypto/algapi.h>
+#include <crypto/gf128mul.h>
+#include <crypto/internal/skcipher.h>
+#include <crypto/speck.h>
+#include <crypto/xts.h>
+#include <linux/kernel.h>
+#include <linux/module.h>
+
+/* The assembly functions only handle multiples of 128 bytes */
+#define SPECK_NEON_CHUNK_SIZE 128
+
+/* Speck128 */
+
+struct speck128_xts_tfm_ctx {
+ struct speck128_tfm_ctx main_key;
+ struct speck128_tfm_ctx tweak_key;
+};
+
+asmlinkage void speck128_xts_encrypt_neon(const u64 *round_keys, int nrounds,
+ void *dst, const void *src,
+ unsigned int nbytes, void *tweak);
+
+asmlinkage void speck128_xts_decrypt_neon(const u64 *round_keys, int nrounds,
+ void *dst, const void *src,
+ unsigned int nbytes, void *tweak);
+
+typedef void (*speck128_crypt_one_t)(const struct speck128_tfm_ctx *,
+ u8 *, const u8 *);
+typedef void (*speck128_xts_crypt_many_t)(const u64 *, int, void *,
+ const void *, unsigned int, void *);
+
+
+static __always_inline int
+__speck128_xts_crypt(struct skcipher_request *req,
+ speck128_crypt_one_t crypt_one,
+ speck128_xts_crypt_many_t crypt_many)
+{
+ struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
+ const struct speck128_xts_tfm_ctx *ctx = crypto_skcipher_ctx(tfm);
+ struct skcipher_walk walk;
+ le128 tweak;
+ int err;
+
+ err = skcipher_walk_virt(&walk, req, true);
+
+ crypto_speck128_encrypt(&ctx->tweak_key, (u8 *)&tweak, walk.iv);
+
+ while (walk.nbytes > 0) {
+ unsigned int nbytes = walk.nbytes;
+ u8 *dst = walk.dst.virt.addr;
+ const u8 *src = walk.src.virt.addr;
+
+ if (nbytes >= SPECK_NEON_CHUNK_SIZE && may_use_simd()) {
+ unsigned int count;
+
+ count = round_down(nbytes, SPECK_NEON_CHUNK_SIZE);
+ kernel_neon_begin();
+ (*crypt_many)(ctx->main_key.round_keys,
+ ctx->main_key.nrounds,
+ dst, src, count, &tweak);
+ kernel_neon_end();
+ dst += count;
+ src += count;
+ nbytes -= count;
+ }
+
+ /* Handle any remainder with generic code */
+ while (nbytes >= sizeof(le128)) {
+ le128_xor((le128 *)dst, (const le128 *)src, &tweak);
+ (*crypt_one)(&ctx->main_key, dst, dst);
+ le128_xor((le128 *)dst, (const le128 *)dst, &tweak);
+ gf128mul_x_ble(&tweak, &tweak);
+
+ dst += sizeof(le128);
+ src += sizeof(le128);
+ nbytes -= sizeof(le128);
+ }
+ err = skcipher_walk_done(&walk, nbytes);
+ }
+
+ return err;
+}
+
+static int speck128_xts_encrypt(struct skcipher_request *req)
+{
+ return __speck128_xts_crypt(req, crypto_speck128_encrypt,
+ speck128_xts_encrypt_neon);
+
+}
+
+static int speck128_xts_decrypt(struct skcipher_request *req)
+{
+ return __speck128_xts_crypt(req, crypto_speck128_decrypt,
+ speck128_xts_decrypt_neon);
+}
+
+static int speck128_xts_setkey(struct crypto_skcipher *tfm, const u8 *key,
+ unsigned int keylen)
+{
+ struct speck128_xts_tfm_ctx *ctx = crypto_skcipher_ctx(tfm);
+ int err;
+
+ err = xts_verify_key(tfm, key, keylen);
+ if (err)
+ return err;
+
+ keylen /= 2;
+
+ err = crypto_speck128_setkey(&ctx->main_key, key, keylen);
+ if (err)
+ return err;
+
+ return crypto_speck128_setkey(&ctx->tweak_key, key + keylen, keylen);
+}
+
+/* Speck64 */
+
+struct speck64_xts_tfm_ctx {
+ struct speck64_tfm_ctx main_key;
+ struct speck64_tfm_ctx tweak_key;
+};
+
+asmlinkage void speck64_xts_encrypt_neon(const u32 *round_keys, int nrounds,
+ void *dst, const void *src,
+ unsigned int nbytes, void *tweak);
+
+asmlinkage void speck64_xts_decrypt_neon(const u32 *round_keys, int nrounds,
+ void *dst, const void *src,
+ unsigned int nbytes, void *tweak);
+
+typedef void (*speck64_crypt_one_t)(const struct speck64_tfm_ctx *,
+ u8 *, const u8 *);
+typedef void (*speck64_xts_crypt_many_t)(const u32 *, int, void *,
+ const void *, unsigned int, void *);
+
+static __always_inline int
+__speck64_xts_crypt(struct skcipher_request *req, speck64_crypt_one_t crypt_one,
+ speck64_xts_crypt_many_t crypt_many)
+{
+ struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
+ const struct speck64_xts_tfm_ctx *ctx = crypto_skcipher_ctx(tfm);
+ struct skcipher_walk walk;
+ u64 tweak;
+ int err;
+
+ err = skcipher_walk_virt(&walk, req, true);
+
+ crypto_speck64_encrypt(&ctx->tweak_key, (u8 *)&tweak, walk.iv);
+
+ while (walk.nbytes > 0) {
+ unsigned int nbytes = walk.nbytes;
+ u8 *dst = walk.dst.virt.addr;
+ const u8 *src = walk.src.virt.addr;
+
+ if (nbytes >= SPECK_NEON_CHUNK_SIZE && may_use_simd()) {
+ unsigned int count;
+
+ count = round_down(nbytes, SPECK_NEON_CHUNK_SIZE);
+ kernel_neon_begin();
+ (*crypt_many)(ctx->main_key.round_keys,
+ ctx->main_key.nrounds,
+ dst, src, count, &tweak);
+ kernel_neon_end();
+ dst += count;
+ src += count;
+ nbytes -= count;
+ }
+
+ /* Handle any remainder with generic code */
+ while (nbytes >= sizeof(u64)) {
+ *(u64 *)dst = *(u64 *)src ^ tweak;
+ (*crypt_one)(&ctx->main_key, dst, dst);
+ *(u64 *)dst ^= tweak;
+ tweak = (tweak << 1) ^
+ ((tweak & (1ULL << 63)) ? 0x1B : 0);
+
+ dst += sizeof(u64);
+ src += sizeof(u64);
+ nbytes -= sizeof(u64);
+ }
+ err = skcipher_walk_done(&walk, nbytes);
+ }
+
+ return err;
+}
+
+static int speck64_xts_encrypt(struct skcipher_request *req)
+{
+ return __speck64_xts_crypt(req, crypto_speck64_encrypt,
+ speck64_xts_encrypt_neon);
+}
+
+static int speck64_xts_decrypt(struct skcipher_request *req)
+{
+ return __speck64_xts_crypt(req, crypto_speck64_decrypt,
+ speck64_xts_decrypt_neon);
+}
+
+static int speck64_xts_setkey(struct crypto_skcipher *tfm, const u8 *key,
+ unsigned int keylen)
+{
+ struct speck64_xts_tfm_ctx *ctx = crypto_skcipher_ctx(tfm);
+ int err;
+
+ err = xts_verify_key(tfm, key, keylen);
+ if (err)
+ return err;
+
+ keylen /= 2;
+
+ err = crypto_speck64_setkey(&ctx->main_key, key, keylen);
+ if (err)
+ return err;
+
+ return crypto_speck64_setkey(&ctx->tweak_key, key + keylen, keylen);
+}
+
+static struct skcipher_alg speck_algs[] = {
+ {
+ .base.cra_name = "xts(speck128)",
+ .base.cra_driver_name = "xts-speck128-neon",
+ .base.cra_priority = 300,
+ .base.cra_blocksize = SPECK128_BLOCK_SIZE,
+ .base.cra_ctxsize = sizeof(struct speck128_xts_tfm_ctx),
+ .base.cra_alignmask = 7,
+ .base.cra_module = THIS_MODULE,
+ .min_keysize = 2 * SPECK128_128_KEY_SIZE,
+ .max_keysize = 2 * SPECK128_256_KEY_SIZE,
+ .ivsize = SPECK128_BLOCK_SIZE,
+ .walksize = SPECK_NEON_CHUNK_SIZE,
+ .setkey = speck128_xts_setkey,
+ .encrypt = speck128_xts_encrypt,
+ .decrypt = speck128_xts_decrypt,
+ }, {
+ .base.cra_name = "xts(speck64)",
+ .base.cra_driver_name = "xts-speck64-neon",
+ .base.cra_priority = 300,
+ .base.cra_blocksize = SPECK64_BLOCK_SIZE,
+ .base.cra_ctxsize = sizeof(struct speck64_xts_tfm_ctx),
+ .base.cra_alignmask = 7,
+ .base.cra_module = THIS_MODULE,
+ .min_keysize = 2 * SPECK64_96_KEY_SIZE,
+ .max_keysize = 2 * SPECK64_128_KEY_SIZE,
+ .ivsize = SPECK64_BLOCK_SIZE,
+ .walksize = SPECK_NEON_CHUNK_SIZE,
+ .setkey = speck64_xts_setkey,
+ .encrypt = speck64_xts_encrypt,
+ .decrypt = speck64_xts_decrypt,
+ }
+};
+
+static int __init speck_neon_module_init(void)
+{
+ if (!(elf_hwcap & HWCAP_NEON))
+ return -ENODEV;
+ return crypto_register_skciphers(speck_algs, ARRAY_SIZE(speck_algs));
+}
+
+static void __exit speck_neon_module_exit(void)
+{
+ crypto_unregister_skciphers(speck_algs, ARRAY_SIZE(speck_algs));
+}
+
+module_init(speck_neon_module_init);
+module_exit(speck_neon_module_exit);
+
+MODULE_DESCRIPTION("Speck block cipher (NEON-accelerated)");
+MODULE_LICENSE("GPL");
+MODULE_AUTHOR("Eric Biggers <[email protected]>");
+MODULE_ALIAS_CRYPTO("xts(speck128)");
+MODULE_ALIAS_CRYPTO("xts-speck128-neon");
+MODULE_ALIAS_CRYPTO("xts(speck64)");
+MODULE_ALIAS_CRYPTO("xts-speck64-neon");
--
2.16.0.rc1.238.g530d649a79-goog

2018-02-08 00:11:11

by Eric Biggers

[permalink] [raw]
Subject: [PATCH 4/5] crypto: speck - add test vectors for Speck128-XTS

Add test vectors for Speck128-XTS, generated in userspace using C code.
The inputs were borrowed from the AES-XTS test vectors.

Both xts(speck128-generic) and xts-speck128-neon pass these tests.

Signed-off-by: Eric Biggers <[email protected]>
---
crypto/testmgr.c | 9 +
crypto/testmgr.h | 687 +++++++++++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 696 insertions(+)

diff --git a/crypto/testmgr.c b/crypto/testmgr.c
index d5be42149e29..6583c11f0f0b 100644
--- a/crypto/testmgr.c
+++ b/crypto/testmgr.c
@@ -3575,6 +3575,15 @@ static const struct alg_test_desc alg_test_descs[] = {
.dec = __VECS(serpent_xts_dec_tv_template)
}
}
+ }, {
+ .alg = "xts(speck128)",
+ .test = alg_test_skcipher,
+ .suite = {
+ .cipher = {
+ .enc = __VECS(speck128_xts_enc_tv_template),
+ .dec = __VECS(speck128_xts_dec_tv_template),
+ }
+ }
}, {
.alg = "xts(twofish)",
.test = alg_test_skcipher,
diff --git a/crypto/testmgr.h b/crypto/testmgr.h
index 255de47f1d20..4c0fbf5cec75 100644
--- a/crypto/testmgr.h
+++ b/crypto/testmgr.h
@@ -14403,6 +14403,693 @@ static const struct cipher_testvec speck128_dec_tv_template[] = {
},
};

+/*
+ * Speck128-XTS test vectors, taken from the AES-XTS test vectors with the
+ * result recomputed with Speck128 as the cipher
+ */
+
+static const struct cipher_testvec speck128_xts_enc_tv_template[] = {
+ {
+ .key = "\x00\x00\x00\x00\x00\x00\x00\x00"
+ "\x00\x00\x00\x00\x00\x00\x00\x00"
+ "\x00\x00\x00\x00\x00\x00\x00\x00"
+ "\x00\x00\x00\x00\x00\x00\x00\x00",
+ .klen = 32,
+ .iv = "\x00\x00\x00\x00\x00\x00\x00\x00"
+ "\x00\x00\x00\x00\x00\x00\x00\x00",
+ .input = "\x00\x00\x00\x00\x00\x00\x00\x00"
+ "\x00\x00\x00\x00\x00\x00\x00\x00"
+ "\x00\x00\x00\x00\x00\x00\x00\x00"
+ "\x00\x00\x00\x00\x00\x00\x00\x00",
+ .ilen = 32,
+ .result = "\x3b\x99\x4a\x64\x74\x77\xac\xed"
+ "\xbe\xa0\xe7\x03\xd7\xfe\xab\x62"
+ "\x51\xd9\xb6\x1d\xe0\x5e\xbc\x54"
+ "\xd8\xf4\xa6\xcf\xae\xb9\x07\x42",
+ .rlen = 32,
+ }, {
+ .key = "\x11\x11\x11\x11\x11\x11\x11\x11"
+ "\x11\x11\x11\x11\x11\x11\x11\x11"
+ "\x22\x22\x22\x22\x22\x22\x22\x22"
+ "\x22\x22\x22\x22\x22\x22\x22\x22",
+ .klen = 32,
+ .iv = "\x33\x33\x33\x33\x33\x00\x00\x00"
+ "\x00\x00\x00\x00\x00\x00\x00\x00",
+ .input = "\x44\x44\x44\x44\x44\x44\x44\x44"
+ "\x44\x44\x44\x44\x44\x44\x44\x44"
+ "\x44\x44\x44\x44\x44\x44\x44\x44"
+ "\x44\x44\x44\x44\x44\x44\x44\x44",
+ .ilen = 32,
+ .result = "\x31\x9c\x54\x41\x95\xdd\x0b\x6d"
+ "\xa4\x68\xf7\x31\x49\x83\xd3\x9b"
+ "\x68\x83\x4e\x5f\x80\x78\x7f\x6f"
+ "\xd6\x93\xce\x6e\x0a\x9e\xfc\x85",
+ .rlen = 32,
+ }, {
+ .key = "\xff\xfe\xfd\xfc\xfb\xfa\xf9\xf8"
+ "\xf7\xf6\xf5\xf4\xf3\xf2\xf1\xf0"
+ "\x22\x22\x22\x22\x22\x22\x22\x22"
+ "\x22\x22\x22\x22\x22\x22\x22\x22",
+ .klen = 32,
+ .iv = "\x33\x33\x33\x33\x33\x00\x00\x00"
+ "\x00\x00\x00\x00\x00\x00\x00\x00",
+ .input = "\x44\x44\x44\x44\x44\x44\x44\x44"
+ "\x44\x44\x44\x44\x44\x44\x44\x44"
+ "\x44\x44\x44\x44\x44\x44\x44\x44"
+ "\x44\x44\x44\x44\x44\x44\x44\x44",
+ .ilen = 32,
+ .result = "\x96\xbc\x98\xee\x4b\x3d\xb7\x55"
+ "\xfb\xe2\x0f\x52\x9e\x35\x5e\x3c"
+ "\xab\xd2\xaf\xe2\x0c\xc6\x46\x77"
+ "\x4f\x32\x6a\xa5\x10\x75\xc9\x12",
+ .rlen = 32,
+ }, {
+ .key = "\x27\x18\x28\x18\x28\x45\x90\x45"
+ "\x23\x53\x60\x28\x74\x71\x35\x26"
+ "\x31\x41\x59\x26\x53\x58\x97\x93"
+ "\x23\x84\x62\x64\x33\x83\x27\x95",
+ .klen = 32,
+ .iv = "\x00\x00\x00\x00\x00\x00\x00\x00"
+ "\x00\x00\x00\x00\x00\x00\x00\x00",
+ .input = "\x00\x01\x02\x03\x04\x05\x06\x07"
+ "\x08\x09\x0a\x0b\x0c\x0d\x0e\x0f"
+ "\x10\x11\x12\x13\x14\x15\x16\x17"
+ "\x18\x19\x1a\x1b\x1c\x1d\x1e\x1f"
+ "\x20\x21\x22\x23\x24\x25\x26\x27"
+ "\x28\x29\x2a\x2b\x2c\x2d\x2e\x2f"
+ "\x30\x31\x32\x33\x34\x35\x36\x37"
+ "\x38\x39\x3a\x3b\x3c\x3d\x3e\x3f"
+ "\x40\x41\x42\x43\x44\x45\x46\x47"
+ "\x48\x49\x4a\x4b\x4c\x4d\x4e\x4f"
+ "\x50\x51\x52\x53\x54\x55\x56\x57"
+ "\x58\x59\x5a\x5b\x5c\x5d\x5e\x5f"
+ "\x60\x61\x62\x63\x64\x65\x66\x67"
+ "\x68\x69\x6a\x6b\x6c\x6d\x6e\x6f"
+ "\x70\x71\x72\x73\x74\x75\x76\x77"
+ "\x78\x79\x7a\x7b\x7c\x7d\x7e\x7f"
+ "\x80\x81\x82\x83\x84\x85\x86\x87"
+ "\x88\x89\x8a\x8b\x8c\x8d\x8e\x8f"
+ "\x90\x91\x92\x93\x94\x95\x96\x97"
+ "\x98\x99\x9a\x9b\x9c\x9d\x9e\x9f"
+ "\xa0\xa1\xa2\xa3\xa4\xa5\xa6\xa7"
+ "\xa8\xa9\xaa\xab\xac\xad\xae\xaf"
+ "\xb0\xb1\xb2\xb3\xb4\xb5\xb6\xb7"
+ "\xb8\xb9\xba\xbb\xbc\xbd\xbe\xbf"
+ "\xc0\xc1\xc2\xc3\xc4\xc5\xc6\xc7"
+ "\xc8\xc9\xca\xcb\xcc\xcd\xce\xcf"
+ "\xd0\xd1\xd2\xd3\xd4\xd5\xd6\xd7"
+ "\xd8\xd9\xda\xdb\xdc\xdd\xde\xdf"
+ "\xe0\xe1\xe2\xe3\xe4\xe5\xe6\xe7"
+ "\xe8\xe9\xea\xeb\xec\xed\xee\xef"
+ "\xf0\xf1\xf2\xf3\xf4\xf5\xf6\xf7"
+ "\xf8\xf9\xfa\xfb\xfc\xfd\xfe\xff"
+ "\x00\x01\x02\x03\x04\x05\x06\x07"
+ "\x08\x09\x0a\x0b\x0c\x0d\x0e\x0f"
+ "\x10\x11\x12\x13\x14\x15\x16\x17"
+ "\x18\x19\x1a\x1b\x1c\x1d\x1e\x1f"
+ "\x20\x21\x22\x23\x24\x25\x26\x27"
+ "\x28\x29\x2a\x2b\x2c\x2d\x2e\x2f"
+ "\x30\x31\x32\x33\x34\x35\x36\x37"
+ "\x38\x39\x3a\x3b\x3c\x3d\x3e\x3f"
+ "\x40\x41\x42\x43\x44\x45\x46\x47"
+ "\x48\x49\x4a\x4b\x4c\x4d\x4e\x4f"
+ "\x50\x51\x52\x53\x54\x55\x56\x57"
+ "\x58\x59\x5a\x5b\x5c\x5d\x5e\x5f"
+ "\x60\x61\x62\x63\x64\x65\x66\x67"
+ "\x68\x69\x6a\x6b\x6c\x6d\x6e\x6f"
+ "\x70\x71\x72\x73\x74\x75\x76\x77"
+ "\x78\x79\x7a\x7b\x7c\x7d\x7e\x7f"
+ "\x80\x81\x82\x83\x84\x85\x86\x87"
+ "\x88\x89\x8a\x8b\x8c\x8d\x8e\x8f"
+ "\x90\x91\x92\x93\x94\x95\x96\x97"
+ "\x98\x99\x9a\x9b\x9c\x9d\x9e\x9f"
+ "\xa0\xa1\xa2\xa3\xa4\xa5\xa6\xa7"
+ "\xa8\xa9\xaa\xab\xac\xad\xae\xaf"
+ "\xb0\xb1\xb2\xb3\xb4\xb5\xb6\xb7"
+ "\xb8\xb9\xba\xbb\xbc\xbd\xbe\xbf"
+ "\xc0\xc1\xc2\xc3\xc4\xc5\xc6\xc7"
+ "\xc8\xc9\xca\xcb\xcc\xcd\xce\xcf"
+ "\xd0\xd1\xd2\xd3\xd4\xd5\xd6\xd7"
+ "\xd8\xd9\xda\xdb\xdc\xdd\xde\xdf"
+ "\xe0\xe1\xe2\xe3\xe4\xe5\xe6\xe7"
+ "\xe8\xe9\xea\xeb\xec\xed\xee\xef"
+ "\xf0\xf1\xf2\xf3\xf4\xf5\xf6\xf7"
+ "\xf8\xf9\xfa\xfb\xfc\xfd\xfe\xff",
+ .ilen = 512,
+ .result = "\x53\x43\x2f\x73\x46\x14\xe8\xa0"
+ "\xd4\xf2\x41\x65\x6d\x95\xb9\x1d"
+ "\x12\xb5\x92\xd0\x5e\x8b\x85\x6d"
+ "\x88\xba\xb8\x89\x0d\x2d\x81\x2b"
+ "\x1a\x21\x28\x58\x3d\xcc\xb1\x5e"
+ "\x53\x1c\xa1\x12\xe1\x03\xb7\x34"
+ "\x37\xb8\x60\x28\x90\xcb\x16\x43"
+ "\x9d\x47\x8a\xc4\x4f\x47\x74\xa4"
+ "\x84\x9e\xb3\x3e\x5d\x4d\x95\x9b"
+ "\x43\xcb\xf7\x03\xa9\xe9\x9b\xf4"
+ "\xc5\x13\x3a\xa9\x2b\x18\xd8\xb6"
+ "\x58\xbd\x2a\xb9\x79\x4a\x8e\x99"
+ "\x9b\x57\x1a\x2e\xf8\x3e\xd2\x8b"
+ "\x52\x31\x1c\x45\x73\xb6\x12\x8f"
+ "\xda\xf0\xa1\x8d\x9f\xf4\xaa\x7b"
+ "\xc1\x6b\xbc\xe3\x4b\xda\xd8\x91"
+ "\xe3\x48\x73\x06\x08\xf6\x01\xf1"
+ "\x41\xba\x46\x08\xbf\x22\x3d\x74"
+ "\x44\x12\xbc\x43\x34\x66\x94\x68"
+ "\xb2\xb7\x56\x26\xdc\x7f\x1e\xd2"
+ "\xb2\xfc\x52\x50\xbf\x4c\x02\xf8"
+ "\xb5\xca\x2f\xe9\x00\xab\x28\x6c"
+ "\xbe\x9e\x27\x10\x2f\xea\x76\x70"
+ "\xfa\x18\x6d\x56\x35\x37\xc5\xc1"
+ "\x20\xef\xf4\xcb\x85\x05\x0f\xe2"
+ "\xe9\x0a\xff\x1e\xad\x0f\xb2\x99"
+ "\xc6\x4c\x8d\xd9\x97\xc6\x9e\x7c"
+ "\xa0\x19\x29\x1e\xe7\x23\xda\x41"
+ "\xe7\x18\x00\x24\xec\x02\xef\x96"
+ "\xec\x51\xcc\xe9\x7c\xbc\x00\xa9"
+ "\x99\x53\x78\x1d\xe4\xac\x9e\x42"
+ "\xf2\x2f\xc4\x75\x17\x42\xa2\x7b"
+ "\x0e\xc9\xf0\xd7\x79\xd9\x89\x33"
+ "\x50\x4c\xaa\x5c\xc3\x65\xd0\x0b"
+ "\xba\xb9\x5f\x94\x91\x07\xa3\xd2"
+ "\x8c\xe3\xfc\xea\x9a\xad\xa4\xc2"
+ "\x5a\xd9\x34\x99\x67\xb4\xbe\x75"
+ "\x30\x55\x9f\x6a\x32\x1d\xb9\x00"
+ "\xd3\x31\xac\xe9\x64\x62\x4c\x53"
+ "\xd0\xcc\x4a\x06\xe5\x14\x76\x9e"
+ "\xa4\x6c\x27\x2f\x93\x74\xca\x33"
+ "\x5e\xf2\xfa\x1b\x6e\xc5\x06\x85"
+ "\x1b\x39\xac\x53\xb2\xdd\x2e\xe7"
+ "\x76\x3a\x92\xcf\x17\x0a\x22\xb6"
+ "\x2f\x99\x28\xff\x93\xca\xd0\x66"
+ "\x9b\xe9\x0c\xe2\x93\xd3\x59\x1e"
+ "\x62\x9e\x54\x4d\xeb\x67\x70\x2c"
+ "\x78\x44\x5c\x0a\xcc\x9b\xf9\xb5"
+ "\x33\xe6\x84\x99\xcd\x32\xa6\x0c"
+ "\x54\x38\x11\x55\xda\x05\x0b\xc2"
+ "\xdc\x0a\x9b\x1e\xcb\xa1\x4e\x8a"
+ "\x66\x38\x8f\x68\xc5\x21\x0f\x5e"
+ "\x9e\x60\x3c\x46\x5e\xbe\x1a\xc1"
+ "\x61\x45\xc0\x23\x69\x90\xab\xb9"
+ "\x82\x43\xd6\x36\x8d\x7b\x88\x63"
+ "\x39\x70\x09\x5b\x12\x63\xc3\x7a"
+ "\xb1\x3e\xbf\x14\xf9\x8b\x1d\xb0"
+ "\x39\xb7\x3a\xbb\x93\x59\xf4\x8b"
+ "\x06\xbe\x89\xfb\x7b\xb8\xba\xf3"
+ "\x9a\x40\xca\x17\x7c\x5d\x4f\xef"
+ "\x7d\x8e\x0a\x93\x93\x87\xdf\xec"
+ "\xdc\xa6\x42\x7b\x92\x7b\xdd\x9c"
+ "\xed\x6c\x16\xb2\xce\x12\x70\x2d"
+ "\x41\xee\x69\x24\xc2\x95\xa0\x18",
+ .rlen = 512,
+ }, {
+ .key = "\x27\x18\x28\x18\x28\x45\x90\x45"
+ "\x23\x53\x60\x28\x74\x71\x35\x26"
+ "\x62\x49\x77\x57\x24\x70\x93\x69"
+ "\x99\x59\x57\x49\x66\x96\x76\x27"
+ "\x31\x41\x59\x26\x53\x58\x97\x93"
+ "\x23\x84\x62\x64\x33\x83\x27\x95"
+ "\x02\x88\x41\x97\x16\x93\x99\x37"
+ "\x51\x05\x82\x09\x74\x94\x45\x92",
+ .klen = 64,
+ .iv = "\xff\x00\x00\x00\x00\x00\x00\x00"
+ "\x00\x00\x00\x00\x00\x00\x00\x00",
+ .input = "\x00\x01\x02\x03\x04\x05\x06\x07"
+ "\x08\x09\x0a\x0b\x0c\x0d\x0e\x0f"
+ "\x10\x11\x12\x13\x14\x15\x16\x17"
+ "\x18\x19\x1a\x1b\x1c\x1d\x1e\x1f"
+ "\x20\x21\x22\x23\x24\x25\x26\x27"
+ "\x28\x29\x2a\x2b\x2c\x2d\x2e\x2f"
+ "\x30\x31\x32\x33\x34\x35\x36\x37"
+ "\x38\x39\x3a\x3b\x3c\x3d\x3e\x3f"
+ "\x40\x41\x42\x43\x44\x45\x46\x47"
+ "\x48\x49\x4a\x4b\x4c\x4d\x4e\x4f"
+ "\x50\x51\x52\x53\x54\x55\x56\x57"
+ "\x58\x59\x5a\x5b\x5c\x5d\x5e\x5f"
+ "\x60\x61\x62\x63\x64\x65\x66\x67"
+ "\x68\x69\x6a\x6b\x6c\x6d\x6e\x6f"
+ "\x70\x71\x72\x73\x74\x75\x76\x77"
+ "\x78\x79\x7a\x7b\x7c\x7d\x7e\x7f"
+ "\x80\x81\x82\x83\x84\x85\x86\x87"
+ "\x88\x89\x8a\x8b\x8c\x8d\x8e\x8f"
+ "\x90\x91\x92\x93\x94\x95\x96\x97"
+ "\x98\x99\x9a\x9b\x9c\x9d\x9e\x9f"
+ "\xa0\xa1\xa2\xa3\xa4\xa5\xa6\xa7"
+ "\xa8\xa9\xaa\xab\xac\xad\xae\xaf"
+ "\xb0\xb1\xb2\xb3\xb4\xb5\xb6\xb7"
+ "\xb8\xb9\xba\xbb\xbc\xbd\xbe\xbf"
+ "\xc0\xc1\xc2\xc3\xc4\xc5\xc6\xc7"
+ "\xc8\xc9\xca\xcb\xcc\xcd\xce\xcf"
+ "\xd0\xd1\xd2\xd3\xd4\xd5\xd6\xd7"
+ "\xd8\xd9\xda\xdb\xdc\xdd\xde\xdf"
+ "\xe0\xe1\xe2\xe3\xe4\xe5\xe6\xe7"
+ "\xe8\xe9\xea\xeb\xec\xed\xee\xef"
+ "\xf0\xf1\xf2\xf3\xf4\xf5\xf6\xf7"
+ "\xf8\xf9\xfa\xfb\xfc\xfd\xfe\xff"
+ "\x00\x01\x02\x03\x04\x05\x06\x07"
+ "\x08\x09\x0a\x0b\x0c\x0d\x0e\x0f"
+ "\x10\x11\x12\x13\x14\x15\x16\x17"
+ "\x18\x19\x1a\x1b\x1c\x1d\x1e\x1f"
+ "\x20\x21\x22\x23\x24\x25\x26\x27"
+ "\x28\x29\x2a\x2b\x2c\x2d\x2e\x2f"
+ "\x30\x31\x32\x33\x34\x35\x36\x37"
+ "\x38\x39\x3a\x3b\x3c\x3d\x3e\x3f"
+ "\x40\x41\x42\x43\x44\x45\x46\x47"
+ "\x48\x49\x4a\x4b\x4c\x4d\x4e\x4f"
+ "\x50\x51\x52\x53\x54\x55\x56\x57"
+ "\x58\x59\x5a\x5b\x5c\x5d\x5e\x5f"
+ "\x60\x61\x62\x63\x64\x65\x66\x67"
+ "\x68\x69\x6a\x6b\x6c\x6d\x6e\x6f"
+ "\x70\x71\x72\x73\x74\x75\x76\x77"
+ "\x78\x79\x7a\x7b\x7c\x7d\x7e\x7f"
+ "\x80\x81\x82\x83\x84\x85\x86\x87"
+ "\x88\x89\x8a\x8b\x8c\x8d\x8e\x8f"
+ "\x90\x91\x92\x93\x94\x95\x96\x97"
+ "\x98\x99\x9a\x9b\x9c\x9d\x9e\x9f"
+ "\xa0\xa1\xa2\xa3\xa4\xa5\xa6\xa7"
+ "\xa8\xa9\xaa\xab\xac\xad\xae\xaf"
+ "\xb0\xb1\xb2\xb3\xb4\xb5\xb6\xb7"
+ "\xb8\xb9\xba\xbb\xbc\xbd\xbe\xbf"
+ "\xc0\xc1\xc2\xc3\xc4\xc5\xc6\xc7"
+ "\xc8\xc9\xca\xcb\xcc\xcd\xce\xcf"
+ "\xd0\xd1\xd2\xd3\xd4\xd5\xd6\xd7"
+ "\xd8\xd9\xda\xdb\xdc\xdd\xde\xdf"
+ "\xe0\xe1\xe2\xe3\xe4\xe5\xe6\xe7"
+ "\xe8\xe9\xea\xeb\xec\xed\xee\xef"
+ "\xf0\xf1\xf2\xf3\xf4\xf5\xf6\xf7"
+ "\xf8\xf9\xfa\xfb\xfc\xfd\xfe\xff",
+ .ilen = 512,
+ .result = "\x15\xfb\x91\x93\xc1\x29\x04\xaf"
+ "\x1b\xaf\x23\x4e\x5b\xfd\x81\x86"
+ "\x76\xef\x3a\xef\xd7\x77\x36\x44"
+ "\xda\xcb\xbb\x2e\xa9\x59\x56\xba"
+ "\x31\x0c\x76\x3c\x2a\xe7\x6b\xba"
+ "\xb4\xbd\x03\xe0\xd9\xe2\x3f\xea"
+ "\xa0\x10\xf2\xa6\x0f\x1b\xf6\x75"
+ "\x45\x90\x58\xbb\xce\xd6\xdf\x82"
+ "\x96\x85\x64\x59\x00\x45\x19\x88"
+ "\x81\xa1\x83\xa5\x05\x2b\x22\x91"
+ "\xeb\x25\x24\xca\x9e\x15\x79\xff"
+ "\x66\xb5\xa2\xef\x22\x4a\xbd\xd5"
+ "\x80\x0d\x1e\x04\xd3\xb6\xd6\x79"
+ "\x17\x31\xfb\xe7\x48\xc7\x57\x56"
+ "\x03\x8f\x32\x5f\x8a\x08\x12\xbd"
+ "\xd5\xcb\xba\x48\x2c\x49\xa5\xcc"
+ "\xbb\xb6\x13\x1a\x43\x63\x57\x7c"
+ "\xa5\xfc\xc1\x4c\x87\xfe\xab\x4a"
+ "\x5e\x73\x42\x7c\x08\x04\x57\xf5"
+ "\x56\xe0\x4f\xe6\x1d\xe2\xcc\x51"
+ "\xdd\xb0\xc0\x3b\x23\xf3\x9e\x67"
+ "\x27\x7d\x4b\x18\x4f\xdd\x71\xe7"
+ "\x86\x6b\x71\x4f\xb1\xc4\x24\x44"
+ "\x3e\x8b\x7e\xac\xdf\x01\x7d\xf7"
+ "\x35\x01\xaf\x79\xfa\xeb\xd6\xae"
+ "\x50\x47\xe9\x7f\x3d\xdf\x3f\x46"
+ "\x99\x0d\xd0\x50\x1d\x5e\x4b\x6b"
+ "\x2c\xd8\xe1\x59\xb2\xb9\x53\x9c"
+ "\xa7\xc7\xba\xa9\xcf\xd6\xd5\xbe"
+ "\xd9\x70\xd5\x88\x4f\x05\x89\x76"
+ "\x5f\x95\x21\x12\x3f\x34\xe8\x4b"
+ "\x5a\xba\xcf\xd4\x7d\x61\x1c\x59"
+ "\xd9\xd2\xe8\x52\xb3\xa1\x64\x14"
+ "\x0b\x88\xf3\x96\x0d\xc0\x6a\xca"
+ "\xd0\xe4\x43\xe5\x1b\x1d\x35\xef"
+ "\xac\x4c\x09\x09\xf4\x2a\x4b\x85"
+ "\xd6\xf4\xb0\x4e\xbe\x49\x69\xf8"
+ "\x9b\xc9\xc4\x7f\xac\x9f\x11\x62"
+ "\xed\x6c\x93\x2c\x17\xc7\x13\xd0"
+ "\x8c\xfd\xdd\x89\x73\xcf\x27\x4f"
+ "\x19\x33\xf8\x72\xb7\x45\x91\x60"
+ "\xbe\x30\x7d\x06\x43\x70\x98\xcb"
+ "\x62\x12\x1d\x5a\x3d\xf2\x75\xe2"
+ "\x3f\x3c\x26\xb6\xea\x6d\xf8\xe9"
+ "\x1e\xad\x69\xae\x50\x50\xf6\x71"
+ "\xda\x33\xc0\x21\x30\x42\xf9\xcd"
+ "\x59\xb3\x5f\xde\xd5\xaa\x63\xe4"
+ "\xee\x53\xc4\xce\xcf\xc3\x23\xcd"
+ "\xd7\xf4\x40\x64\x35\x7e\x3a\x84"
+ "\x76\xca\x52\x3c\x27\xdd\xd3\xa4"
+ "\x1e\x9f\x80\x46\x37\xf6\x31\x29"
+ "\x6d\xe9\x51\x87\xe6\x7c\xf6\xda"
+ "\x98\x87\x61\x4c\x26\xc0\xdf\xd8"
+ "\xe1\xf4\x42\x6e\xbb\x30\xe1\xe8"
+ "\x48\x9d\x86\x0f\xea\x8f\x35\x74"
+ "\xaa\x49\x99\x81\xd4\x4b\xe8\x77"
+ "\x09\x27\xd2\x47\xae\xe7\x8e\x5e"
+ "\xbe\x57\x53\xf7\x61\x15\x89\xf9"
+ "\xe9\xcc\x51\xbc\x3e\x1a\x4c\x1b"
+ "\x4d\xbb\x62\xba\xb3\x1a\xd0\xdf"
+ "\x1c\xaa\x5c\x9f\x5b\x0f\x7b\xda"
+ "\x69\x67\x2f\x23\xf0\x91\xd8\x09"
+ "\x3f\xce\x51\x68\x6a\x20\x7c\x79"
+ "\x59\x9d\x0d\x9f\xac\xac\x98\xc0",
+ .rlen = 512,
+ .also_non_np = 1,
+ .np = 3,
+ .tap = { 512 - 20, 4, 16 },
+ }
+};
+
+static const struct cipher_testvec speck128_xts_dec_tv_template[] = {
+ {
+ .key = "\x00\x00\x00\x00\x00\x00\x00\x00"
+ "\x00\x00\x00\x00\x00\x00\x00\x00"
+ "\x00\x00\x00\x00\x00\x00\x00\x00"
+ "\x00\x00\x00\x00\x00\x00\x00\x00",
+ .klen = 32,
+ .iv = "\x00\x00\x00\x00\x00\x00\x00\x00"
+ "\x00\x00\x00\x00\x00\x00\x00\x00",
+ .input = "\x3b\x99\x4a\x64\x74\x77\xac\xed"
+ "\xbe\xa0\xe7\x03\xd7\xfe\xab\x62"
+ "\x51\xd9\xb6\x1d\xe0\x5e\xbc\x54"
+ "\xd8\xf4\xa6\xcf\xae\xb9\x07\x42",
+ .ilen = 32,
+ .result = "\x00\x00\x00\x00\x00\x00\x00\x00"
+ "\x00\x00\x00\x00\x00\x00\x00\x00"
+ "\x00\x00\x00\x00\x00\x00\x00\x00"
+ "\x00\x00\x00\x00\x00\x00\x00\x00",
+ .rlen = 32,
+ }, {
+ .key = "\x11\x11\x11\x11\x11\x11\x11\x11"
+ "\x11\x11\x11\x11\x11\x11\x11\x11"
+ "\x22\x22\x22\x22\x22\x22\x22\x22"
+ "\x22\x22\x22\x22\x22\x22\x22\x22",
+ .klen = 32,
+ .iv = "\x33\x33\x33\x33\x33\x00\x00\x00"
+ "\x00\x00\x00\x00\x00\x00\x00\x00",
+ .input = "\x31\x9c\x54\x41\x95\xdd\x0b\x6d"
+ "\xa4\x68\xf7\x31\x49\x83\xd3\x9b"
+ "\x68\x83\x4e\x5f\x80\x78\x7f\x6f"
+ "\xd6\x93\xce\x6e\x0a\x9e\xfc\x85",
+ .ilen = 32,
+ .result = "\x44\x44\x44\x44\x44\x44\x44\x44"
+ "\x44\x44\x44\x44\x44\x44\x44\x44"
+ "\x44\x44\x44\x44\x44\x44\x44\x44"
+ "\x44\x44\x44\x44\x44\x44\x44\x44",
+ .rlen = 32,
+ }, {
+ .key = "\xff\xfe\xfd\xfc\xfb\xfa\xf9\xf8"
+ "\xf7\xf6\xf5\xf4\xf3\xf2\xf1\xf0"
+ "\x22\x22\x22\x22\x22\x22\x22\x22"
+ "\x22\x22\x22\x22\x22\x22\x22\x22",
+ .klen = 32,
+ .iv = "\x33\x33\x33\x33\x33\x00\x00\x00"
+ "\x00\x00\x00\x00\x00\x00\x00\x00",
+ .input = "\x96\xbc\x98\xee\x4b\x3d\xb7\x55"
+ "\xfb\xe2\x0f\x52\x9e\x35\x5e\x3c"
+ "\xab\xd2\xaf\xe2\x0c\xc6\x46\x77"
+ "\x4f\x32\x6a\xa5\x10\x75\xc9\x12",
+ .ilen = 32,
+ .result = "\x44\x44\x44\x44\x44\x44\x44\x44"
+ "\x44\x44\x44\x44\x44\x44\x44\x44"
+ "\x44\x44\x44\x44\x44\x44\x44\x44"
+ "\x44\x44\x44\x44\x44\x44\x44\x44",
+ .rlen = 32,
+ }, {
+ .key = "\x27\x18\x28\x18\x28\x45\x90\x45"
+ "\x23\x53\x60\x28\x74\x71\x35\x26"
+ "\x31\x41\x59\x26\x53\x58\x97\x93"
+ "\x23\x84\x62\x64\x33\x83\x27\x95",
+ .klen = 32,
+ .iv = "\x00\x00\x00\x00\x00\x00\x00\x00"
+ "\x00\x00\x00\x00\x00\x00\x00\x00",
+ .input = "\x53\x43\x2f\x73\x46\x14\xe8\xa0"
+ "\xd4\xf2\x41\x65\x6d\x95\xb9\x1d"
+ "\x12\xb5\x92\xd0\x5e\x8b\x85\x6d"
+ "\x88\xba\xb8\x89\x0d\x2d\x81\x2b"
+ "\x1a\x21\x28\x58\x3d\xcc\xb1\x5e"
+ "\x53\x1c\xa1\x12\xe1\x03\xb7\x34"
+ "\x37\xb8\x60\x28\x90\xcb\x16\x43"
+ "\x9d\x47\x8a\xc4\x4f\x47\x74\xa4"
+ "\x84\x9e\xb3\x3e\x5d\x4d\x95\x9b"
+ "\x43\xcb\xf7\x03\xa9\xe9\x9b\xf4"
+ "\xc5\x13\x3a\xa9\x2b\x18\xd8\xb6"
+ "\x58\xbd\x2a\xb9\x79\x4a\x8e\x99"
+ "\x9b\x57\x1a\x2e\xf8\x3e\xd2\x8b"
+ "\x52\x31\x1c\x45\x73\xb6\x12\x8f"
+ "\xda\xf0\xa1\x8d\x9f\xf4\xaa\x7b"
+ "\xc1\x6b\xbc\xe3\x4b\xda\xd8\x91"
+ "\xe3\x48\x73\x06\x08\xf6\x01\xf1"
+ "\x41\xba\x46\x08\xbf\x22\x3d\x74"
+ "\x44\x12\xbc\x43\x34\x66\x94\x68"
+ "\xb2\xb7\x56\x26\xdc\x7f\x1e\xd2"
+ "\xb2\xfc\x52\x50\xbf\x4c\x02\xf8"
+ "\xb5\xca\x2f\xe9\x00\xab\x28\x6c"
+ "\xbe\x9e\x27\x10\x2f\xea\x76\x70"
+ "\xfa\x18\x6d\x56\x35\x37\xc5\xc1"
+ "\x20\xef\xf4\xcb\x85\x05\x0f\xe2"
+ "\xe9\x0a\xff\x1e\xad\x0f\xb2\x99"
+ "\xc6\x4c\x8d\xd9\x97\xc6\x9e\x7c"
+ "\xa0\x19\x29\x1e\xe7\x23\xda\x41"
+ "\xe7\x18\x00\x24\xec\x02\xef\x96"
+ "\xec\x51\xcc\xe9\x7c\xbc\x00\xa9"
+ "\x99\x53\x78\x1d\xe4\xac\x9e\x42"
+ "\xf2\x2f\xc4\x75\x17\x42\xa2\x7b"
+ "\x0e\xc9\xf0\xd7\x79\xd9\x89\x33"
+ "\x50\x4c\xaa\x5c\xc3\x65\xd0\x0b"
+ "\xba\xb9\x5f\x94\x91\x07\xa3\xd2"
+ "\x8c\xe3\xfc\xea\x9a\xad\xa4\xc2"
+ "\x5a\xd9\x34\x99\x67\xb4\xbe\x75"
+ "\x30\x55\x9f\x6a\x32\x1d\xb9\x00"
+ "\xd3\x31\xac\xe9\x64\x62\x4c\x53"
+ "\xd0\xcc\x4a\x06\xe5\x14\x76\x9e"
+ "\xa4\x6c\x27\x2f\x93\x74\xca\x33"
+ "\x5e\xf2\xfa\x1b\x6e\xc5\x06\x85"
+ "\x1b\x39\xac\x53\xb2\xdd\x2e\xe7"
+ "\x76\x3a\x92\xcf\x17\x0a\x22\xb6"
+ "\x2f\x99\x28\xff\x93\xca\xd0\x66"
+ "\x9b\xe9\x0c\xe2\x93\xd3\x59\x1e"
+ "\x62\x9e\x54\x4d\xeb\x67\x70\x2c"
+ "\x78\x44\x5c\x0a\xcc\x9b\xf9\xb5"
+ "\x33\xe6\x84\x99\xcd\x32\xa6\x0c"
+ "\x54\x38\x11\x55\xda\x05\x0b\xc2"
+ "\xdc\x0a\x9b\x1e\xcb\xa1\x4e\x8a"
+ "\x66\x38\x8f\x68\xc5\x21\x0f\x5e"
+ "\x9e\x60\x3c\x46\x5e\xbe\x1a\xc1"
+ "\x61\x45\xc0\x23\x69\x90\xab\xb9"
+ "\x82\x43\xd6\x36\x8d\x7b\x88\x63"
+ "\x39\x70\x09\x5b\x12\x63\xc3\x7a"
+ "\xb1\x3e\xbf\x14\xf9\x8b\x1d\xb0"
+ "\x39\xb7\x3a\xbb\x93\x59\xf4\x8b"
+ "\x06\xbe\x89\xfb\x7b\xb8\xba\xf3"
+ "\x9a\x40\xca\x17\x7c\x5d\x4f\xef"
+ "\x7d\x8e\x0a\x93\x93\x87\xdf\xec"
+ "\xdc\xa6\x42\x7b\x92\x7b\xdd\x9c"
+ "\xed\x6c\x16\xb2\xce\x12\x70\x2d"
+ "\x41\xee\x69\x24\xc2\x95\xa0\x18",
+ .ilen = 512,
+ .result = "\x00\x01\x02\x03\x04\x05\x06\x07"
+ "\x08\x09\x0a\x0b\x0c\x0d\x0e\x0f"
+ "\x10\x11\x12\x13\x14\x15\x16\x17"
+ "\x18\x19\x1a\x1b\x1c\x1d\x1e\x1f"
+ "\x20\x21\x22\x23\x24\x25\x26\x27"
+ "\x28\x29\x2a\x2b\x2c\x2d\x2e\x2f"
+ "\x30\x31\x32\x33\x34\x35\x36\x37"
+ "\x38\x39\x3a\x3b\x3c\x3d\x3e\x3f"
+ "\x40\x41\x42\x43\x44\x45\x46\x47"
+ "\x48\x49\x4a\x4b\x4c\x4d\x4e\x4f"
+ "\x50\x51\x52\x53\x54\x55\x56\x57"
+ "\x58\x59\x5a\x5b\x5c\x5d\x5e\x5f"
+ "\x60\x61\x62\x63\x64\x65\x66\x67"
+ "\x68\x69\x6a\x6b\x6c\x6d\x6e\x6f"
+ "\x70\x71\x72\x73\x74\x75\x76\x77"
+ "\x78\x79\x7a\x7b\x7c\x7d\x7e\x7f"
+ "\x80\x81\x82\x83\x84\x85\x86\x87"
+ "\x88\x89\x8a\x8b\x8c\x8d\x8e\x8f"
+ "\x90\x91\x92\x93\x94\x95\x96\x97"
+ "\x98\x99\x9a\x9b\x9c\x9d\x9e\x9f"
+ "\xa0\xa1\xa2\xa3\xa4\xa5\xa6\xa7"
+ "\xa8\xa9\xaa\xab\xac\xad\xae\xaf"
+ "\xb0\xb1\xb2\xb3\xb4\xb5\xb6\xb7"
+ "\xb8\xb9\xba\xbb\xbc\xbd\xbe\xbf"
+ "\xc0\xc1\xc2\xc3\xc4\xc5\xc6\xc7"
+ "\xc8\xc9\xca\xcb\xcc\xcd\xce\xcf"
+ "\xd0\xd1\xd2\xd3\xd4\xd5\xd6\xd7"
+ "\xd8\xd9\xda\xdb\xdc\xdd\xde\xdf"
+ "\xe0\xe1\xe2\xe3\xe4\xe5\xe6\xe7"
+ "\xe8\xe9\xea\xeb\xec\xed\xee\xef"
+ "\xf0\xf1\xf2\xf3\xf4\xf5\xf6\xf7"
+ "\xf8\xf9\xfa\xfb\xfc\xfd\xfe\xff"
+ "\x00\x01\x02\x03\x04\x05\x06\x07"
+ "\x08\x09\x0a\x0b\x0c\x0d\x0e\x0f"
+ "\x10\x11\x12\x13\x14\x15\x16\x17"
+ "\x18\x19\x1a\x1b\x1c\x1d\x1e\x1f"
+ "\x20\x21\x22\x23\x24\x25\x26\x27"
+ "\x28\x29\x2a\x2b\x2c\x2d\x2e\x2f"
+ "\x30\x31\x32\x33\x34\x35\x36\x37"
+ "\x38\x39\x3a\x3b\x3c\x3d\x3e\x3f"
+ "\x40\x41\x42\x43\x44\x45\x46\x47"
+ "\x48\x49\x4a\x4b\x4c\x4d\x4e\x4f"
+ "\x50\x51\x52\x53\x54\x55\x56\x57"
+ "\x58\x59\x5a\x5b\x5c\x5d\x5e\x5f"
+ "\x60\x61\x62\x63\x64\x65\x66\x67"
+ "\x68\x69\x6a\x6b\x6c\x6d\x6e\x6f"
+ "\x70\x71\x72\x73\x74\x75\x76\x77"
+ "\x78\x79\x7a\x7b\x7c\x7d\x7e\x7f"
+ "\x80\x81\x82\x83\x84\x85\x86\x87"
+ "\x88\x89\x8a\x8b\x8c\x8d\x8e\x8f"
+ "\x90\x91\x92\x93\x94\x95\x96\x97"
+ "\x98\x99\x9a\x9b\x9c\x9d\x9e\x9f"
+ "\xa0\xa1\xa2\xa3\xa4\xa5\xa6\xa7"
+ "\xa8\xa9\xaa\xab\xac\xad\xae\xaf"
+ "\xb0\xb1\xb2\xb3\xb4\xb5\xb6\xb7"
+ "\xb8\xb9\xba\xbb\xbc\xbd\xbe\xbf"
+ "\xc0\xc1\xc2\xc3\xc4\xc5\xc6\xc7"
+ "\xc8\xc9\xca\xcb\xcc\xcd\xce\xcf"
+ "\xd0\xd1\xd2\xd3\xd4\xd5\xd6\xd7"
+ "\xd8\xd9\xda\xdb\xdc\xdd\xde\xdf"
+ "\xe0\xe1\xe2\xe3\xe4\xe5\xe6\xe7"
+ "\xe8\xe9\xea\xeb\xec\xed\xee\xef"
+ "\xf0\xf1\xf2\xf3\xf4\xf5\xf6\xf7"
+ "\xf8\xf9\xfa\xfb\xfc\xfd\xfe\xff",
+ .rlen = 512,
+ }, {
+ .key = "\x27\x18\x28\x18\x28\x45\x90\x45"
+ "\x23\x53\x60\x28\x74\x71\x35\x26"
+ "\x62\x49\x77\x57\x24\x70\x93\x69"
+ "\x99\x59\x57\x49\x66\x96\x76\x27"
+ "\x31\x41\x59\x26\x53\x58\x97\x93"
+ "\x23\x84\x62\x64\x33\x83\x27\x95"
+ "\x02\x88\x41\x97\x16\x93\x99\x37"
+ "\x51\x05\x82\x09\x74\x94\x45\x92",
+ .klen = 64,
+ .iv = "\xff\x00\x00\x00\x00\x00\x00\x00"
+ "\x00\x00\x00\x00\x00\x00\x00\x00",
+ .input = "\x15\xfb\x91\x93\xc1\x29\x04\xaf"
+ "\x1b\xaf\x23\x4e\x5b\xfd\x81\x86"
+ "\x76\xef\x3a\xef\xd7\x77\x36\x44"
+ "\xda\xcb\xbb\x2e\xa9\x59\x56\xba"
+ "\x31\x0c\x76\x3c\x2a\xe7\x6b\xba"
+ "\xb4\xbd\x03\xe0\xd9\xe2\x3f\xea"
+ "\xa0\x10\xf2\xa6\x0f\x1b\xf6\x75"
+ "\x45\x90\x58\xbb\xce\xd6\xdf\x82"
+ "\x96\x85\x64\x59\x00\x45\x19\x88"
+ "\x81\xa1\x83\xa5\x05\x2b\x22\x91"
+ "\xeb\x25\x24\xca\x9e\x15\x79\xff"
+ "\x66\xb5\xa2\xef\x22\x4a\xbd\xd5"
+ "\x80\x0d\x1e\x04\xd3\xb6\xd6\x79"
+ "\x17\x31\xfb\xe7\x48\xc7\x57\x56"
+ "\x03\x8f\x32\x5f\x8a\x08\x12\xbd"
+ "\xd5\xcb\xba\x48\x2c\x49\xa5\xcc"
+ "\xbb\xb6\x13\x1a\x43\x63\x57\x7c"
+ "\xa5\xfc\xc1\x4c\x87\xfe\xab\x4a"
+ "\x5e\x73\x42\x7c\x08\x04\x57\xf5"
+ "\x56\xe0\x4f\xe6\x1d\xe2\xcc\x51"
+ "\xdd\xb0\xc0\x3b\x23\xf3\x9e\x67"
+ "\x27\x7d\x4b\x18\x4f\xdd\x71\xe7"
+ "\x86\x6b\x71\x4f\xb1\xc4\x24\x44"
+ "\x3e\x8b\x7e\xac\xdf\x01\x7d\xf7"
+ "\x35\x01\xaf\x79\xfa\xeb\xd6\xae"
+ "\x50\x47\xe9\x7f\x3d\xdf\x3f\x46"
+ "\x99\x0d\xd0\x50\x1d\x5e\x4b\x6b"
+ "\x2c\xd8\xe1\x59\xb2\xb9\x53\x9c"
+ "\xa7\xc7\xba\xa9\xcf\xd6\xd5\xbe"
+ "\xd9\x70\xd5\x88\x4f\x05\x89\x76"
+ "\x5f\x95\x21\x12\x3f\x34\xe8\x4b"
+ "\x5a\xba\xcf\xd4\x7d\x61\x1c\x59"
+ "\xd9\xd2\xe8\x52\xb3\xa1\x64\x14"
+ "\x0b\x88\xf3\x96\x0d\xc0\x6a\xca"
+ "\xd0\xe4\x43\xe5\x1b\x1d\x35\xef"
+ "\xac\x4c\x09\x09\xf4\x2a\x4b\x85"
+ "\xd6\xf4\xb0\x4e\xbe\x49\x69\xf8"
+ "\x9b\xc9\xc4\x7f\xac\x9f\x11\x62"
+ "\xed\x6c\x93\x2c\x17\xc7\x13\xd0"
+ "\x8c\xfd\xdd\x89\x73\xcf\x27\x4f"
+ "\x19\x33\xf8\x72\xb7\x45\x91\x60"
+ "\xbe\x30\x7d\x06\x43\x70\x98\xcb"
+ "\x62\x12\x1d\x5a\x3d\xf2\x75\xe2"
+ "\x3f\x3c\x26\xb6\xea\x6d\xf8\xe9"
+ "\x1e\xad\x69\xae\x50\x50\xf6\x71"
+ "\xda\x33\xc0\x21\x30\x42\xf9\xcd"
+ "\x59\xb3\x5f\xde\xd5\xaa\x63\xe4"
+ "\xee\x53\xc4\xce\xcf\xc3\x23\xcd"
+ "\xd7\xf4\x40\x64\x35\x7e\x3a\x84"
+ "\x76\xca\x52\x3c\x27\xdd\xd3\xa4"
+ "\x1e\x9f\x80\x46\x37\xf6\x31\x29"
+ "\x6d\xe9\x51\x87\xe6\x7c\xf6\xda"
+ "\x98\x87\x61\x4c\x26\xc0\xdf\xd8"
+ "\xe1\xf4\x42\x6e\xbb\x30\xe1\xe8"
+ "\x48\x9d\x86\x0f\xea\x8f\x35\x74"
+ "\xaa\x49\x99\x81\xd4\x4b\xe8\x77"
+ "\x09\x27\xd2\x47\xae\xe7\x8e\x5e"
+ "\xbe\x57\x53\xf7\x61\x15\x89\xf9"
+ "\xe9\xcc\x51\xbc\x3e\x1a\x4c\x1b"
+ "\x4d\xbb\x62\xba\xb3\x1a\xd0\xdf"
+ "\x1c\xaa\x5c\x9f\x5b\x0f\x7b\xda"
+ "\x69\x67\x2f\x23\xf0\x91\xd8\x09"
+ "\x3f\xce\x51\x68\x6a\x20\x7c\x79"
+ "\x59\x9d\x0d\x9f\xac\xac\x98\xc0",
+ .ilen = 512,
+ .result = "\x00\x01\x02\x03\x04\x05\x06\x07"
+ "\x08\x09\x0a\x0b\x0c\x0d\x0e\x0f"
+ "\x10\x11\x12\x13\x14\x15\x16\x17"
+ "\x18\x19\x1a\x1b\x1c\x1d\x1e\x1f"
+ "\x20\x21\x22\x23\x24\x25\x26\x27"
+ "\x28\x29\x2a\x2b\x2c\x2d\x2e\x2f"
+ "\x30\x31\x32\x33\x34\x35\x36\x37"
+ "\x38\x39\x3a\x3b\x3c\x3d\x3e\x3f"
+ "\x40\x41\x42\x43\x44\x45\x46\x47"
+ "\x48\x49\x4a\x4b\x4c\x4d\x4e\x4f"
+ "\x50\x51\x52\x53\x54\x55\x56\x57"
+ "\x58\x59\x5a\x5b\x5c\x5d\x5e\x5f"
+ "\x60\x61\x62\x63\x64\x65\x66\x67"
+ "\x68\x69\x6a\x6b\x6c\x6d\x6e\x6f"
+ "\x70\x71\x72\x73\x74\x75\x76\x77"
+ "\x78\x79\x7a\x7b\x7c\x7d\x7e\x7f"
+ "\x80\x81\x82\x83\x84\x85\x86\x87"
+ "\x88\x89\x8a\x8b\x8c\x8d\x8e\x8f"
+ "\x90\x91\x92\x93\x94\x95\x96\x97"
+ "\x98\x99\x9a\x9b\x9c\x9d\x9e\x9f"
+ "\xa0\xa1\xa2\xa3\xa4\xa5\xa6\xa7"
+ "\xa8\xa9\xaa\xab\xac\xad\xae\xaf"
+ "\xb0\xb1\xb2\xb3\xb4\xb5\xb6\xb7"
+ "\xb8\xb9\xba\xbb\xbc\xbd\xbe\xbf"
+ "\xc0\xc1\xc2\xc3\xc4\xc5\xc6\xc7"
+ "\xc8\xc9\xca\xcb\xcc\xcd\xce\xcf"
+ "\xd0\xd1\xd2\xd3\xd4\xd5\xd6\xd7"
+ "\xd8\xd9\xda\xdb\xdc\xdd\xde\xdf"
+ "\xe0\xe1\xe2\xe3\xe4\xe5\xe6\xe7"
+ "\xe8\xe9\xea\xeb\xec\xed\xee\xef"
+ "\xf0\xf1\xf2\xf3\xf4\xf5\xf6\xf7"
+ "\xf8\xf9\xfa\xfb\xfc\xfd\xfe\xff"
+ "\x00\x01\x02\x03\x04\x05\x06\x07"
+ "\x08\x09\x0a\x0b\x0c\x0d\x0e\x0f"
+ "\x10\x11\x12\x13\x14\x15\x16\x17"
+ "\x18\x19\x1a\x1b\x1c\x1d\x1e\x1f"
+ "\x20\x21\x22\x23\x24\x25\x26\x27"
+ "\x28\x29\x2a\x2b\x2c\x2d\x2e\x2f"
+ "\x30\x31\x32\x33\x34\x35\x36\x37"
+ "\x38\x39\x3a\x3b\x3c\x3d\x3e\x3f"
+ "\x40\x41\x42\x43\x44\x45\x46\x47"
+ "\x48\x49\x4a\x4b\x4c\x4d\x4e\x4f"
+ "\x50\x51\x52\x53\x54\x55\x56\x57"
+ "\x58\x59\x5a\x5b\x5c\x5d\x5e\x5f"
+ "\x60\x61\x62\x63\x64\x65\x66\x67"
+ "\x68\x69\x6a\x6b\x6c\x6d\x6e\x6f"
+ "\x70\x71\x72\x73\x74\x75\x76\x77"
+ "\x78\x79\x7a\x7b\x7c\x7d\x7e\x7f"
+ "\x80\x81\x82\x83\x84\x85\x86\x87"
+ "\x88\x89\x8a\x8b\x8c\x8d\x8e\x8f"
+ "\x90\x91\x92\x93\x94\x95\x96\x97"
+ "\x98\x99\x9a\x9b\x9c\x9d\x9e\x9f"
+ "\xa0\xa1\xa2\xa3\xa4\xa5\xa6\xa7"
+ "\xa8\xa9\xaa\xab\xac\xad\xae\xaf"
+ "\xb0\xb1\xb2\xb3\xb4\xb5\xb6\xb7"
+ "\xb8\xb9\xba\xbb\xbc\xbd\xbe\xbf"
+ "\xc0\xc1\xc2\xc3\xc4\xc5\xc6\xc7"
+ "\xc8\xc9\xca\xcb\xcc\xcd\xce\xcf"
+ "\xd0\xd1\xd2\xd3\xd4\xd5\xd6\xd7"
+ "\xd8\xd9\xda\xdb\xdc\xdd\xde\xdf"
+ "\xe0\xe1\xe2\xe3\xe4\xe5\xe6\xe7"
+ "\xe8\xe9\xea\xeb\xec\xed\xee\xef"
+ "\xf0\xf1\xf2\xf3\xf4\xf5\xf6\xf7"
+ "\xf8\xf9\xfa\xfb\xfc\xfd\xfe\xff",
+ .rlen = 512,
+ .also_non_np = 1,
+ .np = 3,
+ .tap = { 512 - 20, 4, 16 },
+ }
+};
+
static const struct cipher_testvec speck64_enc_tv_template[] = {
{ /* Speck64/96 */
.key = "\x10\x11\x12\x13\x08\x09\x0a\x0b"
--
2.16.0.rc1.238.g530d649a79-goog

2018-02-08 00:11:14

by Eric Biggers

Subject: [PATCH 5/5] crypto: speck - add test vectors for Speck64-XTS

Add test vectors for Speck64-XTS, generated in userspace using C code.
The inputs were borrowed from the AES-XTS test vectors, with key lengths
adjusted.

xts-speck64-neon passes these tests. However, they aren't currently
applicable for the generic XTS template, as that only supports a 128-bit
block size.
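
The generator program itself is not included here; below is a minimal
standalone sketch of how such vectors could be produced. It assumes the
conventions of the generic implementation in patch 1 (little-endian
words, with x taken from the first half of each block), an 8-byte tweak
block, and the GF(2^64) reduction polynomial x^64 + x^4 + x^3 + x + 1
for the tweak multiplication -- these details are assumptions, not
necessarily what the actual generator did:

#include <stdint.h>
#include <stdio.h>

#define ROR32(v, n) (((v) >> (n)) | ((v) << (32 - (n))))
#define ROL32(v, n) (((v) << (n)) | ((v) >> (32 - (n))))

struct speck64_ctx { uint32_t rk[27]; int nrounds; };

static uint32_t le32(const uint8_t *p)
{
        return p[0] | p[1] << 8 | (uint32_t)p[2] << 16 | (uint32_t)p[3] << 24;
}

static void put_le32(uint8_t *p, uint32_t v)
{
        p[0] = v; p[1] = v >> 8; p[2] = v >> 16; p[3] = v >> 24;
}

static void speck64_round(uint32_t *x, uint32_t *y, uint32_t k)
{
        *x = ROR32(*x, 8); *x += *y; *x ^= k;
        *y = ROL32(*y, 3); *y ^= *x;
}

/* keylen is 12 (Speck64/96) or 16 (Speck64/128) */
static void speck64_setkey(struct speck64_ctx *c, const uint8_t *key, int keylen)
{
        uint32_t l[3], k = le32(key);
        int m = keylen / 4 - 1, i;

        for (i = 0; i < m; i++)
                l[i] = le32(key + 4 * (i + 1));
        c->nrounds = (m == 2) ? 26 : 27;
        for (i = 0; i < c->nrounds; i++) {
                c->rk[i] = k;
                speck64_round(&l[i % m], &k, i);
        }
}

static void speck64_encrypt(const struct speck64_ctx *c, uint8_t *out, const uint8_t *in)
{
        uint32_t x = le32(in), y = le32(in + 4);
        int i;

        for (i = 0; i < c->nrounds; i++)
                speck64_round(&x, &y, c->rk[i]);
        put_le32(out, x);
        put_le32(out + 4, y);
}

/* XTS-encrypt len bytes (a multiple of 8); iv is one 8-byte block */
static void speck64_xts_encrypt(const uint8_t *key, int klen, const uint8_t *iv,
                                uint8_t *out, const uint8_t *in, int len)
{
        struct speck64_ctx data, tweak;
        uint8_t t[8], buf[8];
        uint64_t tw;
        int i, j;

        speck64_setkey(&data, key, klen / 2);              /* first half: data key */
        speck64_setkey(&tweak, key + klen / 2, klen / 2);  /* second half: tweak key */
        speck64_encrypt(&tweak, t, iv);

        for (i = 0; i < len; i += 8) {
                for (j = 0; j < 8; j++)
                        buf[j] = in[i + j] ^ t[j];
                speck64_encrypt(&data, out + i, buf);
                for (j = 0; j < 8; j++)
                        out[i + j] ^= t[j];
                /* multiply the tweak by x in GF(2^64) (assumed polynomial) */
                tw = le32(t) | (uint64_t)le32(t + 4) << 32;
                tw = (tw << 1) ^ ((tw >> 63) ? 0x1B : 0);
                put_le32(t, (uint32_t)tw);
                put_le32(t + 4, (uint32_t)(tw >> 32));
        }
}

int main(void)
{
        /*
         * Attempt to reproduce the first vector above (all-zero key, IV,
         * and plaintext); whether it matches depends on the assumptions
         * stated in the comment at the top.
         */
        uint8_t key[24] = { 0 }, iv[8] = { 0 }, pt[32] = { 0 }, ct[32];
        int i;

        speck64_xts_encrypt(key, 24, iv, ct, pt, 32);
        for (i = 0; i < 32; i++)
                printf("%02x", ct[i]);
        printf("\n");
        return 0;
}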

Signed-off-by: Eric Biggers <[email protected]>
---
crypto/testmgr.c | 9 +
crypto/testmgr.h | 671 +++++++++++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 680 insertions(+)

diff --git a/crypto/testmgr.c b/crypto/testmgr.c
index 6583c11f0f0b..4b5fce3910f8 100644
--- a/crypto/testmgr.c
+++ b/crypto/testmgr.c
@@ -3584,6 +3584,15 @@ static const struct alg_test_desc alg_test_descs[] = {
.dec = __VECS(speck128_xts_dec_tv_template),
}
}
+ }, {
+ .alg = "xts(speck64)",
+ .test = alg_test_skcipher,
+ .suite = {
+ .cipher = {
+ .enc = __VECS(speck64_xts_enc_tv_template),
+ .dec = __VECS(speck64_xts_dec_tv_template),
+ }
+ }
}, {
.alg = "xts(twofish)",
.test = alg_test_skcipher,
diff --git a/crypto/testmgr.h b/crypto/testmgr.h
index 4c0fbf5cec75..80448d40cae0 100644
--- a/crypto/testmgr.h
+++ b/crypto/testmgr.h
@@ -15130,6 +15130,677 @@ static const struct cipher_testvec speck64_dec_tv_template[] = {
},
};

+/*
+ * Speck64-XTS test vectors, taken from the AES-XTS test vectors with the result
+ * recomputed with Speck64 as the cipher, and key lengths adjusted
+ */
+
+static const struct cipher_testvec speck64_xts_enc_tv_template[] = {
+ {
+ .key = "\x00\x00\x00\x00\x00\x00\x00\x00"
+ "\x00\x00\x00\x00\x00\x00\x00\x00"
+ "\x00\x00\x00\x00\x00\x00\x00\x00",
+ .klen = 24,
+ .iv = "\x00\x00\x00\x00\x00\x00\x00\x00"
+ "\x00\x00\x00\x00\x00\x00\x00\x00",
+ .input = "\x00\x00\x00\x00\x00\x00\x00\x00"
+ "\x00\x00\x00\x00\x00\x00\x00\x00"
+ "\x00\x00\x00\x00\x00\x00\x00\x00"
+ "\x00\x00\x00\x00\x00\x00\x00\x00",
+ .ilen = 32,
+ .result = "\x19\xd4\x7c\xa6\x84\xaf\x54\x07"
+ "\xab\x2b\xbb\x4a\x14\x85\x84\x8b"
+ "\xa7\xf3\x8d\x73\xd8\x8d\x15\xf2"
+ "\x1b\x80\xd6\xf7\xd7\x9f\x6b\x09",
+ .rlen = 32,
+ }, {
+ .key = "\x11\x11\x11\x11\x11\x11\x11\x11"
+ "\x11\x11\x11\x11\x11\x11\x11\x11"
+ "\x22\x22\x22\x22\x22\x22\x22\x22",
+ .klen = 24,
+ .iv = "\x33\x33\x33\x33\x33\x00\x00\x00"
+ "\x00\x00\x00\x00\x00\x00\x00\x00",
+ .input = "\x44\x44\x44\x44\x44\x44\x44\x44"
+ "\x44\x44\x44\x44\x44\x44\x44\x44"
+ "\x44\x44\x44\x44\x44\x44\x44\x44"
+ "\x44\x44\x44\x44\x44\x44\x44\x44",
+ .ilen = 32,
+ .result = "\x90\x76\x22\x16\x71\xbb\x7e\x7f"
+ "\x1a\xea\x2a\x0a\x7b\x64\xe5\x00"
+ "\xe4\xcc\xea\x57\xd7\xbd\xc1\xd4"
+ "\xf6\x00\xb0\x7d\xe7\x89\xc1\xd0",
+ .rlen = 32,
+ }, {
+ .key = "\xff\xfe\xfd\xfc\xfb\xfa\xf9\xf8"
+ "\xf7\xf6\xf5\xf4\xf3\xf2\xf1\xf0"
+ "\x22\x22\x22\x22\x22\x22\x22\x22",
+ .klen = 24,
+ .iv = "\x33\x33\x33\x33\x33\x00\x00\x00"
+ "\x00\x00\x00\x00\x00\x00\x00\x00",
+ .input = "\x44\x44\x44\x44\x44\x44\x44\x44"
+ "\x44\x44\x44\x44\x44\x44\x44\x44"
+ "\x44\x44\x44\x44\x44\x44\x44\x44"
+ "\x44\x44\x44\x44\x44\x44\x44\x44",
+ .ilen = 32,
+ .result = "\x51\x85\xe6\x16\x75\x2b\x8e\xd5"
+ "\xf7\xac\xbf\xcc\xa0\x13\x34\xfe"
+ "\x3f\x2d\xff\x66\x78\x0d\x08\xad"
+ "\x57\x62\xcf\xdb\x08\xdb\x00\xa8",
+ .rlen = 32,
+ }, {
+ .key = "\x27\x18\x28\x18\x28\x45\x90\x45"
+ "\x23\x53\x60\x28\x74\x71\x35\x26"
+ "\x31\x41\x59\x26\x53\x58\x97\x93",
+ .klen = 24,
+ .iv = "\x00\x00\x00\x00\x00\x00\x00\x00"
+ "\x00\x00\x00\x00\x00\x00\x00\x00",
+ .input = "\x00\x01\x02\x03\x04\x05\x06\x07"
+ "\x08\x09\x0a\x0b\x0c\x0d\x0e\x0f"
+ "\x10\x11\x12\x13\x14\x15\x16\x17"
+ "\x18\x19\x1a\x1b\x1c\x1d\x1e\x1f"
+ "\x20\x21\x22\x23\x24\x25\x26\x27"
+ "\x28\x29\x2a\x2b\x2c\x2d\x2e\x2f"
+ "\x30\x31\x32\x33\x34\x35\x36\x37"
+ "\x38\x39\x3a\x3b\x3c\x3d\x3e\x3f"
+ "\x40\x41\x42\x43\x44\x45\x46\x47"
+ "\x48\x49\x4a\x4b\x4c\x4d\x4e\x4f"
+ "\x50\x51\x52\x53\x54\x55\x56\x57"
+ "\x58\x59\x5a\x5b\x5c\x5d\x5e\x5f"
+ "\x60\x61\x62\x63\x64\x65\x66\x67"
+ "\x68\x69\x6a\x6b\x6c\x6d\x6e\x6f"
+ "\x70\x71\x72\x73\x74\x75\x76\x77"
+ "\x78\x79\x7a\x7b\x7c\x7d\x7e\x7f"
+ "\x80\x81\x82\x83\x84\x85\x86\x87"
+ "\x88\x89\x8a\x8b\x8c\x8d\x8e\x8f"
+ "\x90\x91\x92\x93\x94\x95\x96\x97"
+ "\x98\x99\x9a\x9b\x9c\x9d\x9e\x9f"
+ "\xa0\xa1\xa2\xa3\xa4\xa5\xa6\xa7"
+ "\xa8\xa9\xaa\xab\xac\xad\xae\xaf"
+ "\xb0\xb1\xb2\xb3\xb4\xb5\xb6\xb7"
+ "\xb8\xb9\xba\xbb\xbc\xbd\xbe\xbf"
+ "\xc0\xc1\xc2\xc3\xc4\xc5\xc6\xc7"
+ "\xc8\xc9\xca\xcb\xcc\xcd\xce\xcf"
+ "\xd0\xd1\xd2\xd3\xd4\xd5\xd6\xd7"
+ "\xd8\xd9\xda\xdb\xdc\xdd\xde\xdf"
+ "\xe0\xe1\xe2\xe3\xe4\xe5\xe6\xe7"
+ "\xe8\xe9\xea\xeb\xec\xed\xee\xef"
+ "\xf0\xf1\xf2\xf3\xf4\xf5\xf6\xf7"
+ "\xf8\xf9\xfa\xfb\xfc\xfd\xfe\xff"
+ "\x00\x01\x02\x03\x04\x05\x06\x07"
+ "\x08\x09\x0a\x0b\x0c\x0d\x0e\x0f"
+ "\x10\x11\x12\x13\x14\x15\x16\x17"
+ "\x18\x19\x1a\x1b\x1c\x1d\x1e\x1f"
+ "\x20\x21\x22\x23\x24\x25\x26\x27"
+ "\x28\x29\x2a\x2b\x2c\x2d\x2e\x2f"
+ "\x30\x31\x32\x33\x34\x35\x36\x37"
+ "\x38\x39\x3a\x3b\x3c\x3d\x3e\x3f"
+ "\x40\x41\x42\x43\x44\x45\x46\x47"
+ "\x48\x49\x4a\x4b\x4c\x4d\x4e\x4f"
+ "\x50\x51\x52\x53\x54\x55\x56\x57"
+ "\x58\x59\x5a\x5b\x5c\x5d\x5e\x5f"
+ "\x60\x61\x62\x63\x64\x65\x66\x67"
+ "\x68\x69\x6a\x6b\x6c\x6d\x6e\x6f"
+ "\x70\x71\x72\x73\x74\x75\x76\x77"
+ "\x78\x79\x7a\x7b\x7c\x7d\x7e\x7f"
+ "\x80\x81\x82\x83\x84\x85\x86\x87"
+ "\x88\x89\x8a\x8b\x8c\x8d\x8e\x8f"
+ "\x90\x91\x92\x93\x94\x95\x96\x97"
+ "\x98\x99\x9a\x9b\x9c\x9d\x9e\x9f"
+ "\xa0\xa1\xa2\xa3\xa4\xa5\xa6\xa7"
+ "\xa8\xa9\xaa\xab\xac\xad\xae\xaf"
+ "\xb0\xb1\xb2\xb3\xb4\xb5\xb6\xb7"
+ "\xb8\xb9\xba\xbb\xbc\xbd\xbe\xbf"
+ "\xc0\xc1\xc2\xc3\xc4\xc5\xc6\xc7"
+ "\xc8\xc9\xca\xcb\xcc\xcd\xce\xcf"
+ "\xd0\xd1\xd2\xd3\xd4\xd5\xd6\xd7"
+ "\xd8\xd9\xda\xdb\xdc\xdd\xde\xdf"
+ "\xe0\xe1\xe2\xe3\xe4\xe5\xe6\xe7"
+ "\xe8\xe9\xea\xeb\xec\xed\xee\xef"
+ "\xf0\xf1\xf2\xf3\xf4\xf5\xf6\xf7"
+ "\xf8\xf9\xfa\xfb\xfc\xfd\xfe\xff",
+ .ilen = 512,
+ .result = "\x9c\x9a\x9a\xf9\x76\xb1\x95\x75"
+ "\x46\x2d\x16\xfb\x72\xab\xf2\x14"
+ "\x97\x65\x3e\xd7\x92\x66\xc5\x8f"
+ "\x2d\x2d\x38\x77\x96\x81\xdf\x83"
+ "\xe2\xe1\xd4\x71\x3b\x96\x99\x2d"
+ "\x2c\x92\x87\xdd\x13\x7e\xd0\x7b"
+ "\x12\x54\x0b\x32\xea\xae\x67\xfa"
+ "\x47\xa0\x6d\xe9\x3e\x4e\x8e\x06"
+ "\xbc\xff\xeb\x4b\x2e\x2a\x6c\xf0"
+ "\x50\x0d\x2c\x86\xa7\x3d\x16\xd5"
+ "\xde\x3b\x66\xb2\x21\x9d\xc4\xa1"
+ "\x66\x24\x93\xa2\xe9\x2d\xcd\xf4"
+ "\x40\xfe\x2a\x77\xc1\xe7\xb2\x3d"
+ "\x66\xb1\x69\x4b\x9b\x5a\xc8\x29"
+ "\xc7\x44\x21\x63\x58\x8f\xfa\xe5"
+ "\x98\xc9\x44\x42\x8d\xa9\xb8\xd4"
+ "\x58\xcd\xfe\x27\x3f\xdb\x7e\x62"
+ "\x97\x58\x07\x2f\x25\x07\x88\x6b"
+ "\xae\x9e\x51\xe6\xc3\xa8\x1c\x31"
+ "\x7d\xdd\x6f\x78\x78\xb7\x8c\x4b"
+ "\x90\xbe\x4e\xbe\xa5\xe0\xc4\xe7"
+ "\x90\x27\x58\x8b\x5c\x94\x86\x0a"
+ "\x57\xa5\xae\x32\xa3\x70\xa7\x5f"
+ "\x99\x25\xec\x6e\x77\x29\xa6\xdb"
+ "\x96\xd6\x94\xb4\x3e\x6b\x86\x43"
+ "\x97\x95\x4c\xb5\x7a\x37\x77\x31"
+ "\x50\xa0\xec\xc3\x67\xb0\x45\x7f"
+ "\x2a\x23\x51\x6e\xc3\x92\x0b\x67"
+ "\x80\xbe\x92\x6c\xc0\xac\x29\xba"
+ "\x35\xd1\x8d\xb2\x3e\x57\xf1\x16"
+ "\x54\x01\xc4\xc4\x67\x8e\x31\xbb"
+ "\x63\xa0\x35\x30\xd9\xdd\x65\x5e"
+ "\x3a\xd7\x06\x02\x9b\x35\x93\x79"
+ "\x10\x14\xfa\x71\xb9\xc3\xb5\xb8"
+ "\xe3\xf0\x68\xfd\x2a\x57\xe8\x89"
+ "\x48\xd9\x87\xe9\x28\x81\x66\x53"
+ "\xab\xa1\xfe\xc5\x9b\xdf\xd9\x6e"
+ "\xc9\x61\xf3\x19\x13\x3a\xd6\x2f"
+ "\x2b\xa6\xbd\xae\x6c\x74\x64\x6a"
+ "\x31\x49\xee\x7e\xb5\xfa\x10\x7c"
+ "\x85\x1e\x7d\x9b\x92\x10\x27\x55"
+ "\x2c\x15\x58\x9c\x7f\x91\xaa\x02"
+ "\x82\x64\xe6\xaa\x5e\x31\xe5\x7d"
+ "\xd8\xb2\x15\x11\xfa\x8c\x3e\x6f"
+ "\x6c\x19\x99\xe0\x9d\x11\x6e\x9f"
+ "\xcc\xea\x71\x3b\x13\x4b\x0c\x8d"
+ "\x61\x76\x91\xab\xd3\x10\x23\x7d"
+ "\x03\x29\x87\x50\xb9\x3e\xbb\x90"
+ "\x10\x4a\x7d\x57\xa7\xd9\x5d\xce"
+ "\x40\x6b\xb8\x61\xdd\xe5\xaf\x68"
+ "\xef\x82\x68\x4b\xa8\x3d\x55\xfe"
+ "\xf4\xc0\xe5\xaf\x46\xba\xdd\xd7"
+ "\x26\x80\x0b\x67\x1f\xca\xb9\xa6"
+ "\x16\x89\x21\x72\xbd\x1a\xc2\x74"
+ "\xaa\x4b\x2b\x9b\x3a\xc9\x23\xa4"
+ "\x73\x93\x48\x69\x72\x38\xae\x74"
+ "\x2d\xe5\x45\x5e\xa3\xa9\x39\xf8"
+ "\xaf\x9a\x87\x4b\x3b\xdc\xf2\x80"
+ "\x6e\x57\x5a\x4f\xfd\x58\x89\x9f"
+ "\x6f\x37\x4d\x26\xbb\x05\x6f\xd5"
+ "\x74\x83\x3e\x9c\x14\xba\x1a\x8b"
+ "\x62\x8a\xb2\x70\xf8\xc1\x1d\x8d"
+ "\x7d\xc5\x73\x77\x7a\x61\xb4\x7a"
+ "\x80\x57\x43\xfe\x78\xfc\x96\xb6",
+ .rlen = 512,
+ }, {
+ .key = "\x27\x18\x28\x18\x28\x45\x90\x45"
+ "\x23\x53\x60\x28\x74\x71\x35\x26"
+ "\x62\x49\x77\x57\x24\x70\x93\x69"
+ "\x99\x59\x57\x49\x66\x96\x76\x27",
+ .klen = 32,
+ .iv = "\xff\x00\x00\x00\x00\x00\x00\x00"
+ "\x00\x00\x00\x00\x00\x00\x00\x00",
+ .input = "\x00\x01\x02\x03\x04\x05\x06\x07"
+ "\x08\x09\x0a\x0b\x0c\x0d\x0e\x0f"
+ "\x10\x11\x12\x13\x14\x15\x16\x17"
+ "\x18\x19\x1a\x1b\x1c\x1d\x1e\x1f"
+ "\x20\x21\x22\x23\x24\x25\x26\x27"
+ "\x28\x29\x2a\x2b\x2c\x2d\x2e\x2f"
+ "\x30\x31\x32\x33\x34\x35\x36\x37"
+ "\x38\x39\x3a\x3b\x3c\x3d\x3e\x3f"
+ "\x40\x41\x42\x43\x44\x45\x46\x47"
+ "\x48\x49\x4a\x4b\x4c\x4d\x4e\x4f"
+ "\x50\x51\x52\x53\x54\x55\x56\x57"
+ "\x58\x59\x5a\x5b\x5c\x5d\x5e\x5f"
+ "\x60\x61\x62\x63\x64\x65\x66\x67"
+ "\x68\x69\x6a\x6b\x6c\x6d\x6e\x6f"
+ "\x70\x71\x72\x73\x74\x75\x76\x77"
+ "\x78\x79\x7a\x7b\x7c\x7d\x7e\x7f"
+ "\x80\x81\x82\x83\x84\x85\x86\x87"
+ "\x88\x89\x8a\x8b\x8c\x8d\x8e\x8f"
+ "\x90\x91\x92\x93\x94\x95\x96\x97"
+ "\x98\x99\x9a\x9b\x9c\x9d\x9e\x9f"
+ "\xa0\xa1\xa2\xa3\xa4\xa5\xa6\xa7"
+ "\xa8\xa9\xaa\xab\xac\xad\xae\xaf"
+ "\xb0\xb1\xb2\xb3\xb4\xb5\xb6\xb7"
+ "\xb8\xb9\xba\xbb\xbc\xbd\xbe\xbf"
+ "\xc0\xc1\xc2\xc3\xc4\xc5\xc6\xc7"
+ "\xc8\xc9\xca\xcb\xcc\xcd\xce\xcf"
+ "\xd0\xd1\xd2\xd3\xd4\xd5\xd6\xd7"
+ "\xd8\xd9\xda\xdb\xdc\xdd\xde\xdf"
+ "\xe0\xe1\xe2\xe3\xe4\xe5\xe6\xe7"
+ "\xe8\xe9\xea\xeb\xec\xed\xee\xef"
+ "\xf0\xf1\xf2\xf3\xf4\xf5\xf6\xf7"
+ "\xf8\xf9\xfa\xfb\xfc\xfd\xfe\xff"
+ "\x00\x01\x02\x03\x04\x05\x06\x07"
+ "\x08\x09\x0a\x0b\x0c\x0d\x0e\x0f"
+ "\x10\x11\x12\x13\x14\x15\x16\x17"
+ "\x18\x19\x1a\x1b\x1c\x1d\x1e\x1f"
+ "\x20\x21\x22\x23\x24\x25\x26\x27"
+ "\x28\x29\x2a\x2b\x2c\x2d\x2e\x2f"
+ "\x30\x31\x32\x33\x34\x35\x36\x37"
+ "\x38\x39\x3a\x3b\x3c\x3d\x3e\x3f"
+ "\x40\x41\x42\x43\x44\x45\x46\x47"
+ "\x48\x49\x4a\x4b\x4c\x4d\x4e\x4f"
+ "\x50\x51\x52\x53\x54\x55\x56\x57"
+ "\x58\x59\x5a\x5b\x5c\x5d\x5e\x5f"
+ "\x60\x61\x62\x63\x64\x65\x66\x67"
+ "\x68\x69\x6a\x6b\x6c\x6d\x6e\x6f"
+ "\x70\x71\x72\x73\x74\x75\x76\x77"
+ "\x78\x79\x7a\x7b\x7c\x7d\x7e\x7f"
+ "\x80\x81\x82\x83\x84\x85\x86\x87"
+ "\x88\x89\x8a\x8b\x8c\x8d\x8e\x8f"
+ "\x90\x91\x92\x93\x94\x95\x96\x97"
+ "\x98\x99\x9a\x9b\x9c\x9d\x9e\x9f"
+ "\xa0\xa1\xa2\xa3\xa4\xa5\xa6\xa7"
+ "\xa8\xa9\xaa\xab\xac\xad\xae\xaf"
+ "\xb0\xb1\xb2\xb3\xb4\xb5\xb6\xb7"
+ "\xb8\xb9\xba\xbb\xbc\xbd\xbe\xbf"
+ "\xc0\xc1\xc2\xc3\xc4\xc5\xc6\xc7"
+ "\xc8\xc9\xca\xcb\xcc\xcd\xce\xcf"
+ "\xd0\xd1\xd2\xd3\xd4\xd5\xd6\xd7"
+ "\xd8\xd9\xda\xdb\xdc\xdd\xde\xdf"
+ "\xe0\xe1\xe2\xe3\xe4\xe5\xe6\xe7"
+ "\xe8\xe9\xea\xeb\xec\xed\xee\xef"
+ "\xf0\xf1\xf2\xf3\xf4\xf5\xf6\xf7"
+ "\xf8\xf9\xfa\xfb\xfc\xfd\xfe\xff",
+ .ilen = 512,
+ .result = "\xac\x18\x5f\x9c\xe2\x40\x5c\x80"
+ "\xcb\x3d\x92\x20\xa3\xbd\x84\x4c"
+ "\xa3\xa7\xd1\x94\x53\x45\x7d\x87"
+ "\xa5\x0b\x7c\xaf\x0b\xf6\xdc\x66"
+ "\x93\xfa\x63\x45\x9e\xf3\xe3\x0e"
+ "\xe2\x34\x14\xac\xc4\xf5\xf1\xd1"
+ "\x52\xe1\x2f\xef\x41\x0b\x41\xba"
+ "\x20\x99\x44\xce\xbf\x29\x62\x16"
+ "\x41\x3c\x20\x21\x1d\xad\x14\x70"
+ "\xd4\xe5\xce\x4c\x2b\xe3\x6d\x06"
+ "\x44\x73\x55\xa8\x79\xd1\x4e\x42"
+ "\x0b\x6f\x66\x7f\xcc\xb9\x3b\x9b"
+ "\xb7\xa4\x99\x92\xfe\x67\xb1\xce"
+ "\xb5\x81\xcc\x34\x7f\x6f\xf9\x57"
+ "\x52\x26\x2a\x0f\xe9\x8b\x0b\x6d"
+ "\x36\xae\xd5\x07\x8c\x96\x4d\x63"
+ "\x65\xcf\x01\x4d\x64\x2f\x75\xbf"
+ "\xd4\xef\x8c\xff\x7f\xc9\x26\xef"
+ "\x1f\x55\x5b\x79\x46\x31\xc0\x2f"
+ "\xe7\x38\x1e\x5e\x97\x1c\x31\xc0"
+ "\x9d\x47\x13\xaf\x62\x0e\x71\x51"
+ "\xc3\xd3\x59\xa8\xee\x17\x07\x24"
+ "\x11\x07\xf0\x8e\xc9\xd8\x3c\xd4"
+ "\x5f\x24\xa0\x7b\x25\x51\x43\xc8"
+ "\xb2\xf2\x3d\xf7\x6b\xb2\x11\x64"
+ "\x46\x65\xe0\x59\xd8\x6c\xa4\x9c"
+ "\x2e\x42\xfc\x91\xd5\x32\xe0\x34"
+ "\x1b\xea\x5c\xf1\x0a\x99\x64\x3d"
+ "\xb6\xf0\xb6\x5e\x03\x55\x69\x5b"
+ "\x35\x37\x54\xda\x47\x7f\xc7\x4c"
+ "\xbb\xb5\x4b\x6f\xd9\x1f\x47\xcd"
+ "\xbe\x07\xce\xef\x30\x49\xce\xc1"
+ "\x3f\x16\x4d\x6d\xb6\x5c\x22\xf0"
+ "\xe2\x94\x96\x85\x4e\x6e\x8d\xba"
+ "\x81\x6e\x20\x9f\xca\x49\x9c\x67"
+ "\xce\xaa\x5f\x28\x52\x3a\x03\x62"
+ "\x84\x49\xa3\xfe\xbc\x6b\xb4\xb1"
+ "\x3e\x90\x03\xbf\x15\x6c\xbb\x6f"
+ "\xdd\xf0\x7b\xda\xf7\xef\x2d\x78"
+ "\x24\x84\x80\x91\x15\x37\xb0\x55"
+ "\x91\xe3\x28\xb1\x5f\x44\xfa\x60"
+ "\xa2\x02\xa4\xf3\x68\x9e\x61\x32"
+ "\x09\x34\x92\x3d\xa1\x6c\xc9\x7e"
+ "\xe9\x9f\x69\x4f\x96\x33\xd2\x3b"
+ "\x3e\x39\xec\xfc\x38\xbb\x4d\xbc"
+ "\x51\x19\x81\x5a\xa9\xf8\x64\x67"
+ "\x5c\xc2\xca\x2c\xcd\xa9\x1a\x64"
+ "\x32\x87\xf2\xfb\xb3\xf3\x74\xfc"
+ "\x69\xe1\x4b\x95\x3c\x8b\x8f\x3a"
+ "\xec\x7d\x51\xb5\xf2\x9c\x45\xfe"
+ "\x51\x0b\x14\xf0\x5e\x82\xd7\xfd"
+ "\x1e\x39\xae\x88\x91\xe8\x53\xf7"
+ "\x5d\x62\x02\xca\xef\x8d\x8b\x65"
+ "\x3e\xd0\xb9\x6d\xf5\x8e\x56\x8d"
+ "\x84\xc6\x57\xdb\x9a\x38\x18\x75"
+ "\xad\x0d\x0a\x42\x0c\x7a\x82\x0b"
+ "\xa3\x02\x84\x60\x84\xb0\x25\x3b"
+ "\xfe\x86\x3f\x05\x4c\x2b\xdf\x75"
+ "\x69\x54\x59\x29\x18\xec\x16\x78"
+ "\x42\x10\xce\xbd\x4e\xdd\xf2\xc0"
+ "\xf0\xe9\x88\xbb\x3d\xcc\x8a\x6f"
+ "\xf2\x44\x6e\x90\xb1\x68\x55\xa8"
+ "\x6c\x73\x55\xaf\xc5\xf6\xe0\x2c"
+ "\xf5\xd4\x33\x80\x2b\x00\x23\x40",
+ .rlen = 512,
+ .also_non_np = 1,
+ .np = 3,
+ .tap = { 512 - 20, 4, 16 },
+ }
+};
+
+static const struct cipher_testvec speck64_xts_dec_tv_template[] = {
+ {
+ .key = "\x00\x00\x00\x00\x00\x00\x00\x00"
+ "\x00\x00\x00\x00\x00\x00\x00\x00"
+ "\x00\x00\x00\x00\x00\x00\x00\x00",
+ .klen = 24,
+ .iv = "\x00\x00\x00\x00\x00\x00\x00\x00"
+ "\x00\x00\x00\x00\x00\x00\x00\x00",
+ .input = "\x19\xd4\x7c\xa6\x84\xaf\x54\x07"
+ "\xab\x2b\xbb\x4a\x14\x85\x84\x8b"
+ "\xa7\xf3\x8d\x73\xd8\x8d\x15\xf2"
+ "\x1b\x80\xd6\xf7\xd7\x9f\x6b\x09",
+ .ilen = 32,
+ .result = "\x00\x00\x00\x00\x00\x00\x00\x00"
+ "\x00\x00\x00\x00\x00\x00\x00\x00"
+ "\x00\x00\x00\x00\x00\x00\x00\x00"
+ "\x00\x00\x00\x00\x00\x00\x00\x00",
+ .rlen = 32,
+ }, {
+ .key = "\x11\x11\x11\x11\x11\x11\x11\x11"
+ "\x11\x11\x11\x11\x11\x11\x11\x11"
+ "\x22\x22\x22\x22\x22\x22\x22\x22",
+ .klen = 24,
+ .iv = "\x33\x33\x33\x33\x33\x00\x00\x00"
+ "\x00\x00\x00\x00\x00\x00\x00\x00",
+ .input = "\x90\x76\x22\x16\x71\xbb\x7e\x7f"
+ "\x1a\xea\x2a\x0a\x7b\x64\xe5\x00"
+ "\xe4\xcc\xea\x57\xd7\xbd\xc1\xd4"
+ "\xf6\x00\xb0\x7d\xe7\x89\xc1\xd0",
+ .ilen = 32,
+ .result = "\x44\x44\x44\x44\x44\x44\x44\x44"
+ "\x44\x44\x44\x44\x44\x44\x44\x44"
+ "\x44\x44\x44\x44\x44\x44\x44\x44"
+ "\x44\x44\x44\x44\x44\x44\x44\x44",
+ .rlen = 32,
+ }, {
+ .key = "\xff\xfe\xfd\xfc\xfb\xfa\xf9\xf8"
+ "\xf7\xf6\xf5\xf4\xf3\xf2\xf1\xf0"
+ "\x22\x22\x22\x22\x22\x22\x22\x22",
+ .klen = 24,
+ .iv = "\x33\x33\x33\x33\x33\x00\x00\x00"
+ "\x00\x00\x00\x00\x00\x00\x00\x00",
+ .input = "\x51\x85\xe6\x16\x75\x2b\x8e\xd5"
+ "\xf7\xac\xbf\xcc\xa0\x13\x34\xfe"
+ "\x3f\x2d\xff\x66\x78\x0d\x08\xad"
+ "\x57\x62\xcf\xdb\x08\xdb\x00\xa8",
+ .ilen = 32,
+ .result = "\x44\x44\x44\x44\x44\x44\x44\x44"
+ "\x44\x44\x44\x44\x44\x44\x44\x44"
+ "\x44\x44\x44\x44\x44\x44\x44\x44"
+ "\x44\x44\x44\x44\x44\x44\x44\x44",
+ .rlen = 32,
+ }, {
+ .key = "\x27\x18\x28\x18\x28\x45\x90\x45"
+ "\x23\x53\x60\x28\x74\x71\x35\x26"
+ "\x31\x41\x59\x26\x53\x58\x97\x93",
+ .klen = 24,
+ .iv = "\x00\x00\x00\x00\x00\x00\x00\x00"
+ "\x00\x00\x00\x00\x00\x00\x00\x00",
+ .input = "\x9c\x9a\x9a\xf9\x76\xb1\x95\x75"
+ "\x46\x2d\x16\xfb\x72\xab\xf2\x14"
+ "\x97\x65\x3e\xd7\x92\x66\xc5\x8f"
+ "\x2d\x2d\x38\x77\x96\x81\xdf\x83"
+ "\xe2\xe1\xd4\x71\x3b\x96\x99\x2d"
+ "\x2c\x92\x87\xdd\x13\x7e\xd0\x7b"
+ "\x12\x54\x0b\x32\xea\xae\x67\xfa"
+ "\x47\xa0\x6d\xe9\x3e\x4e\x8e\x06"
+ "\xbc\xff\xeb\x4b\x2e\x2a\x6c\xf0"
+ "\x50\x0d\x2c\x86\xa7\x3d\x16\xd5"
+ "\xde\x3b\x66\xb2\x21\x9d\xc4\xa1"
+ "\x66\x24\x93\xa2\xe9\x2d\xcd\xf4"
+ "\x40\xfe\x2a\x77\xc1\xe7\xb2\x3d"
+ "\x66\xb1\x69\x4b\x9b\x5a\xc8\x29"
+ "\xc7\x44\x21\x63\x58\x8f\xfa\xe5"
+ "\x98\xc9\x44\x42\x8d\xa9\xb8\xd4"
+ "\x58\xcd\xfe\x27\x3f\xdb\x7e\x62"
+ "\x97\x58\x07\x2f\x25\x07\x88\x6b"
+ "\xae\x9e\x51\xe6\xc3\xa8\x1c\x31"
+ "\x7d\xdd\x6f\x78\x78\xb7\x8c\x4b"
+ "\x90\xbe\x4e\xbe\xa5\xe0\xc4\xe7"
+ "\x90\x27\x58\x8b\x5c\x94\x86\x0a"
+ "\x57\xa5\xae\x32\xa3\x70\xa7\x5f"
+ "\x99\x25\xec\x6e\x77\x29\xa6\xdb"
+ "\x96\xd6\x94\xb4\x3e\x6b\x86\x43"
+ "\x97\x95\x4c\xb5\x7a\x37\x77\x31"
+ "\x50\xa0\xec\xc3\x67\xb0\x45\x7f"
+ "\x2a\x23\x51\x6e\xc3\x92\x0b\x67"
+ "\x80\xbe\x92\x6c\xc0\xac\x29\xba"
+ "\x35\xd1\x8d\xb2\x3e\x57\xf1\x16"
+ "\x54\x01\xc4\xc4\x67\x8e\x31\xbb"
+ "\x63\xa0\x35\x30\xd9\xdd\x65\x5e"
+ "\x3a\xd7\x06\x02\x9b\x35\x93\x79"
+ "\x10\x14\xfa\x71\xb9\xc3\xb5\xb8"
+ "\xe3\xf0\x68\xfd\x2a\x57\xe8\x89"
+ "\x48\xd9\x87\xe9\x28\x81\x66\x53"
+ "\xab\xa1\xfe\xc5\x9b\xdf\xd9\x6e"
+ "\xc9\x61\xf3\x19\x13\x3a\xd6\x2f"
+ "\x2b\xa6\xbd\xae\x6c\x74\x64\x6a"
+ "\x31\x49\xee\x7e\xb5\xfa\x10\x7c"
+ "\x85\x1e\x7d\x9b\x92\x10\x27\x55"
+ "\x2c\x15\x58\x9c\x7f\x91\xaa\x02"
+ "\x82\x64\xe6\xaa\x5e\x31\xe5\x7d"
+ "\xd8\xb2\x15\x11\xfa\x8c\x3e\x6f"
+ "\x6c\x19\x99\xe0\x9d\x11\x6e\x9f"
+ "\xcc\xea\x71\x3b\x13\x4b\x0c\x8d"
+ "\x61\x76\x91\xab\xd3\x10\x23\x7d"
+ "\x03\x29\x87\x50\xb9\x3e\xbb\x90"
+ "\x10\x4a\x7d\x57\xa7\xd9\x5d\xce"
+ "\x40\x6b\xb8\x61\xdd\xe5\xaf\x68"
+ "\xef\x82\x68\x4b\xa8\x3d\x55\xfe"
+ "\xf4\xc0\xe5\xaf\x46\xba\xdd\xd7"
+ "\x26\x80\x0b\x67\x1f\xca\xb9\xa6"
+ "\x16\x89\x21\x72\xbd\x1a\xc2\x74"
+ "\xaa\x4b\x2b\x9b\x3a\xc9\x23\xa4"
+ "\x73\x93\x48\x69\x72\x38\xae\x74"
+ "\x2d\xe5\x45\x5e\xa3\xa9\x39\xf8"
+ "\xaf\x9a\x87\x4b\x3b\xdc\xf2\x80"
+ "\x6e\x57\x5a\x4f\xfd\x58\x89\x9f"
+ "\x6f\x37\x4d\x26\xbb\x05\x6f\xd5"
+ "\x74\x83\x3e\x9c\x14\xba\x1a\x8b"
+ "\x62\x8a\xb2\x70\xf8\xc1\x1d\x8d"
+ "\x7d\xc5\x73\x77\x7a\x61\xb4\x7a"
+ "\x80\x57\x43\xfe\x78\xfc\x96\xb6",
+ .ilen = 512,
+ .result = "\x00\x01\x02\x03\x04\x05\x06\x07"
+ "\x08\x09\x0a\x0b\x0c\x0d\x0e\x0f"
+ "\x10\x11\x12\x13\x14\x15\x16\x17"
+ "\x18\x19\x1a\x1b\x1c\x1d\x1e\x1f"
+ "\x20\x21\x22\x23\x24\x25\x26\x27"
+ "\x28\x29\x2a\x2b\x2c\x2d\x2e\x2f"
+ "\x30\x31\x32\x33\x34\x35\x36\x37"
+ "\x38\x39\x3a\x3b\x3c\x3d\x3e\x3f"
+ "\x40\x41\x42\x43\x44\x45\x46\x47"
+ "\x48\x49\x4a\x4b\x4c\x4d\x4e\x4f"
+ "\x50\x51\x52\x53\x54\x55\x56\x57"
+ "\x58\x59\x5a\x5b\x5c\x5d\x5e\x5f"
+ "\x60\x61\x62\x63\x64\x65\x66\x67"
+ "\x68\x69\x6a\x6b\x6c\x6d\x6e\x6f"
+ "\x70\x71\x72\x73\x74\x75\x76\x77"
+ "\x78\x79\x7a\x7b\x7c\x7d\x7e\x7f"
+ "\x80\x81\x82\x83\x84\x85\x86\x87"
+ "\x88\x89\x8a\x8b\x8c\x8d\x8e\x8f"
+ "\x90\x91\x92\x93\x94\x95\x96\x97"
+ "\x98\x99\x9a\x9b\x9c\x9d\x9e\x9f"
+ "\xa0\xa1\xa2\xa3\xa4\xa5\xa6\xa7"
+ "\xa8\xa9\xaa\xab\xac\xad\xae\xaf"
+ "\xb0\xb1\xb2\xb3\xb4\xb5\xb6\xb7"
+ "\xb8\xb9\xba\xbb\xbc\xbd\xbe\xbf"
+ "\xc0\xc1\xc2\xc3\xc4\xc5\xc6\xc7"
+ "\xc8\xc9\xca\xcb\xcc\xcd\xce\xcf"
+ "\xd0\xd1\xd2\xd3\xd4\xd5\xd6\xd7"
+ "\xd8\xd9\xda\xdb\xdc\xdd\xde\xdf"
+ "\xe0\xe1\xe2\xe3\xe4\xe5\xe6\xe7"
+ "\xe8\xe9\xea\xeb\xec\xed\xee\xef"
+ "\xf0\xf1\xf2\xf3\xf4\xf5\xf6\xf7"
+ "\xf8\xf9\xfa\xfb\xfc\xfd\xfe\xff"
+ "\x00\x01\x02\x03\x04\x05\x06\x07"
+ "\x08\x09\x0a\x0b\x0c\x0d\x0e\x0f"
+ "\x10\x11\x12\x13\x14\x15\x16\x17"
+ "\x18\x19\x1a\x1b\x1c\x1d\x1e\x1f"
+ "\x20\x21\x22\x23\x24\x25\x26\x27"
+ "\x28\x29\x2a\x2b\x2c\x2d\x2e\x2f"
+ "\x30\x31\x32\x33\x34\x35\x36\x37"
+ "\x38\x39\x3a\x3b\x3c\x3d\x3e\x3f"
+ "\x40\x41\x42\x43\x44\x45\x46\x47"
+ "\x48\x49\x4a\x4b\x4c\x4d\x4e\x4f"
+ "\x50\x51\x52\x53\x54\x55\x56\x57"
+ "\x58\x59\x5a\x5b\x5c\x5d\x5e\x5f"
+ "\x60\x61\x62\x63\x64\x65\x66\x67"
+ "\x68\x69\x6a\x6b\x6c\x6d\x6e\x6f"
+ "\x70\x71\x72\x73\x74\x75\x76\x77"
+ "\x78\x79\x7a\x7b\x7c\x7d\x7e\x7f"
+ "\x80\x81\x82\x83\x84\x85\x86\x87"
+ "\x88\x89\x8a\x8b\x8c\x8d\x8e\x8f"
+ "\x90\x91\x92\x93\x94\x95\x96\x97"
+ "\x98\x99\x9a\x9b\x9c\x9d\x9e\x9f"
+ "\xa0\xa1\xa2\xa3\xa4\xa5\xa6\xa7"
+ "\xa8\xa9\xaa\xab\xac\xad\xae\xaf"
+ "\xb0\xb1\xb2\xb3\xb4\xb5\xb6\xb7"
+ "\xb8\xb9\xba\xbb\xbc\xbd\xbe\xbf"
+ "\xc0\xc1\xc2\xc3\xc4\xc5\xc6\xc7"
+ "\xc8\xc9\xca\xcb\xcc\xcd\xce\xcf"
+ "\xd0\xd1\xd2\xd3\xd4\xd5\xd6\xd7"
+ "\xd8\xd9\xda\xdb\xdc\xdd\xde\xdf"
+ "\xe0\xe1\xe2\xe3\xe4\xe5\xe6\xe7"
+ "\xe8\xe9\xea\xeb\xec\xed\xee\xef"
+ "\xf0\xf1\xf2\xf3\xf4\xf5\xf6\xf7"
+ "\xf8\xf9\xfa\xfb\xfc\xfd\xfe\xff",
+ .rlen = 512,
+ }, {
+ .key = "\x27\x18\x28\x18\x28\x45\x90\x45"
+ "\x23\x53\x60\x28\x74\x71\x35\x26"
+ "\x62\x49\x77\x57\x24\x70\x93\x69"
+ "\x99\x59\x57\x49\x66\x96\x76\x27",
+ .klen = 32,
+ .iv = "\xff\x00\x00\x00\x00\x00\x00\x00"
+ "\x00\x00\x00\x00\x00\x00\x00\x00",
+ .input = "\xac\x18\x5f\x9c\xe2\x40\x5c\x80"
+ "\xcb\x3d\x92\x20\xa3\xbd\x84\x4c"
+ "\xa3\xa7\xd1\x94\x53\x45\x7d\x87"
+ "\xa5\x0b\x7c\xaf\x0b\xf6\xdc\x66"
+ "\x93\xfa\x63\x45\x9e\xf3\xe3\x0e"
+ "\xe2\x34\x14\xac\xc4\xf5\xf1\xd1"
+ "\x52\xe1\x2f\xef\x41\x0b\x41\xba"
+ "\x20\x99\x44\xce\xbf\x29\x62\x16"
+ "\x41\x3c\x20\x21\x1d\xad\x14\x70"
+ "\xd4\xe5\xce\x4c\x2b\xe3\x6d\x06"
+ "\x44\x73\x55\xa8\x79\xd1\x4e\x42"
+ "\x0b\x6f\x66\x7f\xcc\xb9\x3b\x9b"
+ "\xb7\xa4\x99\x92\xfe\x67\xb1\xce"
+ "\xb5\x81\xcc\x34\x7f\x6f\xf9\x57"
+ "\x52\x26\x2a\x0f\xe9\x8b\x0b\x6d"
+ "\x36\xae\xd5\x07\x8c\x96\x4d\x63"
+ "\x65\xcf\x01\x4d\x64\x2f\x75\xbf"
+ "\xd4\xef\x8c\xff\x7f\xc9\x26\xef"
+ "\x1f\x55\x5b\x79\x46\x31\xc0\x2f"
+ "\xe7\x38\x1e\x5e\x97\x1c\x31\xc0"
+ "\x9d\x47\x13\xaf\x62\x0e\x71\x51"
+ "\xc3\xd3\x59\xa8\xee\x17\x07\x24"
+ "\x11\x07\xf0\x8e\xc9\xd8\x3c\xd4"
+ "\x5f\x24\xa0\x7b\x25\x51\x43\xc8"
+ "\xb2\xf2\x3d\xf7\x6b\xb2\x11\x64"
+ "\x46\x65\xe0\x59\xd8\x6c\xa4\x9c"
+ "\x2e\x42\xfc\x91\xd5\x32\xe0\x34"
+ "\x1b\xea\x5c\xf1\x0a\x99\x64\x3d"
+ "\xb6\xf0\xb6\x5e\x03\x55\x69\x5b"
+ "\x35\x37\x54\xda\x47\x7f\xc7\x4c"
+ "\xbb\xb5\x4b\x6f\xd9\x1f\x47\xcd"
+ "\xbe\x07\xce\xef\x30\x49\xce\xc1"
+ "\x3f\x16\x4d\x6d\xb6\x5c\x22\xf0"
+ "\xe2\x94\x96\x85\x4e\x6e\x8d\xba"
+ "\x81\x6e\x20\x9f\xca\x49\x9c\x67"
+ "\xce\xaa\x5f\x28\x52\x3a\x03\x62"
+ "\x84\x49\xa3\xfe\xbc\x6b\xb4\xb1"
+ "\x3e\x90\x03\xbf\x15\x6c\xbb\x6f"
+ "\xdd\xf0\x7b\xda\xf7\xef\x2d\x78"
+ "\x24\x84\x80\x91\x15\x37\xb0\x55"
+ "\x91\xe3\x28\xb1\x5f\x44\xfa\x60"
+ "\xa2\x02\xa4\xf3\x68\x9e\x61\x32"
+ "\x09\x34\x92\x3d\xa1\x6c\xc9\x7e"
+ "\xe9\x9f\x69\x4f\x96\x33\xd2\x3b"
+ "\x3e\x39\xec\xfc\x38\xbb\x4d\xbc"
+ "\x51\x19\x81\x5a\xa9\xf8\x64\x67"
+ "\x5c\xc2\xca\x2c\xcd\xa9\x1a\x64"
+ "\x32\x87\xf2\xfb\xb3\xf3\x74\xfc"
+ "\x69\xe1\x4b\x95\x3c\x8b\x8f\x3a"
+ "\xec\x7d\x51\xb5\xf2\x9c\x45\xfe"
+ "\x51\x0b\x14\xf0\x5e\x82\xd7\xfd"
+ "\x1e\x39\xae\x88\x91\xe8\x53\xf7"
+ "\x5d\x62\x02\xca\xef\x8d\x8b\x65"
+ "\x3e\xd0\xb9\x6d\xf5\x8e\x56\x8d"
+ "\x84\xc6\x57\xdb\x9a\x38\x18\x75"
+ "\xad\x0d\x0a\x42\x0c\x7a\x82\x0b"
+ "\xa3\x02\x84\x60\x84\xb0\x25\x3b"
+ "\xfe\x86\x3f\x05\x4c\x2b\xdf\x75"
+ "\x69\x54\x59\x29\x18\xec\x16\x78"
+ "\x42\x10\xce\xbd\x4e\xdd\xf2\xc0"
+ "\xf0\xe9\x88\xbb\x3d\xcc\x8a\x6f"
+ "\xf2\x44\x6e\x90\xb1\x68\x55\xa8"
+ "\x6c\x73\x55\xaf\xc5\xf6\xe0\x2c"
+ "\xf5\xd4\x33\x80\x2b\x00\x23\x40",
+ .ilen = 512,
+ .result = "\x00\x01\x02\x03\x04\x05\x06\x07"
+ "\x08\x09\x0a\x0b\x0c\x0d\x0e\x0f"
+ "\x10\x11\x12\x13\x14\x15\x16\x17"
+ "\x18\x19\x1a\x1b\x1c\x1d\x1e\x1f"
+ "\x20\x21\x22\x23\x24\x25\x26\x27"
+ "\x28\x29\x2a\x2b\x2c\x2d\x2e\x2f"
+ "\x30\x31\x32\x33\x34\x35\x36\x37"
+ "\x38\x39\x3a\x3b\x3c\x3d\x3e\x3f"
+ "\x40\x41\x42\x43\x44\x45\x46\x47"
+ "\x48\x49\x4a\x4b\x4c\x4d\x4e\x4f"
+ "\x50\x51\x52\x53\x54\x55\x56\x57"
+ "\x58\x59\x5a\x5b\x5c\x5d\x5e\x5f"
+ "\x60\x61\x62\x63\x64\x65\x66\x67"
+ "\x68\x69\x6a\x6b\x6c\x6d\x6e\x6f"
+ "\x70\x71\x72\x73\x74\x75\x76\x77"
+ "\x78\x79\x7a\x7b\x7c\x7d\x7e\x7f"
+ "\x80\x81\x82\x83\x84\x85\x86\x87"
+ "\x88\x89\x8a\x8b\x8c\x8d\x8e\x8f"
+ "\x90\x91\x92\x93\x94\x95\x96\x97"
+ "\x98\x99\x9a\x9b\x9c\x9d\x9e\x9f"
+ "\xa0\xa1\xa2\xa3\xa4\xa5\xa6\xa7"
+ "\xa8\xa9\xaa\xab\xac\xad\xae\xaf"
+ "\xb0\xb1\xb2\xb3\xb4\xb5\xb6\xb7"
+ "\xb8\xb9\xba\xbb\xbc\xbd\xbe\xbf"
+ "\xc0\xc1\xc2\xc3\xc4\xc5\xc6\xc7"
+ "\xc8\xc9\xca\xcb\xcc\xcd\xce\xcf"
+ "\xd0\xd1\xd2\xd3\xd4\xd5\xd6\xd7"
+ "\xd8\xd9\xda\xdb\xdc\xdd\xde\xdf"
+ "\xe0\xe1\xe2\xe3\xe4\xe5\xe6\xe7"
+ "\xe8\xe9\xea\xeb\xec\xed\xee\xef"
+ "\xf0\xf1\xf2\xf3\xf4\xf5\xf6\xf7"
+ "\xf8\xf9\xfa\xfb\xfc\xfd\xfe\xff"
+ "\x00\x01\x02\x03\x04\x05\x06\x07"
+ "\x08\x09\x0a\x0b\x0c\x0d\x0e\x0f"
+ "\x10\x11\x12\x13\x14\x15\x16\x17"
+ "\x18\x19\x1a\x1b\x1c\x1d\x1e\x1f"
+ "\x20\x21\x22\x23\x24\x25\x26\x27"
+ "\x28\x29\x2a\x2b\x2c\x2d\x2e\x2f"
+ "\x30\x31\x32\x33\x34\x35\x36\x37"
+ "\x38\x39\x3a\x3b\x3c\x3d\x3e\x3f"
+ "\x40\x41\x42\x43\x44\x45\x46\x47"
+ "\x48\x49\x4a\x4b\x4c\x4d\x4e\x4f"
+ "\x50\x51\x52\x53\x54\x55\x56\x57"
+ "\x58\x59\x5a\x5b\x5c\x5d\x5e\x5f"
+ "\x60\x61\x62\x63\x64\x65\x66\x67"
+ "\x68\x69\x6a\x6b\x6c\x6d\x6e\x6f"
+ "\x70\x71\x72\x73\x74\x75\x76\x77"
+ "\x78\x79\x7a\x7b\x7c\x7d\x7e\x7f"
+ "\x80\x81\x82\x83\x84\x85\x86\x87"
+ "\x88\x89\x8a\x8b\x8c\x8d\x8e\x8f"
+ "\x90\x91\x92\x93\x94\x95\x96\x97"
+ "\x98\x99\x9a\x9b\x9c\x9d\x9e\x9f"
+ "\xa0\xa1\xa2\xa3\xa4\xa5\xa6\xa7"
+ "\xa8\xa9\xaa\xab\xac\xad\xae\xaf"
+ "\xb0\xb1\xb2\xb3\xb4\xb5\xb6\xb7"
+ "\xb8\xb9\xba\xbb\xbc\xbd\xbe\xbf"
+ "\xc0\xc1\xc2\xc3\xc4\xc5\xc6\xc7"
+ "\xc8\xc9\xca\xcb\xcc\xcd\xce\xcf"
+ "\xd0\xd1\xd2\xd3\xd4\xd5\xd6\xd7"
+ "\xd8\xd9\xda\xdb\xdc\xdd\xde\xdf"
+ "\xe0\xe1\xe2\xe3\xe4\xe5\xe6\xe7"
+ "\xe8\xe9\xea\xeb\xec\xed\xee\xef"
+ "\xf0\xf1\xf2\xf3\xf4\xf5\xf6\xf7"
+ "\xf8\xf9\xfa\xfb\xfc\xfd\xfe\xff",
+ .rlen = 512,
+ .also_non_np = 1,
+ .np = 3,
+ .tap = { 512 - 20, 4, 16 },
+ }
+};
+
/* Cast6 test vectors from RFC 2612 */
static const struct cipher_testvec cast6_enc_tv_template[] = {
{
--
2.16.0.rc1.238.g530d649a79-goog

2018-02-08 00:11:05

by Eric Biggers

Subject: [PATCH 2/5] crypto: speck - export common helpers

Export the Speck constants and transform context and the ->setkey(),
->encrypt(), and ->decrypt() functions so that they can be reused by the
ARM NEON implementation of Speck-XTS. The generic key expansion code
will be reused because it is not performance-critical and is not
vectorizable, while the generic encryption and decryption functions are
needed as fallbacks and for the XTS tweak encryption.
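
As a rough sketch of the intended reuse, an XTS driver could look
something like the following. The context layout and function names
here are hypothetical, not the actual NEON glue code; only the
crypto_speck128_*() calls are the helpers exported by this patch:

#include <crypto/internal/skcipher.h>
#include <crypto/speck.h>
#include <crypto/xts.h>

struct speck128_xts_ctx {
        struct speck128_tfm_ctx main_key;       /* encrypts the data */
        struct speck128_tfm_ctx tweak_key;      /* encrypts the tweak */
};

static int speck128_xts_setkey(struct crypto_skcipher *tfm, const u8 *key,
                               unsigned int keylen)
{
        struct speck128_xts_ctx *ctx = crypto_skcipher_ctx(tfm);
        int err;

        err = xts_verify_key(tfm, key, keylen);
        if (err)
                return err;

        keylen /= 2;
        err = crypto_speck128_setkey(&ctx->main_key, key, keylen);
        if (err)
                return err;
        return crypto_speck128_setkey(&ctx->tweak_key, key + keylen, keylen);
}

/*
 * The per-request tweak is computed with the exported generic helper;
 * the bulk of the data would then go through the NEON routines, with
 * crypto_speck128_{en,de}crypt() as the non-NEON fallback.
 */
static void speck128_xts_init_tweak(const struct speck128_xts_ctx *ctx,
                                    u8 *tweak, const u8 *iv)
{
        crypto_speck128_encrypt(&ctx->tweak_key, tweak, iv);
}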

Signed-off-by: Eric Biggers <[email protected]>
---
crypto/speck.c | 90 +++++++++++++++++++++++++++-----------------------
include/crypto/speck.h | 62 ++++++++++++++++++++++++++++++++++
2 files changed, 111 insertions(+), 41 deletions(-)
create mode 100644 include/crypto/speck.h

diff --git a/crypto/speck.c b/crypto/speck.c
index 89860688bf00..c78c8a782b0c 100644
--- a/crypto/speck.c
+++ b/crypto/speck.c
@@ -19,6 +19,7 @@
*/

#include <asm/unaligned.h>
+#include <crypto/speck.h>
#include <linux/bitops.h>
#include <linux/crypto.h>
#include <linux/init.h>
@@ -26,22 +27,6 @@

/* Speck128 */

-#define SPECK128_BLOCK_SIZE 16
-
-#define SPECK128_128_KEY_SIZE 16
-#define SPECK128_128_NROUNDS 32
-
-#define SPECK128_192_KEY_SIZE 24
-#define SPECK128_192_NROUNDS 33
-
-#define SPECK128_256_KEY_SIZE 32
-#define SPECK128_256_NROUNDS 34
-
-struct speck128_tfm_ctx {
- u64 round_keys[SPECK128_256_NROUNDS];
- int nrounds;
-};
-
static __always_inline void speck128_round(u64 *x, u64 *y, u64 k)
{
*x = ror64(*x, 8);
@@ -60,9 +45,9 @@ static __always_inline void speck128_unround(u64 *x, u64 *y, u64 k)
*x = rol64(*x, 8);
}

-static void speck128_encrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in)
+void crypto_speck128_encrypt(const struct speck128_tfm_ctx *ctx,
+ u8 *out, const u8 *in)
{
- const struct speck128_tfm_ctx *ctx = crypto_tfm_ctx(tfm);
u64 x = get_unaligned_le64(in + 0);
u64 y = get_unaligned_le64(in + 8);
int i;
@@ -73,10 +58,16 @@ static void speck128_encrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in)
put_unaligned_le64(x, out + 0);
put_unaligned_le64(y, out + 8);
}
+EXPORT_SYMBOL_GPL(crypto_speck128_encrypt);

-static void speck128_decrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in)
+static void speck128_encrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in)
+{
+ crypto_speck128_encrypt(crypto_tfm_ctx(tfm), out, in);
+}
+
+void crypto_speck128_decrypt(const struct speck128_tfm_ctx *ctx,
+ u8 *out, const u8 *in)
{
- const struct speck128_tfm_ctx *ctx = crypto_tfm_ctx(tfm);
u64 x = get_unaligned_le64(in + 0);
u64 y = get_unaligned_le64(in + 8);
int i;
@@ -87,11 +78,16 @@ static void speck128_decrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in)
put_unaligned_le64(x, out + 0);
put_unaligned_le64(y, out + 8);
}
+EXPORT_SYMBOL_GPL(crypto_speck128_decrypt);

-static int speck128_setkey(struct crypto_tfm *tfm, const u8 *key,
+static void speck128_decrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in)
+{
+ crypto_speck128_decrypt(crypto_tfm_ctx(tfm), out, in);
+}
+
+int crypto_speck128_setkey(struct speck128_tfm_ctx *ctx, const u8 *key,
unsigned int keylen)
{
- struct speck128_tfm_ctx *ctx = crypto_tfm_ctx(tfm);
u64 l[3];
u64 k;
int i;
@@ -133,21 +129,15 @@ static int speck128_setkey(struct crypto_tfm *tfm, const u8 *key,

return 0;
}
+EXPORT_SYMBOL_GPL(crypto_speck128_setkey);

-/* Speck64 */
-
-#define SPECK64_BLOCK_SIZE 8
-
-#define SPECK64_96_KEY_SIZE 12
-#define SPECK64_96_NROUNDS 26
-
-#define SPECK64_128_KEY_SIZE 16
-#define SPECK64_128_NROUNDS 27
+static int speck128_setkey(struct crypto_tfm *tfm, const u8 *key,
+ unsigned int keylen)
+{
+ return crypto_speck128_setkey(crypto_tfm_ctx(tfm), key, keylen);
+}

-struct speck64_tfm_ctx {
- u32 round_keys[SPECK64_128_NROUNDS];
- int nrounds;
-};
+/* Speck64 */

static __always_inline void speck64_round(u32 *x, u32 *y, u32 k)
{
@@ -167,9 +157,9 @@ static __always_inline void speck64_unround(u32 *x, u32 *y, u32 k)
*x = rol32(*x, 8);
}

-static void speck64_encrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in)
+void crypto_speck64_encrypt(const struct speck64_tfm_ctx *ctx,
+ u8 *out, const u8 *in)
{
- const struct speck64_tfm_ctx *ctx = crypto_tfm_ctx(tfm);
u32 x = get_unaligned_le32(in + 0);
u32 y = get_unaligned_le32(in + 4);
int i;
@@ -180,10 +170,16 @@ static void speck64_encrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in)
put_unaligned_le32(x, out + 0);
put_unaligned_le32(y, out + 4);
}
+EXPORT_SYMBOL_GPL(crypto_speck64_encrypt);

-static void speck64_decrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in)
+static void speck64_encrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in)
+{
+ crypto_speck64_encrypt(crypto_tfm_ctx(tfm), out, in);
+}
+
+void crypto_speck64_decrypt(const struct speck64_tfm_ctx *ctx,
+ u8 *out, const u8 *in)
{
- const struct speck64_tfm_ctx *ctx = crypto_tfm_ctx(tfm);
u32 x = get_unaligned_le32(in + 0);
u32 y = get_unaligned_le32(in + 4);
int i;
@@ -194,11 +190,16 @@ static void speck64_decrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in)
put_unaligned_le32(x, out + 0);
put_unaligned_le32(y, out + 4);
}
+EXPORT_SYMBOL_GPL(crypto_speck64_decrypt);

-static int speck64_setkey(struct crypto_tfm *tfm, const u8 *key,
+static void speck64_decrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in)
+{
+ crypto_speck64_decrypt(crypto_tfm_ctx(tfm), out, in);
+}
+
+int crypto_speck64_setkey(struct speck64_tfm_ctx *ctx, const u8 *key,
unsigned int keylen)
{
- struct speck64_tfm_ctx *ctx = crypto_tfm_ctx(tfm);
u32 l[3];
u32 k;
int i;
@@ -231,6 +232,13 @@ static int speck64_setkey(struct crypto_tfm *tfm, const u8 *key,

return 0;
}
+EXPORT_SYMBOL_GPL(crypto_speck64_setkey);
+
+static int speck64_setkey(struct crypto_tfm *tfm, const u8 *key,
+ unsigned int keylen)
+{
+ return crypto_speck64_setkey(crypto_tfm_ctx(tfm), key, keylen);
+}

/* Algorithm definitions */

diff --git a/include/crypto/speck.h b/include/crypto/speck.h
new file mode 100644
index 000000000000..73cfc952d405
--- /dev/null
+++ b/include/crypto/speck.h
@@ -0,0 +1,62 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Common values for the Speck algorithm
+ */
+
+#ifndef _CRYPTO_SPECK_H
+#define _CRYPTO_SPECK_H
+
+#include <linux/types.h>
+
+/* Speck128 */
+
+#define SPECK128_BLOCK_SIZE 16
+
+#define SPECK128_128_KEY_SIZE 16
+#define SPECK128_128_NROUNDS 32
+
+#define SPECK128_192_KEY_SIZE 24
+#define SPECK128_192_NROUNDS 33
+
+#define SPECK128_256_KEY_SIZE 32
+#define SPECK128_256_NROUNDS 34
+
+struct speck128_tfm_ctx {
+ u64 round_keys[SPECK128_256_NROUNDS];
+ int nrounds;
+};
+
+void crypto_speck128_encrypt(const struct speck128_tfm_ctx *ctx,
+ u8 *out, const u8 *in);
+
+void crypto_speck128_decrypt(const struct speck128_tfm_ctx *ctx,
+ u8 *out, const u8 *in);
+
+int crypto_speck128_setkey(struct speck128_tfm_ctx *ctx, const u8 *key,
+ unsigned int keysize);
+
+/* Speck64 */
+
+#define SPECK64_BLOCK_SIZE 8
+
+#define SPECK64_96_KEY_SIZE 12
+#define SPECK64_96_NROUNDS 26
+
+#define SPECK64_128_KEY_SIZE 16
+#define SPECK64_128_NROUNDS 27
+
+struct speck64_tfm_ctx {
+ u32 round_keys[SPECK64_128_NROUNDS];
+ int nrounds;
+};
+
+void crypto_speck64_encrypt(const struct speck64_tfm_ctx *ctx,
+ u8 *out, const u8 *in);
+
+void crypto_speck64_decrypt(const struct speck64_tfm_ctx *ctx,
+ u8 *out, const u8 *in);
+
+int crypto_speck64_setkey(struct speck64_tfm_ctx *ctx, const u8 *key,
+ unsigned int keysize);
+
+#endif /* _CRYPTO_SPECK_H */
--
2.16.0.rc1.238.g530d649a79-goog

2018-02-08 01:47:06

by Jeffrey Walton

Subject: Re: [PATCH 0/5] crypto: Speck support

On Wed, Feb 7, 2018 at 7:09 PM, Eric Biggers <[email protected]> wrote:
> Hello,
>
> This series adds Speck support to the crypto API, including the Speck128
> and Speck64 variants. Speck is a lightweight block cipher that can be
> much faster than AES on processors that don't have AES instructions.
>
> We are planning to offer Speck-XTS (probably Speck128/256-XTS) as an
> option for dm-crypt and fscrypt on Android, for low-end mobile devices
> with older CPUs such as ARMv7 which don't have the Cryptography
> Extensions. Currently, such devices are unencrypted because AES is not
> fast enough, even when the NEON bit-sliced implementation of AES is
> used. Other AES alternatives such as Blowfish, Twofish, Camellia,
> Cast6, and Serpent aren't fast enough either; it seems that only a
> modern ARX cipher can provide sufficient performance on these devices.
>
> This is a replacement for our original proposal
> (https://patchwork.kernel.org/patch/10101451/) which was to offer
> ChaCha20 for these devices. However, the use of a stream cipher for
> disk/file encryption with no space to store nonces would have been much
> more insecure than we thought initially, given that it would be used on
> top of flash storage as well as potentially on top of F2FS, neither of
> which is guaranteed to overwrite data in-place.
>
> Speck has been somewhat controversial due to its origin. Nevertheless,
> it has a straightforward design (it's an ARX cipher), and it appears to
> be the leading software-optimized lightweight block cipher currently,
> with the most cryptanalysis. It's also easy to implement without side
> channels, unlike AES. Moreover, we only intend Speck to be used when
> the status quo is no encryption, due to AES not being fast enough.
>
> We've also considered a novel length-preserving encryption mode based on
> ChaCha20 and Poly1305. While theoretically attractive, such a mode
> would be a brand new crypto construction and would be more complicated
> and difficult to implement efficiently in comparison to Speck-XTS.
>
> Thus, patch 1 adds a generic implementation of Speck, and the following
> patches add a 32-bit ARM NEON implementation of Speck-XTS. The
> NEON-accelerated implementation is much faster than the generic
> implementation and therefore is the implementation that would primarily
> be used in practice on the devices we are targeting.
>
> There is no AArch64 implementation added, since such CPUs are likely to
> have the Cryptography Extensions, allowing the use of AES.

+1 on SPECK.

It's a nice cipher that runs fast. The security engineering and
parameter selection are well specified, and you can push the margins
as low as you like. It does not guess at security parameters like
some of the other ciphers used in dm-crypt.

On a modern 6th-gen Core i5 I've seen numbers as low as about 2.1
cycles per byte (cpb) for SPECK-64/128 and about 2.4 cpb for
SPECK-128/256.

I've already done some work for a US contractor who wanted/needed
SPECK for a possible NASA contract. NASA is looking at SPECK for some
satellite comms.

Jeff

2018-02-08 21:01:07

by Eric Biggers

Subject: Re: [PATCH 0/5] crypto: Speck support

On Wed, Feb 07, 2018 at 08:47:05PM -0500, Jeffrey Walton wrote:
> On Wed, Feb 7, 2018 at 7:09 PM, Eric Biggers <[email protected]> wrote:
> > Hello,
> >
> > This series adds Speck support to the crypto API, including the Speck128
> > and Speck64 variants. Speck is a lightweight block cipher that can be
> > much faster than AES on processors that don't have AES instructions.
> >
> > We are planning to offer Speck-XTS (probably Speck128/256-XTS) as an
> > option for dm-crypt and fscrypt on Android, for low-end mobile devices
> > with older CPUs such as ARMv7 which don't have the Cryptography
> > Extensions. Currently, such devices are unencrypted because AES is not
> > fast enough, even when the NEON bit-sliced implementation of AES is
> > used. Other AES alternatives such as Blowfish, Twofish, Camellia,
> > Cast6, and Serpent aren't fast enough either; it seems that only a
> > modern ARX cipher can provide sufficient performance on these devices.
> >
> > This is a replacement for our original proposal
> > (https://patchwork.kernel.org/patch/10101451/) which was to offer
> > ChaCha20 for these devices. However, the use of a stream cipher for
> > disk/file encryption with no space to store nonces would have been much
> > more insecure than we thought initially, given that it would be used on
> > top of flash storage as well as potentially on top of F2FS, neither of
> > which is guaranteed to overwrite data in-place.
> >
> > Speck has been somewhat controversial due to its origin. Nevertheless,
> > it has a straightforward design (it's an ARX cipher), and it appears to
> > be the leading software-optimized lightweight block cipher currently,
> > with the most cryptanalysis. It's also easy to implement without side
> > channels, unlike AES. Moreover, we only intend Speck to be used when
> > the status quo is no encryption, due to AES not being fast enough.
> >
> > We've also considered a novel length-preserving encryption mode based on
> > ChaCha20 and Poly1305. While theoretically attractive, such a mode
> > would be a brand new crypto construction and would be more complicated
> > and difficult to implement efficiently in comparison to Speck-XTS.
> >
> > Thus, patch 1 adds a generic implementation of Speck, and the following
> > patches add a 32-bit ARM NEON implementation of Speck-XTS. The
> > NEON-accelerated implementation is much faster than the generic
> > implementation and therefore is the implementation that would primarily
> > be used in practice on the devices we are targeting.
> >
> > There is no AArch64 implementation added, since such CPUs are likely to
> > have the Cryptography Extensions, allowing the use of AES.
>
> +1 on SPECK.
>
> It's a nice cipher that runs fast. The security engineering and
> parameter selection are well specified, and you can push the margins
> as low as you like. It does not guess at security parameters like
> some of the other ciphers used in dm-crypt.
>
> On a modern 6th-gen Core i5 I've seen numbers as low as about 2.1
> cycles per byte (cpb) for SPECK-64/128 and about 2.4 cpb for
> SPECK-128/256.
>
> I've already done some work for a US contractor who wanted/needed
> SPECK for a possible NASA contract. NASA is looking at SPECK for some
> satellite comms.
>

Hi Jeffrey,

I see you wrote the SPECK implementation in Crypto++, and you are treating the
words as big endian.

Do you have a reference for this being the "correct" order? Unfortunately the
authors of the cipher failed to mention the byte order in their paper. And they
gave the test vectors as words, so the test vectors don't clarify it either.

I had assumed little endian words, but now I am having second thoughts... And
to confuse things further, it seems that some implementations (including the
authors' own implementation for the SUPERCOP benchmark toolkit [1]) even consider
the words themselves in the order (y, x) rather than the more intuitive (x, y).

[1] https://github.com/iadgov/simon-speck-supercop/blob/master/crypto_stream/speck128128ctr/ref/stream.c

In fact, even the reference code from the paper treats pt[0] as y and pt[1] as
x, where 'pt' is a u64 array -- that said, it's not shown how the
actual bytes should be translated to/from those u64 arrays.
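
To make the ambiguity concrete, here is a toy program (a sketch of my
own, for illustration; the byte values are just the ASCII decoding of
the paper's plaintext words). Little-endian loading happens to
reproduce the paper's printed words, with the in-memory order being
(y, x); big-endian loading does not:

#include <stdint.h>
#include <stdio.h>

static uint64_t load_le64(const unsigned char *p)
{
        uint64_t v = 0;
        int i;

        for (i = 7; i >= 0; i--)
                v = (v << 8) | p[i];
        return v;
}

static uint64_t load_be64(const unsigned char *p)
{
        uint64_t v = 0;
        int i;

        for (i = 0; i < 8; i++)
                v = (v << 8) | p[i];
        return v;
}

int main(void)
{
        /* the paper's plaintext words decode to these 16 ASCII bytes */
        const unsigned char in[16] = " made it equival";

        printf("LE: %016llx %016llx\n",
               (unsigned long long)load_le64(in),
               (unsigned long long)load_le64(in + 8));
        printf("BE: %016llx %016llx\n",
               (unsigned long long)load_be64(in),
               (unsigned long long)load_be64(in + 8));
        return 0;
}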

I'd really like to avoid people having to add additional versions of SPECK later
for the different byte and word orders...

- Eric

2018-02-10 00:07:02

by Jeffrey Walton

Subject: Re: [PATCH 0/5] crypto: Speck support

On Thu, Feb 8, 2018 at 4:01 PM, Eric Biggers <[email protected]> wrote:
> On Wed, Feb 07, 2018 at 08:47:05PM -0500, Jeffrey Walton wrote:
>> On Wed, Feb 7, 2018 at 7:09 PM, Eric Biggers <[email protected]> wrote:
>> > Hello,
>> >
>> > This series adds Speck support to the crypto API, including the Speck128
>> > and Speck64 variants. Speck is a lightweight block cipher that can be
>> > much faster than AES on processors that don't have AES instructions.
>> >
>> > We are planning to offer Speck-XTS (probably Speck128/256-XTS) as an
>> > option for dm-crypt and fscrypt on Android, for low-end mobile devices
>> > with older CPUs such as ARMv7 which don't have the Cryptography
>> > Extensions. Currently, such devices are unencrypted because AES is not
>> > fast enough, even when the NEON bit-sliced implementation of AES is
>> > used. Other AES alternatives such as Blowfish, Twofish, Camellia,
>> > Cast6, and Serpent aren't fast enough either; it seems that only a
>> > modern ARX cipher can provide sufficient performance on these devices.
>> >
>> > This is a replacement for our original proposal
>> > (https://patchwork.kernel.org/patch/10101451/) which was to offer
>> > ChaCha20 for these devices. However, the use of a stream cipher for
>> > disk/file encryption with no space to store nonces would have been much
>> > more insecure than we thought initially, given that it would be used on
>> > top of flash storage as well as potentially on top of F2FS, neither of
>> > which is guaranteed to overwrite data in-place.
>> >
>> > ...
>> > Thus, patch 1 adds a generic implementation of Speck, and the following
>> > patches add a 32-bit ARM NEON implementation of Speck-XTS. The
>> > NEON-accelerated implementation is much faster than the generic
>> > implementation and therefore is the implementation that would primarily
>> > be used in practice on the devices we are targeting.
>> >
>> > There is no AArch64 implementation added, since such CPUs are likely to
>> > have the Cryptography Extensions, allowing the use of AES.
>>
>> +1 on SPECK.
>> ...
>
> Hi Jeffrey,
>
> I see you wrote the SPECK implementation in Crypto++, and you are treating the
> words as big endian.
>
> Do you have a reference for this being the "correct" order? Unfortunately the
> authors of the cipher failed to mention the byte order in their paper. And they
> gave the test vectors as words, so the test vectors don't clarify it either.
>
> I had assumed little endian words, but now I am having second thoughts... And
> to confuse things further, it seems that some implementations (including the
> authors' own implementation for the SUPERCOP benchmark toolkit [1]) even consider
> the words themselves in the order (y, x) rather than the more intuitive (x, y).
>
> [1] https://github.com/iadgov/simon-speck-supercop/blob/master/crypto_stream/speck128128ctr/ref/stream.c
>
> In fact, even the reference code from the paper treats pt[0] as y and pt[1] as
> x, where 'pt' is a u64 array -- that said, it's not shown how the
> actual bytes should be translated to/from those u64 arrays.
>
> I'd really like to avoid people having to add additional versions of SPECK later
> for the different byte and word orders...

Hi Eric,

Yeah, this was a point of confusion for us as well. After the sidebar
conversations I am wondering about the correctness of the Crypto++
implementation.

As a first step, here is the official test vector for Speck128/128
from Appendix C, p. 42 (https://eprint.iacr.org/2013/404.pdf):

Speck128/128
Key: 0f0e0d0c0b0a0908 0706050403020100
Plaintext: 6c61766975716520 7469206564616d20
Ciphertext: a65d985179783265 7860fedf5c570d18

We had some confusion over the presentation. Here is what the Simon
and Speck team sent when I asked what gets plugged into the algorithm
and how it gets plugged in:

<BEGIN SNIP PERSONAL EMAIL>

On Mon, Nov 20, 2017 at 10:50 AM, <[email protected]> wrote:
> ...
> I'll explain the problem you have been having with our test vectors.
>
> The key is: 0x0f0e0d0c0b0a0908 0x0706050403020100
> The plaintext is: 6c61766975716520 7469206564616d20
> The ciphertext is: a65d985179783265 7860fedf5c570d18
>
> The problem is essentially one of what goes where and we probably could
> have done a better job explaining things.
>
> For the key, with two words, K=(K[1],K[0]). With three words K=(K[2],K[1],K[0]),
> with four words K=(K[3],K[2],K[1],K[0]).
>
> So for the test vector you should have K[0]= 0x0706050403020100, K[1]= 0x0f0e0d0c0b0a0908
> which is the opposite of what you have done.
>
> If we put this K into ExpandKey(K,sk) then the first few round keys
> are:
>
> 0706050403020100
> 37253b31171d0309
> f91d89cc90c4085c
> c6b1f07852cc7689
> ...
>
> For the plaintext, P=(P[1],P[0]), i.e., P[1] goes into the left word of the block cipher
> and P[0] goes into the right word of the block cipher. So you should have
> m[0]= 7469206564616d20 and m[1]= 6c61766975716520, which is again opposite of what you
> have. If c[0]=m[0] and c[1]=m[1], then the encrypt function should be called as
> Encrypt(c+1,c+0,sk). The resulting ciphertext is (c+1,c+0).
>
> In general, everything goes in as a byte stream (not 64-bit words). In this case,
> if the 16 bytes of key are 00 01 02 03 04 05 06 07 08 09 0a 0b 0c 0d 0e 0f then
> we first need to turn these into two 64-bit words. The first word, K[0] is
> 0706050403020100 and the second word is K[1]=0f0e0d0c0b0a0908. On x86 processors,
> if the key bytes are in the array k[] of type uint8_t, then a simple
> casting should get you K[1] and K[0]. That is, K[0]=((uint64_t *)k)[0] and
> K[1]=((uint64_t *)k)[1]. The key expansion is run with ExpandKey(K,sk).
> So that was what we had in mind.
>
> Similarly, if the plaintext "bytes" were: 20 6d 61 64 65 20 69 74 20 65 71 75 69 76 61 6c
> (which is ascii for " made it equival")
> then we first need to turn these into two 64-bit words. The first word is
> pt[0]=7469206564616d20 and the second word is pt[1]= 6c61766975716520. The Encrypt
> routine is run as Encrypt(pt[1],pt[0],sk). The resulting ciphertext is (ct[1],ct[0])
> which you would turn back into a byte stream in the reverse of what we did. For
> example, if ct[1]= a65d985179783265 and ct[0]= 7860fedf5c570d18, then if c[] is a
> byte stream (i.e., of type uint8_t), then we would have c[0]=18, c[1]=0d, c[2]=57, c[3]=5c, etc.
> This is easy to do on x86 with a cast.
>
> Let me know if all of this is now clear.
>
> Best regards,

<END SNIP PERSONAL EMAIL>
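
To restate their description in code, here is a rough sketch of the
byte-to-word packing (my own illustration, not their reference code),
using a portable little-endian load in place of the x86 cast they
mention:

#include <stdint.h>

/* Portable little-endian load; same result as the x86 cast above. */
static uint64_t get_le64(const uint8_t *p)
{
        uint64_t v = 0;
        for (int i = 7; i >= 0; i--)
                v = (v << 8) | p[i];
        return v;
}

/* Key bytes 00 01 ... 0f become K[0]=0x0706050403020100 and
 * K[1]=0x0f0e0d0c0b0a0908, per their explanation. */
static void speck128_128_unpack_key(const uint8_t k[16], uint64_t K[2])
{
        K[0] = get_le64(k);
        K[1] = get_le64(k + 8);
}

The plaintext bytes are unpacked the same way: pt[0] = get_le64(p) and
pt[1] = get_le64(p + 8).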

Finally, here is the Simon and Speck team's GitHub repository. It
contains the implementation the team submitted to SUPERCOP:
https://github.com/iadgov/simon-speck-supercop . The files of interest
are in crypto_stream:

* speck128128ctr/ref/stream.c
* speck128192ctr/ref/stream.c
* speck128256ctr/ref/stream.c
* speck64128ctr/ref/stream.c
* speck6496ctr/ref/stream.c

The reference implementation uses CTR mode rather than ECB mode. After
the (mostly trivial) CTR -> ECB conversion and plugging in the test
vector data (key and message), we could not arrive at the published
test vector. However, we may have been plugging things in incorrectly.
We were able to arrive at the test vector results after a minor
modification, but in hindsight it may have been the wrong thing to do.
I'll be revisiting this shortly.
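
For anyone repeating the exercise, here is one way to pull a single raw
ECB encryption out of the crypto_stream CTR API, assuming (as the
reference stream.c appears to do) that the 16-byte nonce is used
directly as the initial counter block:

/* SUPERCOP stream API, as implemented by speck128128ctr/ref/stream.c */
extern int crypto_stream(unsigned char *out, unsigned long long outlen,
                         const unsigned char *n, const unsigned char *k);

/* CTR keystream block 0 is E_K(initial counter block), so if the nonce
 * is used directly as that block, the first 16 keystream bytes are the
 * raw ECB encryption of "pt" under "key". */
static void speck128128_ecb_sketch(unsigned char ct[16],
                                   const unsigned char pt[16],
                                   const unsigned char key[16])
{
        crypto_stream(ct, 16, pt, key);
}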

Hopefully this gets things started on sorting out the details.

Jeff

2018-02-12 19:19:08

by Eric Biggers

[permalink] [raw]
Subject: Re: [PATCH 0/5] crypto: Speck support

Hi all,

On Fri, Feb 09, 2018 at 07:07:01PM -0500, Jeffrey Walton wrote:
> > Hi Jeffrey,
> >
> > I see you wrote the SPECK implementation in Crypto++, and you are treating the
> > words as big endian.
> >
> > Do you have a reference for this being the "correct" order? Unfortunately the
> > authors of the cipher failed to mention the byte order in their paper. And they
> > gave the test vectors as words, so the test vectors don't clarify it either.
> >
> > I had assumed little endian words, but now I am having second thoughts... And
> > to confuse things further, it seems that some implementations (including the
> > authors' own implementation for the SUPERCOP benchmark toolkit [1]) even consider
> > the words themselves in the order (y, x) rather than the more intuitive (x, y).
> >
> > [1] https://github.com/iadgov/simon-speck-supercop/blob/master/crypto_stream/speck128128ctr/ref/stream.c
> >
> > In fact, even the reference code from the paper treats pt[0] as y and pt[1] as
> > x, where 'pt' is a u64 array -- although that being said, it's not shown how the
> > actual bytes should be translated to/from those u64 arrays.
> >
> > I'd really like to avoid people having to add additional versions of SPECK later
> > for the different byte and word orders...
>
> Hi Eric,
>
> Yeah, this was a point of confusion for us as well. After the sidebar
> conversations I am wondering about the correctness of the Crypto++
> implementation.
>

We've received another response from one of the Speck creators (Louis Wingers)
that (to summarize) the intended byte order is little endian, and the intended
word order is (y, x), i.e. 'y' is at a lower memory address than 'x'. Or
equivalently: the test vectors given in the original paper need to be read as
byte arrays from *right-to-left*.

(y, x) is not the intuitive order, but it's not a huge deal. The more important
thing is that we don't end up with multiple implementations with different byte
and/or word orders.
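
In code, loading one 16-byte block would look something like the
following sketch (helper names are mine, not the patchset's):

#include <stdint.h>

/* Portable little-endian load */
static uint64_t le64(const uint8_t *p)
{
        uint64_t v = 0;
        for (int i = 7; i >= 0; i--)
                v = (v << 8) | p[i];
        return v;
}

/* (y, x) word order: 'y' occupies the first 8 bytes of the block */
static void speck128_load_block(const uint8_t b[16],
                                uint64_t *x, uint64_t *y)
{
        *y = le64(b);
        *x = le64(b + 8);
}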

So, barring any additional confusion, I'll send a revised version of this
patchset that flips the word order. Jeff would need to flip both the byte and
word orders in his implementation in Crypto++ as well.

- Eric

> As a first step here is the official test vector for Speck-128(128)
> from Appendix C, p. 42 (https://eprint.iacr.org/2013/404.pdf):
>
> Speck128/128
> Key: 0f0e0d0c0b0a0908 0706050403020100
> Plaintext: 6c61766975716520 7469206564616d20
> Ciphertext: a65d985179783265 7860fedf5c570d18
>
> We had some confusion over the presentation. Here is what the Simon
> and Speck team sent when I asked about what gets plugged into the
> algorithm and how it gets plugged in:
>
> <BEGIN SNIP PERSONAL EMAIL>
>
> On Mon, Nov 20, 2017 at 10:50 AM, <[email protected]> wrote:
> > ...
> > I'll explain the problem you have been having with our test vectors.
> >
> > The key is: 0x0f0e0d0c0b0a0908 0x0706050403020100
> > The plaintext is: 6c61766975716520 7469206564616d20
> > The ciphertext is: a65d985179783265 7860fedf5c570d18
> >
> > The problem is essentially one of what goes where and we probably could
> > have done a better job explaining things.
> >
> > For the key, with two words, K=(K[1],K[0]). With three words K=(K[2],K[1],K[0]),
> > with four words K=(K[3],K[2],K[1],K[0]).
> >
> > So for the test vector you should have K[0]= 0x0706050403020100, K[1]= 0x0f0e0d0c0b0a0908
> > which is the opposite of what you have done.
> >
> > If we put this K into ExpandKey(K,sk) then the first few round keys
> > are:
> >
> > 0706050403020100
> > 37253b31171d0309
> > f91d89cc90c4085c
> > c6b1f07852cc7689
> > ...
> >
> > For the plaintext, P=(P[1],P[0]), i.e., P[1] goes into the left word of the block cipher
> > and P[0] goes into the right word of the block cipher. So you should have
> > m[0]= 7469206564616d20 and m[1]= 6c61766975716520, which is again opposite of what you
> > have. If c[0]=m[0] and c[1]=m[1], then the encrypt function should be called as
> > Encrypt(c+1,c+0,sk). The resulting ciphertext is (c+1,c+0).
> >
> > In general, everything goes in as a byte stream (not 64-bit words). In this case,
> > if the 16 bytes of key are 00 01 02 03 04 05 06 07 08 09 0a 0b 0c 0d 0e 0f then
> > we first need to turn these into two 64-bit words. The first word, K[0] is
> > 0706050403020100 and the second word is K[1]=0f0e0d0c0b0a0908. On x86 processors,
> > if the key bytes are in the array k[] of type uint8_t, then a simple
> > casting should get you K[1] and K[0]. That is, K[0]=((uint64_t *)k)[0] and
> > K[1]=((uint64_t *)k)[1]. The key expansion is run with ExpandKey(K,sk).
> > So that was what we had in mind.
> >
> > Similarly, if the plaintext "bytes" were: 20 6d 61 64 65 20 69 74 20 65 71 75 69 76 61 6c
> > (which is ascii for " made it equival")
> > then we first need to turn these into two 64-bit words. The first word is
> > pt[0]=7469206564616d20 and the second word is pt[1]= 6c61766975716520. The Encrypt
> > routine is run as Encrypt(pt[1],pt[0],sk). The resulting ciphertext is (ct[1],ct[0])
> > which you would turn back into a byte stream in the reverse of what we did. For
> > example, if ct[1]= a65d985179783265 and ct[0]= 7860fedf5c570d18, then if c[] is a
> > byte stream (i.e., of type uint8_t), then we would have c[0]=18, c[1]=0d, c[2]=57, c[3]=5c, etc.
> > This is easy to do on x86 with a cast.
> >
> > Let me know if all of this is now clear.
> >
> > Best regards,
>
> <END SNIP PERSONAL EMAIL>
>
> Finally, here is the Simon and Speck team's GitHub repository. It
> contains the implementation the team submitted to SUPERCOP:
> https://github.com/iadgov/simon-speck-supercop . The files of interest
> are in crypto_stream:
>
> * speck128128ctr/ref/stream.c
> * speck128192ctr/ref/stream.c
> * speck128256ctr/ref/stream.c
> * speck64128ctr/ref/stream.c
> * speck6496ctr/ref/stream.c
>
> The reference implementation uses CTR mode rather than ECB mode. After
> the (mostly trivial) CTR -> ECB conversion and plugging in the test
> vector data (key and message), we could not arrive at the published
> test vector. However, we may have been plugging things in incorrectly.
> We were able to arrive at the test vector results after a minor
> modification, but in hindsight it may have been the wrong thing to do.
> I'll be revisiting this shortly.
>
> Hopefully this gets things started on sorting out the details.
>
> Jeff

2018-02-12 19:57:07

by Jeffrey Walton

[permalink] [raw]
Subject: Re: [PATCH 0/5] crypto: Speck support

On Mon, Feb 12, 2018 at 2:19 PM, Eric Biggers <[email protected]> wrote:
> Hi all,
>
> On Fri, Feb 09, 2018 at 07:07:01PM -0500, Jeffrey Walton wrote:
>> > Hi Jeffrey,
>> >
>> > I see you wrote the SPECK implementation in Crypto++, and you are treating the
>> > words as big endian.
>> >
>> > Do you have a reference for this being the "correct" order? Unfortunately the
>> > authors of the cipher failed to mention the byte order in their paper. And they
>> > gave the test vectors as words, so the test vectors don't clarify it either.
>> >
>> > I had assumed little endian words, but now I am having second thoughts... And
>> > to confuse things further, it seems that some implementations (including the
>> > authors' own implementation for the SUPERCOP benchmark toolkit [1]) even consider
>> > the words themselves in the order (y, x) rather than the more intuitive (x, y).
>> >
>> > [1] https://github.com/iadgov/simon-speck-supercop/blob/master/crypto_stream/speck128128ctr/ref/stream.c
>> >
>> > In fact, even the reference code from the paper treats pt[0] as y and pt[1] as
>> > x, where 'pt' is a u64 array -- although that being said, it's not shown how the
>> > actual bytes should be translated to/from those u64 arrays.
>> >
>> > I'd really like to avoid people having to add additional versions of SPECK later
>> > for the different byte and word orders...
>>
>> Hi Eric,
>>
>> Yeah, this was a point of confusion for us as well. After the sidebar
>> conversations I am wondering about the correctness of the Crypto++
>> implementation.
>>
>
> We've received another response from one of the Speck creators (Louis Wingers)
> that (to summarize) the intended byte order is little endian, and the intended
> word order is (y, x), i.e. 'y' is at a lower memory address than 'x'. Or
> equivalently: the test vectors given in the original paper need to be read as
> byte arrays from *right-to-left*.
>
> (y, x) is not the intuitive order, but it's not a huge deal. The more important
> thing is that we don't end up with multiple implementations with different byte
> and/or word orders.
>
> So, barring any additional confusion, I'll send a revised version of this
> patchset that flips the word order. Jeff would need to flip both the byte and
> word orders in his implementation in Crypto++ as well.

Thanks Eric.

Yeah, the (y,x) ordering explains a lot of the confusion, and explains
the modification I needed in my GitHub clone of the IAD Team's SUPERCOP
to arrive at the test vector results. My clone is available at
https://github.com/noloader/simon-speck-supercop.

So let me ask you... Given the Speck-128(128) test vector from Appendix C:

Key: 0f0e0d0c0b0a0908 0706050403020100
Plaintext: 6c61766975716520 7469206564616d20
Ciphertext: a65d985179783265 7860fedf5c570d18

Will the Linux implementation arrive at the published result, or will
it arrive at a different result? I guess what I am asking is: where is
the presentation detail going to be handled?

A related question is, will the kernel be parsing just the key as
(y,x), or will all parameters be handled as (y,x)? At this point I
believe it only needs to apply to the key, but I did not investigate
the word swapping in detail because I was chasing the test vector.

Jeff

2018-02-12 20:18:53

by Eric Biggers

[permalink] [raw]
Subject: Re: [PATCH 0/5] crypto: Speck support

Hi Jeff,

On Mon, Feb 12, 2018 at 02:57:06PM -0500, Jeffrey Walton wrote:
> On Mon, Feb 12, 2018 at 2:19 PM, Eric Biggers <[email protected]> wrote:
> > Hi all,
> >
> > On Fri, Feb 09, 2018 at 07:07:01PM -0500, Jeffrey Walton wrote:
> >> > Hi Jeffrey,
> >> >
> >> > I see you wrote the SPECK implementation in Crypto++, and you are treating the
> >> > words as big endian.
> >> >
> >> > Do you have a reference for this being the "correct" order? Unfortunately the
> >> > authors of the cipher failed to mention the byte order in their paper. And they
> >> > gave the test vectors as words, so the test vectors don't clarify it either.
> >> >
> >> > I had assumed little endian words, but now I am having second thoughts... And
> >> > to confuse things further, it seems that some implementations (including the
> >> > authors' own implementation for the SUPERCOP benchmark toolkit [1]) even consider
> >> > the words themselves in the order (y, x) rather than the more intuitive (x, y).
> >> >
> >> > [1] https://github.com/iadgov/simon-speck-supercop/blob/master/crypto_stream/speck128128ctr/ref/stream.c
> >> >
> >> > In fact, even the reference code from the paper treats pt[0] as y and pt[1] as
> >> > x, where 'pt' is a u64 array -- although that being said, it's not shown how the
> >> > actual bytes should be translated to/from those u64 arrays.
> >> >
> >> > I'd really like to avoid people having to add additional versions of SPECK later
> >> > for the different byte and word orders...
> >>
> >> Hi Eric,
> >>
> >> Yeah, this was a point of confusion for us as well. After the sidebar
> >> conversations I am wondering about the correctness of the Crypto++
> >> implementation.
> >>
> >
> > We've received another response from one of the Speck creators (Louis Wingers)
> > that (to summarize) the intended byte order is little endian, and the intended
> > word order is (y, x), i.e. 'y' is at a lower memory address than 'x'. Or
> > equivalently: the test vectors given in the original paper need to be read as
> > byte arrays from *right-to-left*.
> >
> > (y, x) is not the intuitive order, but it's not a huge deal. The more important
> > thing is that we don't end up with multiple implementations with different byte
> > and/or word orders.
> >
> > So, barring any additional confusion, I'll send a revised version of this
> > patchset that flips the word order. Jeff would need to flip both the byte and
> > word orders in his implementation in Crypto++ as well.
>
> Thanks Eric.
>
> Yeah, the (y,x) ordering explains a lot of the confusion, and explains
> the modification I needed in my GitHub clone of the IAD Team's SUPERCOP
> to arrive at the test vector results. My clone is available at
> https://github.com/noloader/simon-speck-supercop.
>
> So let me ask you... Given the Speck-128(128) test vector from Appendix C:
>
> Key: 0f0e0d0c0b0a0908 0706050403020100
> Plaintext: 6c61766975716520 7469206564616d20
> Ciphertext: a65d985179783265 7860fedf5c570d18
>
> Will the Linux implementation arrive at the published result, or will
> it arrive at a different result? I guess what I am asking is: where is
> the presentation detail going to be handled?
>
> A related question is, will the kernel be parsing just the key as
> (y,x), or will all parameters be handled as (y,x)? At this point I
> believe it only needs to apply to the key, but I did not investigate
> the word swapping in detail because I was chasing the test vector.
>

The kernel implementation has to operate on byte arrays. But the test vectors
in the original paper are given as *words* in the order (x, y) and likewise for
the key (i.e. the rightmost word shown becomes the first round key). But based
on the clarifications from the Speck team, the actual byte arrays that
correspond to the Speck-128/128 test vector would be:

const uint8_t key[16] = "\x00\x01\x02\x03\x04\x05\x06\x07\x08\x09\x0a\x0b\x0c\x0d\x0e\x0f";
const uint8_t plaintext[16] = "\x20\x6d\x61\x64\x65\x20\x69\x74\x20\x65\x71\x75\x69\x76\x61\x6c";
const uint8_t ciphertext[16] = "\x18\x0d\x57\x5c\xdf\xfe\x60\x78\x65\x32\x78\x79\x51\x98\x5d\xa6";

So equivalently, if we consider the printed test vectors as just listing the
bytes (ignoring the whitespace between the words), then they are backwards.
That applies to all 3 parts (Key, Plaintext, and Ciphertext).
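
To make that concrete, here is a minimal standalone sketch (my own
illustration using the round function from the paper, not the kernel
code) that encrypts the plaintext byte array above and checks it
against the ciphertext byte array:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define ROR64(v, r) (((v) >> (r)) | ((v) << (64 - (r))))
#define ROL64(v, r) (((v) << (r)) | ((v) >> (64 - (r))))

static uint64_t le64(const uint8_t *p)
{
        uint64_t v = 0;
        for (int i = 7; i >= 0; i--)
                v = (v << 8) | p[i];
        return v;
}

static void put_le64(uint8_t *p, uint64_t v)
{
        for (int i = 0; i < 8; i++)
                p[i] = (uint8_t)(v >> (8 * i));
}

/* Speck128/128: 32 rounds; rotation constants alpha=8, beta=3 */
static void speck128_128_encrypt(uint8_t out[16], const uint8_t in[16],
                                 const uint8_t key[16])
{
        uint64_t y = le64(in), x = le64(in + 8);    /* (y, x) order */
        uint64_t k = le64(key), l = le64(key + 8);

        for (uint64_t i = 0; i < 32; i++) {
                /* round function on the data... */
                x = (ROR64(x, 8) + y) ^ k;
                y = ROL64(y, 3) ^ x;
                /* ...and the key schedule reuses the same function */
                l = (ROR64(l, 8) + k) ^ i;
                k = ROL64(k, 3) ^ l;
        }
        put_le64(out, y);
        put_le64(out + 8, x);
}

int main(void)
{
        static const uint8_t key[16] =
                "\x00\x01\x02\x03\x04\x05\x06\x07\x08\x09\x0a\x0b\x0c\x0d\x0e\x0f";
        static const uint8_t pt[16] =
                "\x20\x6d\x61\x64\x65\x20\x69\x74\x20\x65\x71\x75\x69\x76\x61\x6c";
        static const uint8_t expect[16] =
                "\x18\x0d\x57\x5c\xdf\xfe\x60\x78\x65\x32\x78\x79\x51\x98\x5d\xa6";
        uint8_t ct[16];

        speck128_128_encrypt(ct, pt, key);
        printf("%s\n", memcmp(ct, expect, 16) == 0 ? "match" : "MISMATCH");
        return 0;
}

(The first few round keys it generates match the ones quoted earlier:
0706050403020100, 37253b31171d0309, ...)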

Note that my patch 1/5 adds the Speck test vectors to testmgr.h so that they are
hooked into the Linux kernel's crypto self-tests, so on appropriately-configured
kernels it will be automatically verified that the implementation matches the
test vectors. The ones in the current version of the patchset have the "wrong"
word order though, so I will need to send out a new version with the correct
implementation and test vectors.

Thanks,

Eric