2019-06-19 16:30:19

by Ard Biesheuvel

Subject: [PATCH v3 0/6] crypto: switch to crypto API for ESSIV generation

This series creates an ESSIV template that produces a skcipher or AEAD
transform based on a tuple of the form '<skcipher>,<cipher>,<shash>'
(or '<aead>,<cipher>,<shash>' for the AEAD case). It exposes the
encapsulated sync or async skcipher/aead by passing through all operations,
while using the cipher/shash pair to transform the input IV into an ESSIV
output IV.
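
For illustration, this is roughly how a kernel user would take an
instance of the template for a spin (a minimal sketch with error
handling omitted; the key and scatterlist variables are assumed to
exist):

  struct crypto_skcipher *tfm;
  struct skcipher_request *req;
  DECLARE_CRYPTO_WAIT(wait);

  tfm = crypto_alloc_skcipher("essiv(cbc(aes),aes,sha256)", 0, 0);
  crypto_skcipher_setkey(tfm, key, keylen);

  req = skcipher_request_alloc(tfm, GFP_KERNEL);
  skcipher_request_set_callback(req, 0, crypto_req_done, &wait);
  skcipher_request_set_crypt(req, src, dst, len, iv); /* iv: 8-byte LE sector */
  crypto_wait_req(crypto_skcipher_encrypt(req), &wait);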

This matches what both users of ESSIV in the kernel do, and so it is proposed
as a replacement for those, in patches #2 and #4.

This code has been tested using the fscrypt test suggested by Eric
(generic/549), as well as the mode-test script suggested by Milan for
the dm-crypt case. I also tested the AEAD case in a virtual machine,
but it definitely needs some wider testing from the dm-crypt experts.

Changes since v2:
- fixed a couple of bugs that snuck in after I'd done the bulk of my
testing
- some cosmetic tweaks to the ESSIV template skcipher setkey function
to align it with the aead one
- add a test case for essiv(cbc(aes),aes,sha256)
- add an accelerated implementation for arm64 that combines the IV
derivation and the actual en/decryption in a single asm routine

Scroll down for tcrypt speed test results comparing the essiv template
with the asm implementation. Bare cbc(aes) tests are included for
reference as well. All numbers were taken on a 2 GHz Cortex-A57 (AMD
Seattle).
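
The numbers were collected with something along the lines of

  modprobe tcrypt mode=220 sec=1

i.e., the tcrypt speed test case added for this template in patch #5.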

Code can be found here:
https://git.kernel.org/pub/scm/linux/kernel/git/ardb/linux.git/log/?h=essiv-v3

Cc: Herbert Xu <[email protected]>
Cc: Eric Biggers <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: Gilad Ben-Yossef <[email protected]>
Cc: Milan Broz <[email protected]>

Ard Biesheuvel (6):
crypto: essiv - create wrapper template for ESSIV generation
fs: crypto: invoke crypto API for ESSIV handling
md: dm-crypt: infer ESSIV block cipher from cipher string directly
md: dm-crypt: switch to ESSIV crypto API template
crypto: essiv - add test vector for essiv(cbc(aes),aes,sha256)
crypto: arm64/aes - implement accelerated ESSIV/CBC mode

arch/arm64/crypto/aes-glue.c | 129 ++++
arch/arm64/crypto/aes-modes.S | 99 +++
crypto/Kconfig | 4 +
crypto/Makefile | 1 +
crypto/essiv.c | 630 ++++++++++++++++++++
crypto/tcrypt.c | 9 +
crypto/testmgr.c | 6 +
crypto/testmgr.h | 208 +++++++
drivers/md/Kconfig | 1 +
drivers/md/dm-crypt.c | 237 ++------
fs/crypto/Kconfig | 1 +
fs/crypto/crypto.c | 5 -
fs/crypto/fscrypt_private.h | 9 -
fs/crypto/keyinfo.c | 88 +--
14 files changed, 1132 insertions(+), 295 deletions(-)
create mode 100644 crypto/essiv.c

--
2.20.1

testing speed of async essiv(cbc(aes),aes,sha256) (essiv(cbc-aes-ce,aes-ce,sha256-ce)) encryption
tcrypt: test 0 (128 bit key, 16 byte blocks): 3140785 ops/s ( 50252560 bytes)
tcrypt: test 1 (128 bit key, 64 byte blocks): 2672908 ops/s (171066112 bytes)
tcrypt: test 2 (128 bit key, 256 byte blocks): 1632811 ops/s (417999616 bytes)
tcrypt: test 3 (128 bit key, 1024 byte blocks): 665980 ops/s (681963520 bytes)
tcrypt: test 4 (128 bit key, 1472 byte blocks): 495180 ops/s (728904960 bytes)
tcrypt: test 5 (128 bit key, 8192 byte blocks): 99329 ops/s (813703168 bytes)
tcrypt: test 6 (192 bit key, 16 byte blocks): 3106888 ops/s ( 49710208 bytes)
tcrypt: test 7 (192 bit key, 64 byte blocks): 2582682 ops/s (165291648 bytes)
tcrypt: test 8 (192 bit key, 256 byte blocks): 1511160 ops/s (386856960 bytes)
tcrypt: test 9 (192 bit key, 1024 byte blocks): 589841 ops/s (603997184 bytes)
tcrypt: test 10 (192 bit key, 1472 byte blocks): 435094 ops/s (640458368 bytes)
tcrypt: test 11 (192 bit key, 8192 byte blocks): 82997 ops/s (679911424 bytes)
tcrypt: test 12 (256 bit key, 16 byte blocks): 3058592 ops/s ( 48937472 bytes)
tcrypt: test 13 (256 bit key, 64 byte blocks): 2496988 ops/s (159807232 bytes)
tcrypt: test 14 (256 bit key, 256 byte blocks): 1438355 ops/s (368218880 bytes)
tcrypt: test 15 (256 bit key, 1024 byte blocks): 528902 ops/s (541595648 bytes)
tcrypt: test 16 (256 bit key, 1472 byte blocks): 387861 ops/s (570931392 bytes)
tcrypt: test 17 (256 bit key, 8192 byte blocks): 75444 ops/s (618037248 bytes)

testing speed of async essiv(cbc(aes),aes,sha256) (essiv(cbc-aes-ce,aes-ce,sha256-ce)) decryption
tcrypt: test 0 (128 bit key, 16 byte blocks): 3164752 ops/s ( 50636032 bytes)
tcrypt: test 1 (128 bit key, 64 byte blocks): 2975874 ops/s ( 190455936 bytes)
tcrypt: test 2 (128 bit key, 256 byte blocks): 2393123 ops/s ( 612639488 bytes)
tcrypt: test 3 (128 bit key, 1024 byte blocks): 1314745 ops/s (1346298880 bytes)
tcrypt: test 4 (128 bit key, 1472 byte blocks): 1050717 ops/s (1546655424 bytes)
tcrypt: test 5 (128 bit key, 8192 byte blocks): 246457 ops/s (2018975744 bytes)
tcrypt: test 6 (192 bit key, 16 byte blocks): 3117489 ops/s ( 49879824 bytes)
tcrypt: test 7 (192 bit key, 64 byte blocks): 2922089 ops/s ( 187013696 bytes)
tcrypt: test 8 (192 bit key, 256 byte blocks): 2292023 ops/s ( 586757888 bytes)
tcrypt: test 9 (192 bit key, 1024 byte blocks): 1207942 ops/s (1236932608 bytes)
tcrypt: test 10 (192 bit key, 1472 byte blocks): 955598 ops/s (1406640256 bytes)
tcrypt: test 11 (192 bit key, 8192 byte blocks): 195198 ops/s (1599062016 bytes)
tcrypt: test 12 (256 bit key, 16 byte blocks): 3081935 ops/s ( 49310960 bytes)
tcrypt: test 13 (256 bit key, 64 byte blocks): 2883181 ops/s ( 184523584 bytes)
tcrypt: test 14 (256 bit key, 256 byte blocks): 2205147 ops/s ( 564517632 bytes)
tcrypt: test 15 (256 bit key, 1024 byte blocks): 1119468 ops/s (1146335232 bytes)
tcrypt: test 16 (256 bit key, 1472 byte blocks): 877017 ops/s (1290969024 bytes)
tcrypt: test 17 (256 bit key, 8192 byte blocks): 195255 ops/s (1599528960 bytes)


testing speed of async essiv(cbc(aes),aes,sha256) (essiv-cbc-aes-sha256-ce) encryption
tcrypt: test 0 (128 bit key, 16 byte blocks): 5037539 ops/s ( 80600624 bytes)
tcrypt: test 1 (128 bit key, 64 byte blocks): 3884302 ops/s (248595328 bytes)
tcrypt: test 2 (128 bit key, 256 byte blocks): 2014999 ops/s (515839744 bytes)
tcrypt: test 3 (128 bit key, 1024 byte blocks): 721147 ops/s (738454528 bytes)
tcrypt: test 4 (128 bit key, 1472 byte blocks): 525262 ops/s (773185664 bytes)
tcrypt: test 5 (128 bit key, 8192 byte blocks): 100453 ops/s (822910976 bytes)
tcrypt: test 6 (192 bit key, 16 byte blocks): 4972667 ops/s ( 79562672 bytes)
tcrypt: test 7 (192 bit key, 64 byte blocks): 3721788 ops/s (238194432 bytes)
tcrypt: test 8 (192 bit key, 256 byte blocks): 1835967 ops/s (470007552 bytes)
tcrypt: test 9 (192 bit key, 1024 byte blocks): 633524 ops/s (648728576 bytes)
tcrypt: test 10 (192 bit key, 1472 byte blocks): 458306 ops/s (674626432 bytes)
tcrypt: test 11 (192 bit key, 8192 byte blocks): 83595 ops/s (684810240 bytes)
tcrypt: test 12 (256 bit key, 16 byte blocks): 4975101 ops/s ( 79601616 bytes)
tcrypt: test 13 (256 bit key, 64 byte blocks): 3581137 ops/s (229192768 bytes)
tcrypt: test 14 (256 bit key, 256 byte blocks): 1741799 ops/s (445900544 bytes)
tcrypt: test 15 (256 bit key, 1024 byte blocks): 565340 ops/s (578908160 bytes)
tcrypt: test 16 (256 bit key, 1472 byte blocks): 407040 ops/s (599162880 bytes)
tcrypt: test 17 (256 bit key, 8192 byte blocks): 76092 ops/s (623345664 bytes)

testing speed of async essiv(cbc(aes),aes,sha256) (essiv-cbc-aes-sha256-ce) decryption
tcrypt: test 0 (128 bit key, 16 byte blocks): 5122947 ops/s ( 81967152 bytes)
tcrypt: test 1 (128 bit key, 64 byte blocks): 4546576 ops/s ( 290980864 bytes)
tcrypt: test 2 (128 bit key, 256 byte blocks): 3314744 ops/s ( 848574464 bytes)
tcrypt: test 3 (128 bit key, 1024 byte blocks): 1550823 ops/s (1588042752 bytes)
tcrypt: test 4 (128 bit key, 1472 byte blocks): 1197388 ops/s (1762555136 bytes)
tcrypt: test 5 (128 bit key, 8192 byte blocks): 253661 ops/s (2077990912 bytes)
tcrypt: test 6 (192 bit key, 16 byte blocks): 5040644 ops/s ( 80650304 bytes)
tcrypt: test 7 (192 bit key, 64 byte blocks): 4442490 ops/s ( 284319360 bytes)
tcrypt: test 8 (192 bit key, 256 byte blocks): 3138199 ops/s ( 803378944 bytes)
tcrypt: test 9 (192 bit key, 1024 byte blocks): 1406038 ops/s (1439782912 bytes)
tcrypt: test 10 (192 bit key, 1472 byte blocks): 1075658 ops/s (1583368576 bytes)
tcrypt: test 11 (192 bit key, 8192 byte blocks): 199652 ops/s (1635549184 bytes)
tcrypt: test 12 (256 bit key, 16 byte blocks): 4979432 ops/s ( 79670912 bytes)
tcrypt: test 13 (256 bit key, 64 byte blocks): 4394406 ops/s ( 281241984 bytes)
tcrypt: test 14 (256 bit key, 256 byte blocks): 2999511 ops/s ( 767874816 bytes)
tcrypt: test 15 (256 bit key, 1024 byte blocks): 1294498 ops/s (1325565952 bytes)
tcrypt: test 16 (256 bit key, 1472 byte blocks): 981009 ops/s (1444045248 bytes)
tcrypt: test 17 (256 bit key, 8192 byte blocks): 200463 ops/s (1642192896 bytes)

testing speed of async cbc(aes) (cbc-aes-ce) encryption
tcrypt: test 0 (128 bit key, 16 byte blocks): 5895884 ops/s ( 94334144 bytes)
tcrypt: test 1 (128 bit key, 64 byte blocks): 4347437 ops/s (278235968 bytes)
tcrypt: test 2 (128 bit key, 256 byte blocks): 2135454 ops/s (546676224 bytes)
tcrypt: test 3 (128 bit key, 1024 byte blocks): 736839 ops/s (754523136 bytes)
tcrypt: test 4 (128 bit key, 1472 byte blocks): 533261 ops/s (784960192 bytes)
tcrypt: test 5 (128 bit key, 8192 byte blocks): 100850 ops/s (826163200 bytes)
tcrypt: test 6 (192 bit key, 16 byte blocks): 5745691 ops/s ( 91931056 bytes)
tcrypt: test 7 (192 bit key, 64 byte blocks): 4113271 ops/s (263249344 bytes)
tcrypt: test 8 (192 bit key, 256 byte blocks): 1932208 ops/s (494645248 bytes)
tcrypt: test 9 (192 bit key, 1024 byte blocks): 644555 ops/s (660024320 bytes)
tcrypt: test 10 (192 bit key, 1472 byte blocks): 464237 ops/s (683356864 bytes)
tcrypt: test 11 (192 bit key, 8192 byte blocks): 84019 ops/s (688283648 bytes)
tcrypt: test 12 (256 bit key, 16 byte blocks): 5620065 ops/s ( 89921040 bytes)
tcrypt: test 13 (256 bit key, 64 byte blocks): 3982991 ops/s (254911424 bytes)
tcrypt: test 14 (256 bit key, 256 byte blocks): 1830587 ops/s (468630272 bytes)
tcrypt: test 15 (256 bit key, 1024 byte blocks): 576151 ops/s (589978624 bytes)
tcrypt: test 16 (256 bit key, 1472 byte blocks): 412487 ops/s (607180864 bytes)
tcrypt: test 17 (256 bit key, 8192 byte blocks): 76378 ops/s (625688576 bytes)

testing speed of async cbc(aes) (cbc-aes-ce) decryption
tcrypt: test 0 (128 bit key, 16 byte blocks): 5821314 ops/s ( 93141024 bytes)
tcrypt: test 1 (128 bit key, 64 byte blocks): 5248040 ops/s ( 335874560 bytes)
tcrypt: test 2 (128 bit key, 256 byte blocks): 3677701 ops/s ( 941491456 bytes)
tcrypt: test 3 (128 bit key, 1024 byte blocks): 1650808 ops/s (1690427392 bytes)
tcrypt: test 4 (128 bit key, 1472 byte blocks): 1256545 ops/s (1849634240 bytes)
tcrypt: test 5 (128 bit key, 8192 byte blocks): 257922 ops/s (2112897024 bytes)
tcrypt: test 6 (192 bit key, 16 byte blocks): 5690108 ops/s ( 91041728 bytes)
tcrypt: test 7 (192 bit key, 64 byte blocks): 5086441 ops/s ( 325532224 bytes)
tcrypt: test 8 (192 bit key, 256 byte blocks): 3447562 ops/s ( 882575872 bytes)
tcrypt: test 9 (192 bit key, 1024 byte blocks): 1490136 ops/s (1525899264 bytes)
tcrypt: test 10 (192 bit key, 1472 byte blocks): 1124620 ops/s (1655440640 bytes)
tcrypt: test 11 (192 bit key, 8192 byte blocks): 201222 ops/s (1648410624 bytes)
tcrypt: test 12 (256 bit key, 16 byte blocks): 5567247 ops/s ( 89075952 bytes)
tcrypt: test 13 (256 bit key, 64 byte blocks): 5050010 ops/s ( 323200640 bytes)
tcrypt: test 14 (256 bit key, 256 byte blocks): 3290422 ops/s ( 842348032 bytes)
tcrypt: test 15 (256 bit key, 1024 byte blocks): 1359439 ops/s (1392065536 bytes)
tcrypt: test 16 (256 bit key, 1472 byte blocks): 1017751 ops/s (1498129472 bytes)
tcrypt: test 17 (256 bit key, 8192 byte blocks): 201492 ops/s (1650622464 bytes)


2019-06-19 16:30:24

by Ard Biesheuvel

Subject: [PATCH v3 3/6] md: dm-crypt: infer ESSIV block cipher from cipher string directly

Instead of allocating a crypto skcipher tfm 'foo' and attempting to
infer the encapsulated block cipher from the driver's 'name' field,
directly parse the string that we used to allocate the tfm. The two
are always identical (unless the allocation failed, in which case
we bail anyway), but using the string allows us to use it in the
allocation itself, which is something we will need when switching to
the 'essiv' crypto API template.
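
Concretely, for a spec such as "cbc(aes)", the block cipher name is
now taken from between the parentheses of the string we already hold
(a sketch of the relevant parsing, not the exact code):

  start = strchr(alg_name, '(');          /* "cbc(aes)" -> "(aes)" */
  end = strchr(start, ')');               /*            -> ")"     */
  cc->cipher = kstrndup(start + 1, end - start - 1, GFP_KERNEL); /* "aes" */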

Signed-off-by: Ard Biesheuvel <[email protected]>
---
drivers/md/dm-crypt.c | 35 +++++++++-----------
1 file changed, 15 insertions(+), 20 deletions(-)

diff --git a/drivers/md/dm-crypt.c b/drivers/md/dm-crypt.c
index 1b16d34bb785..f001f1104cb5 100644
--- a/drivers/md/dm-crypt.c
+++ b/drivers/md/dm-crypt.c
@@ -2321,25 +2321,17 @@ static int crypt_ctr_ivmode(struct dm_target *ti, const char *ivmode)
* The cc->cipher is currently used only in ESSIV.
* This should be probably done by crypto-api calls (once available...)
*/
-static int crypt_ctr_blkdev_cipher(struct crypt_config *cc)
+static int crypt_ctr_blkdev_cipher(struct crypt_config *cc, char *alg_name)
{
- const char *alg_name = NULL;
char *start, *end;

if (crypt_integrity_aead(cc)) {
- alg_name = crypto_tfm_alg_name(crypto_aead_tfm(any_tfm_aead(cc)));
- if (!alg_name)
- return -EINVAL;
if (crypt_integrity_hmac(cc)) {
alg_name = strchr(alg_name, ',');
if (!alg_name)
return -EINVAL;
}
alg_name++;
- } else {
- alg_name = crypto_tfm_alg_name(crypto_skcipher_tfm(any_tfm(cc)));
- if (!alg_name)
- return -EINVAL;
}

start = strchr(alg_name, '(');
@@ -2434,6 +2426,20 @@ static int crypt_ctr_cipher_new(struct dm_target *ti, char *cipher_in, char *key
if (*ivmode && !strcmp(*ivmode, "lmk"))
cc->tfms_count = 64;

+ if (crypt_integrity_aead(cc)) {
+ ret = crypt_ctr_auth_cipher(cc, cipher_api);
+ if (ret < 0) {
+ ti->error = "Invalid AEAD cipher spec";
+ return -ENOMEM;
+ }
+ }
+
+ ret = crypt_ctr_blkdev_cipher(cc, cipher_api);
+ if (ret < 0) {
+ ti->error = "Cannot allocate cipher string";
+ return -ENOMEM;
+ }
+
cc->key_parts = cc->tfms_count;

/* Allocate cipher */
@@ -2445,21 +2451,10 @@ static int crypt_ctr_cipher_new(struct dm_target *ti, char *cipher_in, char *key

/* Alloc AEAD, can be used only in new format. */
if (crypt_integrity_aead(cc)) {
- ret = crypt_ctr_auth_cipher(cc, cipher_api);
- if (ret < 0) {
- ti->error = "Invalid AEAD cipher spec";
- return -ENOMEM;
- }
cc->iv_size = crypto_aead_ivsize(any_tfm_aead(cc));
} else
cc->iv_size = crypto_skcipher_ivsize(any_tfm(cc));

- ret = crypt_ctr_blkdev_cipher(cc);
- if (ret < 0) {
- ti->error = "Cannot allocate cipher string";
- return -ENOMEM;
- }
-
return 0;
}

--
2.20.1

2019-06-19 16:30:26

by Ard Biesheuvel

Subject: [PATCH v3 1/6] crypto: essiv - create wrapper template for ESSIV generation

Implement a template that wraps a (skcipher,cipher,shash) or
(aead,cipher,shash) tuple so that we can consolidate the ESSIV handling
in fscrypt and dm-crypt and move it into the crypto API. This will result
in better test coverage, and will allow future changes to make the bare
cipher interface internal to the crypto subsystem, in order to increase
robustness of the API against misuse.
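
For reference, the two flavours of the template end up being
instantiated under names such as the following (the AEAD form matching
what dm-crypt uses for authenticated modes):

  essiv(cbc(aes),aes,sha256)
  essiv(authenc(hmac(sha256),cbc(aes)),aes,sha256)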

Signed-off-by: Ard Biesheuvel <[email protected]>
---
crypto/Kconfig | 4 +
crypto/Makefile | 1 +
crypto/essiv.c | 630 ++++++++++++++++++++
3 files changed, 635 insertions(+)

diff --git a/crypto/Kconfig b/crypto/Kconfig
index 3d056e7da65f..1aa47087c1a2 100644
--- a/crypto/Kconfig
+++ b/crypto/Kconfig
@@ -1917,6 +1917,10 @@ config CRYPTO_STATS
config CRYPTO_HASH_INFO
bool

+config CRYPTO_ESSIV
+ tristate
+ select CRYPTO_AUTHENC
+
source "drivers/crypto/Kconfig"
source "crypto/asymmetric_keys/Kconfig"
source "certs/Kconfig"
diff --git a/crypto/Makefile b/crypto/Makefile
index 266a4cdbb9e2..ad1d99ba6d56 100644
--- a/crypto/Makefile
+++ b/crypto/Makefile
@@ -148,6 +148,7 @@ obj-$(CONFIG_CRYPTO_USER_API_AEAD) += algif_aead.o
obj-$(CONFIG_CRYPTO_ZSTD) += zstd.o
obj-$(CONFIG_CRYPTO_OFB) += ofb.o
obj-$(CONFIG_CRYPTO_ECC) += ecc.o
+obj-$(CONFIG_CRYPTO_ESSIV) += essiv.o

ecdh_generic-y += ecdh.o
ecdh_generic-y += ecdh_helper.o
diff --git a/crypto/essiv.c b/crypto/essiv.c
new file mode 100644
index 000000000000..45e9d10b8614
--- /dev/null
+++ b/crypto/essiv.c
@@ -0,0 +1,630 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * ESSIV skcipher template for block encryption
+ *
+ * Copyright (c) 2019 Linaro, Ltd. <[email protected]>
+ *
+ * Heavily based on:
+ * adiantum length-preserving encryption mode
+ *
+ * Copyright 2018 Google LLC
+ */
+
+#include <crypto/authenc.h>
+#include <crypto/internal/aead.h>
+#include <crypto/internal/hash.h>
+#include <crypto/internal/skcipher.h>
+#include <crypto/scatterwalk.h>
+#include <linux/module.h>
+
+#include "internal.h"
+
+#define ESSIV_IV_SIZE sizeof(u64) // IV size of the outer algo
+#define MAX_INNER_IV_SIZE 16 // max IV size of inner algo
+
+struct essiv_instance_ctx {
+ union {
+ struct crypto_skcipher_spawn blockcipher_spawn;
+ struct crypto_aead_spawn aead_spawn;
+ } u;
+ struct crypto_spawn essiv_cipher_spawn;
+ struct crypto_shash_spawn hash_spawn;
+};
+
+struct essiv_tfm_ctx {
+ union {
+ struct crypto_skcipher *blockcipher;
+ struct crypto_aead *aead;
+ } u;
+ struct crypto_cipher *essiv_cipher;
+ struct crypto_shash *hash;
+};
+
+struct essiv_skcipher_request_ctx {
+ u8 iv[MAX_INNER_IV_SIZE];
+ struct skcipher_request blockcipher_req;
+};
+
+struct essiv_aead_request_ctx {
+ u8 iv[MAX_INNER_IV_SIZE];
+ struct scatterlist src[4], dst[4];
+ struct aead_request aead_req;
+};
+
+static int essiv_skcipher_setkey(struct crypto_skcipher *tfm,
+ const u8 *key, unsigned int keylen)
+{
+ struct essiv_tfm_ctx *tctx = crypto_skcipher_ctx(tfm);
+ SHASH_DESC_ON_STACK(desc, tctx->hash);
+ unsigned int saltsize;
+ u8 *salt;
+ int err;
+
+ crypto_skcipher_clear_flags(tctx->u.blockcipher, CRYPTO_TFM_REQ_MASK);
+ crypto_skcipher_set_flags(tctx->u.blockcipher,
+ crypto_skcipher_get_flags(tfm) &
+ CRYPTO_TFM_REQ_MASK);
+ err = crypto_skcipher_setkey(tctx->u.blockcipher, key, keylen);
+ crypto_skcipher_set_flags(tfm,
+ crypto_skcipher_get_flags(tctx->u.blockcipher) &
+ CRYPTO_TFM_RES_MASK);
+ if (err)
+ return err;
+
+ saltsize = crypto_shash_digestsize(tctx->hash);
+ salt = kmalloc(saltsize, GFP_KERNEL);
+ if (!salt)
+ return -ENOMEM;
+
+ desc->tfm = tctx->hash;
+ crypto_shash_digest(desc, key, keylen, salt);
+
+ crypto_cipher_clear_flags(tctx->essiv_cipher, CRYPTO_TFM_REQ_MASK);
+ crypto_cipher_set_flags(tctx->essiv_cipher,
+ crypto_skcipher_get_flags(tfm) &
+ CRYPTO_TFM_REQ_MASK);
+ err = crypto_cipher_setkey(tctx->essiv_cipher, salt, saltsize);
+ crypto_skcipher_set_flags(tfm,
+ crypto_cipher_get_flags(tctx->essiv_cipher) &
+ CRYPTO_TFM_RES_MASK);
+
+ kzfree(salt);
+ return err;
+}
+
+static int essiv_aead_setkey(struct crypto_aead *tfm, const u8 *key,
+ unsigned int keylen)
+{
+ struct essiv_tfm_ctx *tctx = crypto_aead_ctx(tfm);
+ SHASH_DESC_ON_STACK(desc, tctx->hash);
+ struct crypto_authenc_keys keys;
+ unsigned int saltsize;
+ u8 *salt;
+ int err;
+
+ crypto_aead_clear_flags(tctx->u.aead, CRYPTO_TFM_REQ_MASK);
+ crypto_aead_set_flags(tctx->u.aead, crypto_aead_get_flags(tfm) &
+ CRYPTO_TFM_REQ_MASK);
+ err = crypto_aead_setkey(tctx->u.aead, key, keylen);
+ crypto_aead_set_flags(tfm, crypto_aead_get_flags(tctx->u.aead) &
+ CRYPTO_TFM_RES_MASK);
+ if (err)
+ return err;
+
+ if (crypto_authenc_extractkeys(&keys, key, keylen) != 0) {
+ crypto_aead_set_flags(tfm, CRYPTO_TFM_RES_BAD_KEY_LEN);
+ return -EINVAL;
+ }
+
+ saltsize = crypto_shash_digestsize(tctx->hash);
+ salt = kmalloc(saltsize, GFP_KERNEL);
+ if (!salt)
+ return -ENOMEM;
+
+ desc->tfm = tctx->hash;
+ crypto_shash_init(desc);
+ crypto_shash_update(desc, keys.enckey, keys.enckeylen);
+ crypto_shash_finup(desc, keys.authkey, keys.authkeylen, salt);
+
+ crypto_cipher_clear_flags(tctx->essiv_cipher, CRYPTO_TFM_REQ_MASK);
+ crypto_cipher_set_flags(tctx->essiv_cipher, crypto_aead_get_flags(tfm) &
+ CRYPTO_TFM_REQ_MASK);
+ err = crypto_cipher_setkey(tctx->essiv_cipher, salt, saltsize);
+ crypto_aead_set_flags(tfm, crypto_cipher_get_flags(tctx->essiv_cipher) &
+ CRYPTO_TFM_RES_MASK);
+
+ kzfree(salt);
+ return err;
+}
+
+static int essiv_aead_setauthsize(struct crypto_aead *tfm,
+ unsigned int authsize)
+{
+ struct essiv_tfm_ctx *tctx = crypto_aead_ctx(tfm);
+
+ return crypto_aead_setauthsize(tctx->u.aead, authsize);
+}
+
+static void essiv_skcipher_done(struct crypto_async_request *areq, int err)
+{
+ struct skcipher_request *req = areq->data;
+
+ skcipher_request_complete(req, err);
+}
+
+static void essiv_aead_done(struct crypto_async_request *areq, int err)
+{
+ struct aead_request *req = areq->data;
+
+ aead_request_complete(req, err);
+}
+
+static void essiv_skcipher_prepare_subreq(struct skcipher_request *req)
+{
+ struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
+ const struct essiv_tfm_ctx *tctx = crypto_skcipher_ctx(tfm);
+ struct essiv_skcipher_request_ctx *rctx = skcipher_request_ctx(req);
+ struct skcipher_request *subreq = &rctx->blockcipher_req;
+
+ memset(rctx->iv, 0, crypto_cipher_blocksize(tctx->essiv_cipher));
+ memcpy(rctx->iv, req->iv, crypto_skcipher_ivsize(tfm));
+
+ crypto_cipher_encrypt_one(tctx->essiv_cipher, rctx->iv, rctx->iv);
+
+ skcipher_request_set_tfm(subreq, tctx->u.blockcipher);
+ skcipher_request_set_crypt(subreq, req->src, req->dst, req->cryptlen,
+ rctx->iv);
+ skcipher_request_set_callback(subreq, req->base.flags,
+ essiv_skcipher_done, req);
+}
+
+static int essiv_aead_prepare_subreq(struct aead_request *req)
+{
+ struct crypto_aead *tfm = crypto_aead_reqtfm(req);
+ const struct essiv_tfm_ctx *tctx = crypto_aead_ctx(tfm);
+ struct essiv_aead_request_ctx *rctx = aead_request_ctx(req);
+ int ivsize = crypto_cipher_blocksize(tctx->essiv_cipher);
+ int ssize = req->assoclen - crypto_aead_ivsize(tfm);
+ struct aead_request *subreq = &rctx->aead_req;
+ struct scatterlist *sg;
+
+ /*
+ * dm-crypt embeds the sector number and the IV in the AAD region so we
+ * have to splice the converted IV into the subrequest that we pass on
+ * to the AEAD transform. This means we are tightly coupled to dm-crypt,
+ * but that should be the only user of this code in AEAD mode.
+ */
+ if (ssize < 0 || sg_nents_for_len(req->src, ssize) != 1)
+ return -EINVAL;
+
+ memset(rctx->iv, 0, ivsize);
+ memcpy(rctx->iv, req->iv, crypto_aead_ivsize(tfm));
+
+ crypto_cipher_encrypt_one(tctx->essiv_cipher, rctx->iv, rctx->iv);
+
+ sg_init_table(rctx->src, 4);
+ sg_set_page(rctx->src, sg_page(req->src), ssize, req->src->offset);
+ sg_set_buf(rctx->src + 1, rctx->iv, ivsize);
+ sg = scatterwalk_ffwd(rctx->src + 2, req->src, req->assoclen);
+ if (sg != rctx->src + 2)
+ sg_chain(rctx->src, 3, sg);
+
+ sg_init_table(rctx->dst, 4);
+ sg_set_page(rctx->dst, sg_page(req->dst), ssize, req->dst->offset);
+ sg_set_buf(rctx->dst + 1, rctx->iv, ivsize);
+ sg = scatterwalk_ffwd(rctx->dst + 2, req->dst, req->assoclen);
+ if (sg != rctx->dst + 2)
+ sg_chain(rctx->dst, 3, sg);
+
+ aead_request_set_tfm(subreq, tctx->u.aead);
+ aead_request_set_crypt(subreq, rctx->src, rctx->dst, req->cryptlen,
+ rctx->iv);
+ aead_request_set_ad(subreq, ssize + ivsize);
+ aead_request_set_callback(subreq, req->base.flags, essiv_aead_done, req);
+
+ return 0;
+}
+
+static int essiv_skcipher_encrypt(struct skcipher_request *req)
+{
+ struct essiv_skcipher_request_ctx *rctx = skcipher_request_ctx(req);
+
+ essiv_skcipher_prepare_subreq(req);
+ return crypto_skcipher_encrypt(&rctx->blockcipher_req);
+}
+
+static int essiv_aead_encrypt(struct aead_request *req)
+{
+ struct essiv_aead_request_ctx *rctx = aead_request_ctx(req);
+ int err;
+
+ err = essiv_aead_prepare_subreq(req);
+ if (err)
+ return err;
+ return crypto_aead_encrypt(&rctx->aead_req);
+}
+
+static int essiv_skcipher_decrypt(struct skcipher_request *req)
+{
+ struct essiv_skcipher_request_ctx *rctx = skcipher_request_ctx(req);
+
+ essiv_skcipher_prepare_subreq(req);
+ return crypto_skcipher_decrypt(&rctx->blockcipher_req);
+}
+
+static int essiv_aead_decrypt(struct aead_request *req)
+{
+ struct essiv_aead_request_ctx *rctx = aead_request_ctx(req);
+ int err;
+
+ err = essiv_aead_prepare_subreq(req);
+ if (err)
+ return err;
+
+ return crypto_aead_decrypt(&rctx->aead_req);
+}
+
+static int essiv_init_tfm(struct essiv_instance_ctx *ictx,
+ struct essiv_tfm_ctx *tctx)
+{
+ struct crypto_cipher *essiv_cipher;
+ struct crypto_shash *hash;
+ int err;
+
+ essiv_cipher = crypto_spawn_cipher(&ictx->essiv_cipher_spawn);
+ if (IS_ERR(essiv_cipher))
+ return PTR_ERR(essiv_cipher);
+
+ hash = crypto_spawn_shash(&ictx->hash_spawn);
+ if (IS_ERR(hash)) {
+ err = PTR_ERR(hash);
+ goto err_free_essiv_cipher;
+ }
+
+ tctx->essiv_cipher = essiv_cipher;
+ tctx->hash = hash;
+
+ return 0;
+
+err_free_essiv_cipher:
+ crypto_free_cipher(essiv_cipher);
+ return err;
+}
+
+static int essiv_skcipher_init_tfm(struct crypto_skcipher *tfm)
+{
+ struct skcipher_instance *inst = skcipher_alg_instance(tfm);
+ struct essiv_instance_ctx *ictx = skcipher_instance_ctx(inst);
+ struct essiv_tfm_ctx *tctx = crypto_skcipher_ctx(tfm);
+ struct crypto_skcipher *blockcipher;
+ unsigned int subreq_size;
+ int err;
+
+ BUILD_BUG_ON(offsetofend(struct essiv_skcipher_request_ctx,
+ blockcipher_req) !=
+ sizeof(struct essiv_skcipher_request_ctx));
+
+ blockcipher = crypto_spawn_skcipher(&ictx->u.blockcipher_spawn);
+ if (IS_ERR(blockcipher))
+ return PTR_ERR(blockcipher);
+
+ subreq_size = FIELD_SIZEOF(struct essiv_skcipher_request_ctx,
+ blockcipher_req) +
+ crypto_skcipher_reqsize(blockcipher);
+
+ crypto_skcipher_set_reqsize(tfm, offsetof(struct essiv_skcipher_request_ctx,
+ blockcipher_req) + subreq_size);
+
+ err = essiv_init_tfm(ictx, tctx);
+ if (err)
+ crypto_free_skcipher(blockcipher);
+
+ tctx->u.blockcipher = blockcipher;
+ return err;
+}
+
+static int essiv_aead_init_tfm(struct crypto_aead *tfm)
+{
+ struct aead_instance *inst = aead_alg_instance(tfm);
+ struct essiv_instance_ctx *ictx = aead_instance_ctx(inst);
+ struct essiv_tfm_ctx *tctx = crypto_aead_ctx(tfm);
+ struct crypto_aead *aead;
+ unsigned int subreq_size;
+ int err;
+
+ BUILD_BUG_ON(offsetofend(struct essiv_aead_request_ctx, aead_req) !=
+ sizeof(struct essiv_aead_request_ctx));
+
+ aead = crypto_spawn_aead(&ictx->u.aead_spawn);
+ if (IS_ERR(aead))
+ return PTR_ERR(aead);
+
+ subreq_size = FIELD_SIZEOF(struct essiv_aead_request_ctx, aead_req) +
+ crypto_aead_reqsize(aead);
+
+ crypto_aead_set_reqsize(tfm, offsetof(struct essiv_aead_request_ctx,
+ aead_req) + subreq_size);
+
+ err = essiv_init_tfm(ictx, tctx);
+ if (err)
+ crypto_free_aead(aead);
+
+ tctx->u.aead = aead;
+ return err;
+}
+
+static void essiv_skcipher_exit_tfm(struct crypto_skcipher *tfm)
+{
+ struct essiv_tfm_ctx *tctx = crypto_skcipher_ctx(tfm);
+
+ crypto_free_skcipher(tctx->u.blockcipher);
+ crypto_free_cipher(tctx->essiv_cipher);
+ crypto_free_shash(tctx->hash);
+}
+
+static void essiv_aead_exit_tfm(struct crypto_aead *tfm)
+{
+ struct essiv_tfm_ctx *tctx = crypto_aead_ctx(tfm);
+
+ crypto_free_aead(tctx->u.aead);
+ crypto_free_cipher(tctx->essiv_cipher);
+ crypto_free_shash(tctx->hash);
+}
+
+static void essiv_skcipher_free_instance(struct skcipher_instance *inst)
+{
+ struct essiv_instance_ctx *ictx = skcipher_instance_ctx(inst);
+
+ crypto_drop_skcipher(&ictx->u.blockcipher_spawn);
+ crypto_drop_spawn(&ictx->essiv_cipher_spawn);
+ crypto_drop_shash(&ictx->hash_spawn);
+ kfree(inst);
+}
+
+static void essiv_aead_free_instance(struct aead_instance *inst)
+{
+ struct essiv_instance_ctx *ictx = aead_instance_ctx(inst);
+
+ crypto_drop_aead(&ictx->u.aead_spawn);
+ crypto_drop_spawn(&ictx->essiv_cipher_spawn);
+ crypto_drop_shash(&ictx->hash_spawn);
+ kfree(inst);
+}
+
+static bool essiv_supported_algorithms(struct crypto_alg *essiv_cipher_alg,
+ struct shash_alg *hash_alg,
+ int ivsize)
+{
+ if (hash_alg->digestsize < essiv_cipher_alg->cra_cipher.cia_min_keysize ||
+ hash_alg->digestsize > essiv_cipher_alg->cra_cipher.cia_max_keysize)
+ return false;
+
+ if (ivsize != essiv_cipher_alg->cra_blocksize)
+ return false;
+
+ if (ivsize > MAX_INNER_IV_SIZE)
+ return false;
+
+ return true;
+}
+
+static int essiv_create(struct crypto_template *tmpl, struct rtattr **tb)
+{
+ struct crypto_attr_type *algt;
+ const char *blockcipher_name;
+ const char *essiv_cipher_name;
+ const char *shash_name;
+ struct skcipher_instance *skcipher_inst = NULL;
+ struct aead_instance *aead_inst = NULL;
+ struct crypto_instance *inst;
+ struct crypto_alg *base, *block_base;
+ struct essiv_instance_ctx *ictx;
+ struct skcipher_alg *blockcipher_alg = NULL;
+ struct aead_alg *aead_alg = NULL;
+ struct crypto_alg *essiv_cipher_alg;
+ struct crypto_alg *_hash_alg;
+ struct shash_alg *hash_alg;
+ int ivsize;
+ u32 type;
+ int err;
+
+ algt = crypto_get_attr_type(tb);
+ if (IS_ERR(algt))
+ return PTR_ERR(algt);
+
+ blockcipher_name = crypto_attr_alg_name(tb[1]);
+ if (IS_ERR(blockcipher_name))
+ return PTR_ERR(blockcipher_name);
+
+ essiv_cipher_name = crypto_attr_alg_name(tb[2]);
+ if (IS_ERR(essiv_cipher_name))
+ return PTR_ERR(essiv_cipher_name);
+
+ shash_name = crypto_attr_alg_name(tb[3]);
+ if (IS_ERR(shash_name))
+ return PTR_ERR(shash_name);
+
+ type = algt->type & algt->mask;
+
+ switch (type) {
+ case CRYPTO_ALG_TYPE_BLKCIPHER:
+ skcipher_inst = kzalloc(sizeof(*skcipher_inst) +
+ sizeof(*ictx), GFP_KERNEL);
+ if (!skcipher_inst)
+ return -ENOMEM;
+ inst = skcipher_crypto_instance(skcipher_inst);
+ base = &skcipher_inst->alg.base;
+ ictx = crypto_instance_ctx(inst);
+
+ /* Block cipher, e.g. "cbc(aes)" */
+ crypto_set_skcipher_spawn(&ictx->u.blockcipher_spawn, inst);
+ err = crypto_grab_skcipher(&ictx->u.blockcipher_spawn,
+ blockcipher_name, 0,
+ crypto_requires_sync(algt->type,
+ algt->mask));
+ if (err)
+ goto out_free_inst;
+ blockcipher_alg = crypto_spawn_skcipher_alg(&ictx->u.blockcipher_spawn);
+ block_base = &blockcipher_alg->base;
+ ivsize = blockcipher_alg->ivsize;
+ break;
+
+ case CRYPTO_ALG_TYPE_AEAD:
+ aead_inst = kzalloc(sizeof(*aead_inst) +
+ sizeof(*ictx), GFP_KERNEL);
+ if (!aead_inst)
+ return -ENOMEM;
+ inst = aead_crypto_instance(aead_inst);
+ base = &aead_inst->alg.base;
+ ictx = crypto_instance_ctx(inst);
+
+ /* AEAD cipher, e.g. "authenc(hmac(sha256),cbc(aes))" */
+ crypto_set_aead_spawn(&ictx->u.aead_spawn, inst);
+ err = crypto_grab_aead(&ictx->u.aead_spawn,
+ blockcipher_name, 0,
+ crypto_requires_sync(algt->type,
+ algt->mask));
+ if (err)
+ goto out_free_inst;
+ aead_alg = crypto_spawn_aead_alg(&ictx->u.aead_spawn);
+ block_base = &aead_alg->base;
+ ivsize = aead_alg->ivsize;
+ break;
+
+ default:
+ return -EINVAL;
+ }
+
+ /* Block cipher, e.g. "aes" */
+ crypto_set_spawn(&ictx->essiv_cipher_spawn, inst);
+ err = crypto_grab_spawn(&ictx->essiv_cipher_spawn, essiv_cipher_name,
+ CRYPTO_ALG_TYPE_CIPHER, CRYPTO_ALG_TYPE_MASK);
+ if (err)
+ goto out_drop_blockcipher;
+ essiv_cipher_alg = ictx->essiv_cipher_spawn.alg;
+
+ /* Synchronous hash, e.g., "sha256" */
+ _hash_alg = crypto_alg_mod_lookup(shash_name,
+ CRYPTO_ALG_TYPE_SHASH,
+ CRYPTO_ALG_TYPE_MASK);
+ if (IS_ERR(_hash_alg)) {
+ err = PTR_ERR(_hash_alg);
+ goto out_drop_essiv_cipher;
+ }
+ hash_alg = __crypto_shash_alg(_hash_alg);
+ err = crypto_init_shash_spawn(&ictx->hash_spawn, hash_alg, inst);
+ if (err)
+ goto out_put_hash;
+
+ /* Check the set of algorithms */
+ if (!essiv_supported_algorithms(essiv_cipher_alg, hash_alg, ivsize)) {
+ pr_warn("Unsupported essiv instantiation: (%s,%s,%s)\n",
+ block_base->cra_name,
+ essiv_cipher_alg->cra_name,
+ hash_alg->base.cra_name);
+ err = -EINVAL;
+ goto out_drop_hash;
+ }
+
+ /* Instance fields */
+
+ err = -ENAMETOOLONG;
+ if (snprintf(base->cra_name, CRYPTO_MAX_ALG_NAME,
+ "essiv(%s,%s,%s)", block_base->cra_name,
+ essiv_cipher_alg->cra_name,
+ hash_alg->base.cra_name) >= CRYPTO_MAX_ALG_NAME)
+ goto out_drop_hash;
+ if (snprintf(base->cra_driver_name, CRYPTO_MAX_ALG_NAME,
+ "essiv(%s,%s,%s)",
+ block_base->cra_driver_name,
+ essiv_cipher_alg->cra_driver_name,
+ hash_alg->base.cra_driver_name) >= CRYPTO_MAX_ALG_NAME)
+ goto out_drop_hash;
+
+ base->cra_flags = block_base->cra_flags & CRYPTO_ALG_ASYNC;
+ base->cra_blocksize = block_base->cra_blocksize;
+ base->cra_ctxsize = sizeof(struct essiv_tfm_ctx);
+ base->cra_alignmask = block_base->cra_alignmask;
+ base->cra_priority = block_base->cra_priority;
+
+ if (type == CRYPTO_ALG_TYPE_BLKCIPHER) {
+ skcipher_inst->alg.setkey = essiv_skcipher_setkey;
+ skcipher_inst->alg.encrypt = essiv_skcipher_encrypt;
+ skcipher_inst->alg.decrypt = essiv_skcipher_decrypt;
+ skcipher_inst->alg.init = essiv_skcipher_init_tfm;
+ skcipher_inst->alg.exit = essiv_skcipher_exit_tfm;
+
+ skcipher_inst->alg.min_keysize = crypto_skcipher_alg_min_keysize(blockcipher_alg);
+ skcipher_inst->alg.max_keysize = crypto_skcipher_alg_max_keysize(blockcipher_alg);
+ skcipher_inst->alg.ivsize = ESSIV_IV_SIZE;
+ skcipher_inst->alg.chunksize = blockcipher_alg->chunksize;
+ skcipher_inst->alg.walksize = blockcipher_alg->walksize;
+
+ skcipher_inst->free = essiv_skcipher_free_instance;
+
+ err = skcipher_register_instance(tmpl, skcipher_inst);
+ } else {
+ aead_inst->alg.setkey = essiv_aead_setkey;
+ aead_inst->alg.setauthsize = essiv_aead_setauthsize;
+ aead_inst->alg.encrypt = essiv_aead_encrypt;
+ aead_inst->alg.decrypt = essiv_aead_decrypt;
+ aead_inst->alg.init = essiv_aead_init_tfm;
+ aead_inst->alg.exit = essiv_aead_exit_tfm;
+
+ aead_inst->alg.ivsize = ESSIV_IV_SIZE;
+ aead_inst->alg.maxauthsize = aead_alg->maxauthsize;
+ aead_inst->alg.chunksize = aead_alg->chunksize;
+
+ aead_inst->free = essiv_aead_free_instance;
+
+ err = aead_register_instance(tmpl, aead_inst);
+ }
+
+ if (err)
+ goto out_drop_hash;
+
+ crypto_mod_put(_hash_alg);
+ return 0;
+
+out_drop_hash:
+ crypto_drop_shash(&ictx->hash_spawn);
+out_put_hash:
+ crypto_mod_put(_hash_alg);
+out_drop_essiv_cipher:
+ crypto_drop_spawn(&ictx->essiv_cipher_spawn);
+out_drop_blockcipher:
+ if (type == CRYPTO_ALG_TYPE_BLKCIPHER) {
+ crypto_drop_skcipher(&ictx->u.blockcipher_spawn);
+ } else {
+ crypto_drop_aead(&ictx->u.aead_spawn);
+ }
+out_free_inst:
+ kfree(skcipher_inst);
+ kfree(aead_inst);
+ return err;
+}
+
+/* essiv(blockcipher_name, essiv_cipher_name, shash_name) */
+static struct crypto_template essiv_tmpl = {
+ .name = "essiv",
+ .create = essiv_create,
+ .module = THIS_MODULE,
+};
+
+static int __init essiv_module_init(void)
+{
+ return crypto_register_template(&essiv_tmpl);
+}
+
+static void __exit essiv_module_exit(void)
+{
+ crypto_unregister_template(&essiv_tmpl);
+}
+
+subsys_initcall(essiv_module_init);
+module_exit(essiv_module_exit);
+
+MODULE_DESCRIPTION("ESSIV skcipher/aead wrapper for block encryption");
+MODULE_LICENSE("GPL v2");
+MODULE_ALIAS_CRYPTO("essiv");
--
2.20.1

2019-06-19 16:30:28

by Ard Biesheuvel

Subject: [PATCH v3 4/6] md: dm-crypt: switch to ESSIV crypto API template

Replace the explicit ESSIV handling in the dm-crypt driver with calls
into the crypto API, which can now perform this processing within the
crypto subsystem itself.
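
As an example of the resulting translation: a legacy dm-crypt table
spec such as

  aes-cbc-essiv:sha256

now turns into a single allocation of

  essiv(cbc(aes),aes,sha256)

rather than a cbc(aes) skcipher plus a separately managed hash and
cipher for the IV generation.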

Signed-off-by: Ard Biesheuvel <[email protected]>
---
drivers/md/Kconfig | 1 +
drivers/md/dm-crypt.c | 208 +++-----------------
2 files changed, 31 insertions(+), 178 deletions(-)

diff --git a/drivers/md/Kconfig b/drivers/md/Kconfig
index 45254b3ef715..30ca87cf25db 100644
--- a/drivers/md/Kconfig
+++ b/drivers/md/Kconfig
@@ -271,6 +271,7 @@ config DM_CRYPT
depends on BLK_DEV_DM
select CRYPTO
select CRYPTO_CBC
+ select CRYPTO_ESSIV
---help---
This device-mapper target allows you to create a device that
transparently encrypts the data on it. You'll need to activate
diff --git a/drivers/md/dm-crypt.c b/drivers/md/dm-crypt.c
index f001f1104cb5..12d28880ec34 100644
--- a/drivers/md/dm-crypt.c
+++ b/drivers/md/dm-crypt.c
@@ -98,11 +98,6 @@ struct crypt_iv_operations {
struct dm_crypt_request *dmreq);
};

-struct iv_essiv_private {
- struct crypto_shash *hash_tfm;
- u8 *salt;
-};
-
struct iv_benbi_private {
int shift;
};
@@ -155,7 +150,6 @@ struct crypt_config {

const struct crypt_iv_operations *iv_gen_ops;
union {
- struct iv_essiv_private essiv;
struct iv_benbi_private benbi;
struct iv_lmk_private lmk;
struct iv_tcw_private tcw;
@@ -165,8 +159,6 @@ struct crypt_config {
unsigned short int sector_size;
unsigned char sector_shift;

- /* ESSIV: struct crypto_cipher *essiv_tfm */
- void *iv_private;
union {
struct crypto_skcipher **tfms;
struct crypto_aead **tfms_aead;
@@ -323,161 +315,6 @@ static int crypt_iv_plain64be_gen(struct crypt_config *cc, u8 *iv,
return 0;
}

-/* Initialise ESSIV - compute salt but no local memory allocations */
-static int crypt_iv_essiv_init(struct crypt_config *cc)
-{
- struct iv_essiv_private *essiv = &cc->iv_gen_private.essiv;
- SHASH_DESC_ON_STACK(desc, essiv->hash_tfm);
- struct crypto_cipher *essiv_tfm;
- int err;
-
- desc->tfm = essiv->hash_tfm;
-
- err = crypto_shash_digest(desc, cc->key, cc->key_size, essiv->salt);
- shash_desc_zero(desc);
- if (err)
- return err;
-
- essiv_tfm = cc->iv_private;
-
- err = crypto_cipher_setkey(essiv_tfm, essiv->salt,
- crypto_shash_digestsize(essiv->hash_tfm));
- if (err)
- return err;
-
- return 0;
-}
-
-/* Wipe salt and reset key derived from volume key */
-static int crypt_iv_essiv_wipe(struct crypt_config *cc)
-{
- struct iv_essiv_private *essiv = &cc->iv_gen_private.essiv;
- unsigned salt_size = crypto_shash_digestsize(essiv->hash_tfm);
- struct crypto_cipher *essiv_tfm;
- int r, err = 0;
-
- memset(essiv->salt, 0, salt_size);
-
- essiv_tfm = cc->iv_private;
- r = crypto_cipher_setkey(essiv_tfm, essiv->salt, salt_size);
- if (r)
- err = r;
-
- return err;
-}
-
-/* Allocate the cipher for ESSIV */
-static struct crypto_cipher *alloc_essiv_cipher(struct crypt_config *cc,
- struct dm_target *ti,
- const u8 *salt,
- unsigned int saltsize)
-{
- struct crypto_cipher *essiv_tfm;
- int err;
-
- /* Setup the essiv_tfm with the given salt */
- essiv_tfm = crypto_alloc_cipher(cc->cipher, 0, 0);
- if (IS_ERR(essiv_tfm)) {
- ti->error = "Error allocating crypto tfm for ESSIV";
- return essiv_tfm;
- }
-
- if (crypto_cipher_blocksize(essiv_tfm) != cc->iv_size) {
- ti->error = "Block size of ESSIV cipher does "
- "not match IV size of block cipher";
- crypto_free_cipher(essiv_tfm);
- return ERR_PTR(-EINVAL);
- }
-
- err = crypto_cipher_setkey(essiv_tfm, salt, saltsize);
- if (err) {
- ti->error = "Failed to set key for ESSIV cipher";
- crypto_free_cipher(essiv_tfm);
- return ERR_PTR(err);
- }
-
- return essiv_tfm;
-}
-
-static void crypt_iv_essiv_dtr(struct crypt_config *cc)
-{
- struct crypto_cipher *essiv_tfm;
- struct iv_essiv_private *essiv = &cc->iv_gen_private.essiv;
-
- crypto_free_shash(essiv->hash_tfm);
- essiv->hash_tfm = NULL;
-
- kzfree(essiv->salt);
- essiv->salt = NULL;
-
- essiv_tfm = cc->iv_private;
-
- if (essiv_tfm)
- crypto_free_cipher(essiv_tfm);
-
- cc->iv_private = NULL;
-}
-
-static int crypt_iv_essiv_ctr(struct crypt_config *cc, struct dm_target *ti,
- const char *opts)
-{
- struct crypto_cipher *essiv_tfm = NULL;
- struct crypto_shash *hash_tfm = NULL;
- u8 *salt = NULL;
- int err;
-
- if (!opts) {
- ti->error = "Digest algorithm missing for ESSIV mode";
- return -EINVAL;
- }
-
- /* Allocate hash algorithm */
- hash_tfm = crypto_alloc_shash(opts, 0, 0);
- if (IS_ERR(hash_tfm)) {
- ti->error = "Error initializing ESSIV hash";
- err = PTR_ERR(hash_tfm);
- goto bad;
- }
-
- salt = kzalloc(crypto_shash_digestsize(hash_tfm), GFP_KERNEL);
- if (!salt) {
- ti->error = "Error kmallocing salt storage in ESSIV";
- err = -ENOMEM;
- goto bad;
- }
-
- cc->iv_gen_private.essiv.salt = salt;
- cc->iv_gen_private.essiv.hash_tfm = hash_tfm;
-
- essiv_tfm = alloc_essiv_cipher(cc, ti, salt,
- crypto_shash_digestsize(hash_tfm));
- if (IS_ERR(essiv_tfm)) {
- crypt_iv_essiv_dtr(cc);
- return PTR_ERR(essiv_tfm);
- }
- cc->iv_private = essiv_tfm;
-
- return 0;
-
-bad:
- if (hash_tfm && !IS_ERR(hash_tfm))
- crypto_free_shash(hash_tfm);
- kfree(salt);
- return err;
-}
-
-static int crypt_iv_essiv_gen(struct crypt_config *cc, u8 *iv,
- struct dm_crypt_request *dmreq)
-{
- struct crypto_cipher *essiv_tfm = cc->iv_private;
-
- memset(iv, 0, cc->iv_size);
- *(__le64 *)iv = cpu_to_le64(dmreq->iv_sector);
- crypto_cipher_encrypt_one(essiv_tfm, iv, iv);
-
- return 0;
-}
-
static int crypt_iv_benbi_ctr(struct crypt_config *cc, struct dm_target *ti,
const char *opts)
{
@@ -853,14 +690,6 @@ static const struct crypt_iv_operations crypt_iv_plain64be_ops = {
.generator = crypt_iv_plain64be_gen
};

-static const struct crypt_iv_operations crypt_iv_essiv_ops = {
- .ctr = crypt_iv_essiv_ctr,
- .dtr = crypt_iv_essiv_dtr,
- .init = crypt_iv_essiv_init,
- .wipe = crypt_iv_essiv_wipe,
- .generator = crypt_iv_essiv_gen
-};
-
static const struct crypt_iv_operations crypt_iv_benbi_ops = {
.ctr = crypt_iv_benbi_ctr,
.dtr = crypt_iv_benbi_dtr,
@@ -2283,7 +2112,7 @@ static int crypt_ctr_ivmode(struct dm_target *ti, const char *ivmode)
else if (strcmp(ivmode, "plain64be") == 0)
cc->iv_gen_ops = &crypt_iv_plain64be_ops;
else if (strcmp(ivmode, "essiv") == 0)
- cc->iv_gen_ops = &crypt_iv_essiv_ops;
+ cc->iv_gen_ops = &crypt_iv_plain64_ops;
else if (strcmp(ivmode, "benbi") == 0)
cc->iv_gen_ops = &crypt_iv_benbi_ops;
else if (strcmp(ivmode, "null") == 0)
@@ -2397,7 +2226,7 @@ static int crypt_ctr_cipher_new(struct dm_target *ti, char *cipher_in, char *key
char **ivmode, char **ivopts)
{
struct crypt_config *cc = ti->private;
- char *tmp, *cipher_api;
+ char *tmp, *cipher_api, buf[CRYPTO_MAX_ALG_NAME];
int ret = -EINVAL;

cc->tfms_count = 1;
@@ -2435,9 +2264,19 @@ static int crypt_ctr_cipher_new(struct dm_target *ti, char *cipher_in, char *key
}

ret = crypt_ctr_blkdev_cipher(cc, cipher_api);
- if (ret < 0) {
- ti->error = "Cannot allocate cipher string";
- return -ENOMEM;
+ if (ret < 0)
+ goto bad_mem;
+
+ if (*ivmode && !strcmp(*ivmode, "essiv")) {
+ if (!*ivopts) {
+ ti->error = "Digest algorithm missing for ESSIV mode";
+ return -EINVAL;
+ }
+ ret = snprintf(buf, CRYPTO_MAX_ALG_NAME, "essiv(%s,%s,%s)",
+ cipher_api, cc->cipher, *ivopts);
+ if (ret < 0)
+ goto bad_mem;
+ cipher_api = buf;
}

cc->key_parts = cc->tfms_count;
@@ -2456,6 +2295,9 @@ static int crypt_ctr_cipher_new(struct dm_target *ti, char *cipher_in, char *key
cc->iv_size = crypto_skcipher_ivsize(any_tfm(cc));

return 0;
+bad_mem:
+ ti->error = "Cannot allocate cipher string";
+ return -ENOMEM;
}

static int crypt_ctr_cipher_old(struct dm_target *ti, char *cipher_in, char *key,
@@ -2515,8 +2357,18 @@ static int crypt_ctr_cipher_old(struct dm_target *ti, char *cipher_in, char *key
if (!cipher_api)
goto bad_mem;

- ret = snprintf(cipher_api, CRYPTO_MAX_ALG_NAME,
- "%s(%s)", chainmode, cipher);
+ if (*ivmode && !strcmp(*ivmode, "essiv")) {
+ if (!*ivopts) {
+ ti->error = "Digest algorithm missing for ESSIV mode";
+ return -EINVAL;
+ }
+ ret = snprintf(cipher_api, CRYPTO_MAX_ALG_NAME,
+ "essiv(%s(%s),%s,%s)", chainmode, cipher,
+ cipher, *ivopts);
+ } else {
+ ret = snprintf(cipher_api, CRYPTO_MAX_ALG_NAME,
+ "%s(%s)", chainmode, cipher);
+ }
if (ret < 0) {
kfree(cipher_api);
goto bad_mem;
--
2.20.1

2019-06-19 16:30:32

by Ard Biesheuvel

Subject: [PATCH v3 5/6] crypto: essiv - add test vector for essiv(cbc(aes),aes,sha256)

Add a test vector for the ESSIV mode that is the most widely used,
i.e., using cbc(aes) and sha256.
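
The IV handling that these vectors exercise can be reproduced outside
the kernel. Below is a minimal userspace sketch (assuming OpenSSL's
libcrypto is available; not part of this patch) that derives the inner
CBC IV the template feeds to cbc(aes) from the key and the 8-byte
outer IV; the ciphertexts are then ordinary cbc(aes) encryptions under
that derived IV:

  #include <openssl/evp.h>
  #include <openssl/sha.h>
  #include <stdint.h>
  #include <string.h>

  static void essiv_inner_iv(const uint8_t *key, size_t keylen,
                             const uint8_t outer_iv[8], uint8_t iv[16])
  {
          uint8_t salt[SHA256_DIGEST_LENGTH];
          uint8_t block[16] = { 0 };
          EVP_CIPHER_CTX *ctx;
          int outl;

          SHA256(key, keylen, salt);    /* salt = SHA-256(key), 32 bytes */
          memcpy(block, outer_iv, 8);   /* zero-pad the 64-bit outer IV  */

          ctx = EVP_CIPHER_CTX_new();   /* AES-256, as the salt is 32 bytes */
          EVP_EncryptInit_ex(ctx, EVP_aes_256_ecb(), NULL, salt, NULL);
          EVP_CIPHER_CTX_set_padding(ctx, 0);
          EVP_EncryptUpdate(ctx, iv, &outl, block, sizeof(block));
          EVP_CIPHER_CTX_free(ctx);     /* iv = AES_salt(outer IV) */
  }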

Signed-off-by: Ard Biesheuvel <[email protected]>
---
crypto/tcrypt.c | 9 +
crypto/testmgr.c | 6 +
crypto/testmgr.h | 208 ++++++++++++++++++++
3 files changed, 223 insertions(+)

diff --git a/crypto/tcrypt.c b/crypto/tcrypt.c
index ad78ab5b93cb..f990a209197e 100644
--- a/crypto/tcrypt.c
+++ b/crypto/tcrypt.c
@@ -2327,6 +2327,15 @@ static int do_test(const char *alg, u32 type, u32 mask, int m, u32 num_mb)
0, speed_template_32);
break;

+ case 220:
+ test_acipher_speed("essiv(cbc(aes),aes,sha256)",
+ ENCRYPT, sec, NULL, 0,
+ speed_template_16_24_32);
+ test_acipher_speed("essiv(cbc(aes),aes,sha256)",
+ DECRYPT, sec, NULL, 0,
+ speed_template_16_24_32);
+ break;
+
case 300:
if (alg) {
test_hash_speed(alg, sec, generic_hash_speed_template);
diff --git a/crypto/testmgr.c b/crypto/testmgr.c
index 658a7eeebab2..23703f3e9cbb 100644
--- a/crypto/testmgr.c
+++ b/crypto/testmgr.c
@@ -4253,6 +4253,12 @@ static const struct alg_test_desc alg_test_descs[] = {
.suite = {
.akcipher = __VECS(ecrdsa_tv_template)
}
+ }, {
+ .alg = "essiv(cbc(aes),aes,sha256)",
+ .test = alg_test_skcipher,
+ .suite = {
+ .cipher = __VECS(essiv_aes_cbc_tv_template)
+ }
}, {
.alg = "gcm(aes)",
.generic_driver = "gcm_base(ctr(aes-generic),ghash-generic)",
diff --git a/crypto/testmgr.h b/crypto/testmgr.h
index 1fdae5993bc3..e515e74d6a40 100644
--- a/crypto/testmgr.h
+++ b/crypto/testmgr.h
@@ -33575,4 +33575,212 @@ static const struct comp_testvec zstd_decomp_tv_template[] = {
"functions.",
},
};
+
+/* based on aes_cbc_tv_template */
+static const struct cipher_testvec essiv_aes_cbc_tv_template[] = {
+ {
+ .key = "\x06\xa9\x21\x40\x36\xb8\xa1\x5b"
+ "\x51\x2e\x03\xd5\x34\x12\x00\x06",
+ .klen = 16,
+ .iv = "\x3d\xaf\xba\x42\x9d\x9e\xb4\x30",
+ .ptext = "Single block msg",
+ .ctext = "\xfa\x59\xe7\x5f\x41\x56\x65\xc3"
+ "\x36\xca\x6b\x72\x10\x9f\x8c\xd4",
+ .len = 16,
+ }, {
+ .key = "\xc2\x86\x69\x6d\x88\x7c\x9a\xa0"
+ "\x61\x1b\xbb\x3e\x20\x25\xa4\x5a",
+ .klen = 16,
+ .iv = "\x56\x2e\x17\x99\x6d\x09\x3d\x28",
+ .ptext = "\x00\x01\x02\x03\x04\x05\x06\x07"
+ "\x08\x09\x0a\x0b\x0c\x0d\x0e\x0f"
+ "\x10\x11\x12\x13\x14\x15\x16\x17"
+ "\x18\x19\x1a\x1b\x1c\x1d\x1e\x1f",
+ .ctext = "\xc8\x59\x9a\xfe\x79\xe6\x7b\x20"
+ "\x06\x7d\x55\x0a\x5e\xc7\xb5\xa7"
+ "\x0b\x9c\x80\xd2\x15\xa1\xb8\x6d"
+ "\xc6\xab\x7b\x65\xd9\xfd\x88\xeb",
+ .len = 32,
+ }, {
+ .key = "\x8e\x73\xb0\xf7\xda\x0e\x64\x52"
+ "\xc8\x10\xf3\x2b\x80\x90\x79\xe5"
+ "\x62\xf8\xea\xd2\x52\x2c\x6b\x7b",
+ .klen = 24,
+ .iv = "\x00\x01\x02\x03\x04\x05\x06\x07",
+ .ptext = "\x6b\xc1\xbe\xe2\x2e\x40\x9f\x96"
+ "\xe9\x3d\x7e\x11\x73\x93\x17\x2a"
+ "\xae\x2d\x8a\x57\x1e\x03\xac\x9c"
+ "\x9e\xb7\x6f\xac\x45\xaf\x8e\x51"
+ "\x30\xc8\x1c\x46\xa3\x5c\xe4\x11"
+ "\xe5\xfb\xc1\x19\x1a\x0a\x52\xef"
+ "\xf6\x9f\x24\x45\xdf\x4f\x9b\x17"
+ "\xad\x2b\x41\x7b\xe6\x6c\x37\x10",
+ .ctext = "\x96\x6d\xa9\x7a\x42\xe6\x01\xc7"
+ "\x17\xfc\xa7\x41\xd3\x38\x0b\xe5"
+ "\x51\x48\xf7\x7e\x5e\x26\xa9\xfe"
+ "\x45\x72\x1c\xd9\xde\xab\xf3\x4d"
+ "\x39\x47\xc5\x4f\x97\x3a\x55\x63"
+ "\x80\x29\x64\x4c\x33\xe8\x21\x8a"
+ "\x6a\xef\x6b\x6a\x8f\x43\xc0\xcb"
+ "\xf0\xf3\x6e\x74\x54\x44\x92\x44",
+ .len = 64,
+ }, {
+ .key = "\x60\x3d\xeb\x10\x15\xca\x71\xbe"
+ "\x2b\x73\xae\xf0\x85\x7d\x77\x81"
+ "\x1f\x35\x2c\x07\x3b\x61\x08\xd7"
+ "\x2d\x98\x10\xa3\x09\x14\xdf\xf4",
+ .klen = 32,
+ .iv = "\x00\x01\x02\x03\x04\x05\x06\x07",
+ .ptext = "\x6b\xc1\xbe\xe2\x2e\x40\x9f\x96"
+ "\xe9\x3d\x7e\x11\x73\x93\x17\x2a"
+ "\xae\x2d\x8a\x57\x1e\x03\xac\x9c"
+ "\x9e\xb7\x6f\xac\x45\xaf\x8e\x51"
+ "\x30\xc8\x1c\x46\xa3\x5c\xe4\x11"
+ "\xe5\xfb\xc1\x19\x1a\x0a\x52\xef"
+ "\xf6\x9f\x24\x45\xdf\x4f\x9b\x17"
+ "\xad\x2b\x41\x7b\xe6\x6c\x37\x10",
+ .ctext = "\x24\x52\xf1\x48\x74\xd0\xa7\x93"
+ "\x75\x9b\x63\x46\xc0\x1c\x1e\x17"
+ "\x4d\xdc\x5b\x3a\x27\x93\x2a\x63"
+ "\xf7\xf1\xc7\xb3\x54\x56\x5b\x50"
+ "\xa3\x31\xa5\x8b\xd6\xfd\xb6\x3c"
+ "\x8b\xf6\xf2\x45\x05\x0c\xc8\xbb"
+ "\x32\x0b\x26\x1c\xe9\x8b\x02\xc0"
+ "\xb2\x6f\x37\xa7\x5b\xa8\xa9\x42",
+ .len = 64,
+ }, {
+ .key = "\xC9\x83\xA6\xC9\xEC\x0F\x32\x55"
+ "\x0F\x32\x55\x78\x9B\xBE\x78\x9B"
+ "\xBE\xE1\x04\x27\xE1\x04\x27\x4A"
+ "\x6D\x90\x4A\x6D\x90\xB3\xD6\xF9",
+ .klen = 32,
+ .iv = "\xE7\x82\x1D\xB8\x53\x11\xAC\x47",
+ .ptext = "\x50\xB9\x22\xAE\x17\x80\x0C\x75"
+ "\xDE\x47\xD3\x3C\xA5\x0E\x9A\x03"
+ "\x6C\xF8\x61\xCA\x33\xBF\x28\x91"
+ "\x1D\x86\xEF\x58\xE4\x4D\xB6\x1F"
+ "\xAB\x14\x7D\x09\x72\xDB\x44\xD0"
+ "\x39\xA2\x0B\x97\x00\x69\xF5\x5E"
+ "\xC7\x30\xBC\x25\x8E\x1A\x83\xEC"
+ "\x55\xE1\x4A\xB3\x1C\xA8\x11\x7A"
+ "\x06\x6F\xD8\x41\xCD\x36\x9F\x08"
+ "\x94\xFD\x66\xF2\x5B\xC4\x2D\xB9"
+ "\x22\x8B\x17\x80\xE9\x52\xDE\x47"
+ "\xB0\x19\xA5\x0E\x77\x03\x6C\xD5"
+ "\x3E\xCA\x33\x9C\x05\x91\xFA\x63"
+ "\xEF\x58\xC1\x2A\xB6\x1F\x88\x14"
+ "\x7D\xE6\x4F\xDB\x44\xAD\x16\xA2"
+ "\x0B\x74\x00\x69\xD2\x3B\xC7\x30"
+ "\x99\x02\x8E\xF7\x60\xEC\x55\xBE"
+ "\x27\xB3\x1C\x85\x11\x7A\xE3\x4C"
+ "\xD8\x41\xAA\x13\x9F\x08\x71\xFD"
+ "\x66\xCF\x38\xC4\x2D\x96\x22\x8B"
+ "\xF4\x5D\xE9\x52\xBB\x24\xB0\x19"
+ "\x82\x0E\x77\xE0\x49\xD5\x3E\xA7"
+ "\x10\x9C\x05\x6E\xFA\x63\xCC\x35"
+ "\xC1\x2A\x93\x1F\x88\xF1\x5A\xE6"
+ "\x4F\xB8\x21\xAD\x16\x7F\x0B\x74"
+ "\xDD\x46\xD2\x3B\xA4\x0D\x99\x02"
+ "\x6B\xF7\x60\xC9\x32\xBE\x27\x90"
+ "\x1C\x85\xEE\x57\xE3\x4C\xB5\x1E"
+ "\xAA\x13\x7C\x08\x71\xDA\x43\xCF"
+ "\x38\xA1\x0A\x96\xFF\x68\xF4\x5D"
+ "\xC6\x2F\xBB\x24\x8D\x19\x82\xEB"
+ "\x54\xE0\x49\xB2\x1B\xA7\x10\x79"
+ "\x05\x6E\xD7\x40\xCC\x35\x9E\x07"
+ "\x93\xFC\x65\xF1\x5A\xC3\x2C\xB8"
+ "\x21\x8A\x16\x7F\xE8\x51\xDD\x46"
+ "\xAF\x18\xA4\x0D\x76\x02\x6B\xD4"
+ "\x3D\xC9\x32\x9B\x04\x90\xF9\x62"
+ "\xEE\x57\xC0\x29\xB5\x1E\x87\x13"
+ "\x7C\xE5\x4E\xDA\x43\xAC\x15\xA1"
+ "\x0A\x73\xFF\x68\xD1\x3A\xC6\x2F"
+ "\x98\x01\x8D\xF6\x5F\xEB\x54\xBD"
+ "\x26\xB2\x1B\x84\x10\x79\xE2\x4B"
+ "\xD7\x40\xA9\x12\x9E\x07\x70\xFC"
+ "\x65\xCE\x37\xC3\x2C\x95\x21\x8A"
+ "\xF3\x5C\xE8\x51\xBA\x23\xAF\x18"
+ "\x81\x0D\x76\xDF\x48\xD4\x3D\xA6"
+ "\x0F\x9B\x04\x6D\xF9\x62\xCB\x34"
+ "\xC0\x29\x92\x1E\x87\xF0\x59\xE5"
+ "\x4E\xB7\x20\xAC\x15\x7E\x0A\x73"
+ "\xDC\x45\xD1\x3A\xA3\x0C\x98\x01"
+ "\x6A\xF6\x5F\xC8\x31\xBD\x26\x8F"
+ "\x1B\x84\xED\x56\xE2\x4B\xB4\x1D"
+ "\xA9\x12\x7B\x07\x70\xD9\x42\xCE"
+ "\x37\xA0\x09\x95\xFE\x67\xF3\x5C"
+ "\xC5\x2E\xBA\x23\x8C\x18\x81\xEA"
+ "\x53\xDF\x48\xB1\x1A\xA6\x0F\x78"
+ "\x04\x6D\xD6\x3F\xCB\x34\x9D\x06"
+ "\x92\xFB\x64\xF0\x59\xC2\x2B\xB7"
+ "\x20\x89\x15\x7E\xE7\x50\xDC\x45"
+ "\xAE\x17\xA3\x0C\x75\x01\x6A\xD3"
+ "\x3C\xC8\x31\x9A\x03\x8F\xF8\x61"
+ "\xED\x56\xBF\x28\xB4\x1D\x86\x12",
+ .ctext = "\x97\x7f\x69\x0f\x0f\x34\xa6\x33"
+ "\x66\x49\x7e\xd0\x4d\x1b\xc9\x64"
+ "\xf9\x61\x95\x98\x11\x00\x88\xf8"
+ "\x2e\x88\x01\x0f\x2b\xe1\xae\x3e"
+ "\xfe\xd6\x47\x30\x11\x68\x7d\x99"
+ "\xad\x69\x6a\xe8\x41\x5f\x1e\x16"
+ "\x00\x3a\x47\xdf\x8e\x7d\x23\x1c"
+ "\x19\x5b\x32\x76\x60\x03\x05\xc1"
+ "\xa0\xff\xcf\xcc\x74\x39\x46\x63"
+ "\xfe\x5f\xa6\x35\xa7\xb4\xc1\xf9"
+ "\x4b\x5e\x38\xcc\x8c\xc1\xa2\xcf"
+ "\x9a\xc3\xae\x55\x42\x46\x93\xd9"
+ "\xbd\x22\xd3\x8a\x19\x96\xc3\xb3"
+ "\x7d\x03\x18\xf9\x45\x09\x9c\xc8"
+ "\x90\xf3\x22\xb3\x25\x83\x9a\x75"
+ "\xbb\x04\x48\x97\x3a\x63\x08\x04"
+ "\xa0\x69\xf6\x52\xd4\x89\x93\x69"
+ "\xb4\x33\xa2\x16\x58\xec\x4b\x26"
+ "\x76\x54\x10\x0b\x6e\x53\x1e\xbc"
+ "\x16\x18\x42\xb1\xb1\xd3\x4b\xda"
+ "\x06\x9f\x8b\x77\xf7\xab\xd6\xed"
+ "\xa3\x1d\x90\xda\x49\x38\x20\xb8"
+ "\x6c\xee\xae\x3e\xae\x6c\x03\xb8"
+ "\x0b\xed\xc8\xaa\x0e\xc5\x1f\x90"
+ "\x60\xe2\xec\x1b\x76\xd0\xcf\xda"
+ "\x29\x1b\xb8\x5a\xbc\xf4\xba\x13"
+ "\x91\xa6\xcb\x83\x3f\xeb\xe9\x7b"
+ "\x03\xba\x40\x9e\xe6\x7a\xb2\x4a"
+ "\x73\x49\xfc\xed\xfb\x55\xa4\x24"
+ "\xc7\xa4\xd7\x4b\xf5\xf7\x16\x62"
+ "\x80\xd3\x19\x31\x52\x25\xa8\x69"
+ "\xda\x9a\x87\xf5\xf2\xee\x5d\x61"
+ "\xc1\x12\x72\x3e\x52\x26\x45\x3a"
+ "\xd8\x9d\x57\xfa\x14\xe2\x9b\x2f"
+ "\xd4\xaa\x5e\x31\xf4\x84\x89\xa4"
+ "\xe3\x0e\xb0\x58\x41\x75\x6a\xcb"
+ "\x30\x01\x98\x90\x15\x80\xf5\x27"
+ "\x92\x13\x81\xf0\x1c\x1e\xfc\xb1"
+ "\x33\xf7\x63\xb0\x67\xec\x2e\x5c"
+ "\x85\xe3\x5b\xd0\x43\x8a\xb8\x5f"
+ "\x44\x9f\xec\x19\xc9\x8f\xde\xdf"
+ "\x79\xef\xf8\xee\x14\x87\xb3\x34"
+ "\x76\x00\x3a\x9b\xc7\xed\xb1\x3d"
+ "\xef\x07\xb0\xe4\xfd\x68\x9e\xeb"
+ "\xc2\xb4\x1a\x85\x9a\x7d\x11\x88"
+ "\xf8\xab\x43\x55\x2b\x8a\x4f\x60"
+ "\x85\x9a\xf4\xba\xae\x48\x81\xeb"
+ "\x93\x07\x97\x9e\xde\x2a\xfc\x4e"
+ "\x31\xde\xaa\x44\xf7\x2a\xc3\xee"
+ "\x60\xa2\x98\x2c\x0a\x88\x50\xc5"
+ "\x6d\x89\xd3\xe4\xb6\xa7\xf4\xb0"
+ "\xcf\x0e\x89\xe3\x5e\x8f\x82\xf4"
+ "\x9d\xd1\xa9\x51\x50\x8a\xd2\x18"
+ "\x07\xb2\xaa\x3b\x7f\x58\x9b\xf4"
+ "\xb7\x24\x39\xd3\x66\x2f\x1e\xc0"
+ "\x11\xa3\x56\x56\x2a\x10\x73\xbc"
+ "\xe1\x23\xbf\xa9\x37\x07\x9c\xc3"
+ "\xb2\xc9\xa8\x1c\x5b\x5c\x58\xa4"
+ "\x77\x02\x26\xad\xc3\x40\x11\x53"
+ "\x93\x68\x72\xde\x05\x8b\x10\xbc"
+ "\xa6\xd4\x1b\xd9\x27\xd8\x16\x12"
+ "\x61\x2b\x31\x2a\x44\x87\x96\x58",
+ .len = 496,
+ },
+};
+
#endif /* _CRYPTO_TESTMGR_H */
--
2.20.1

2019-06-19 16:30:33

by Ard Biesheuvel

Subject: [PATCH v3 6/6] crypto: arm64/aes - implement accelerated ESSIV/CBC mode

Add an accelerated version of the 'essiv(cbc(aes),aes,sha256)'
skcipher, which is used by fscrypt and, in some cases, by dm-crypt.
This avoids a separate call into the AES cipher for every invocation.

Signed-off-by: Ard Biesheuvel <[email protected]>
---
arch/arm64/crypto/aes-glue.c | 129 ++++++++++++++++++++
arch/arm64/crypto/aes-modes.S | 99 +++++++++++++++
2 files changed, 228 insertions(+)

diff --git a/arch/arm64/crypto/aes-glue.c b/arch/arm64/crypto/aes-glue.c
index f0ceb545bd1e..6dab2f062cea 100644
--- a/arch/arm64/crypto/aes-glue.c
+++ b/arch/arm64/crypto/aes-glue.c
@@ -12,6 +12,7 @@
#include <asm/hwcap.h>
#include <asm/simd.h>
#include <crypto/aes.h>
+#include <crypto/sha.h>
#include <crypto/internal/hash.h>
#include <crypto/internal/simd.h>
#include <crypto/internal/skcipher.h>
@@ -34,6 +35,8 @@
#define aes_cbc_decrypt ce_aes_cbc_decrypt
#define aes_cbc_cts_encrypt ce_aes_cbc_cts_encrypt
#define aes_cbc_cts_decrypt ce_aes_cbc_cts_decrypt
+#define aes_essiv_cbc_encrypt ce_aes_essiv_cbc_encrypt
+#define aes_essiv_cbc_decrypt ce_aes_essiv_cbc_decrypt
#define aes_ctr_encrypt ce_aes_ctr_encrypt
#define aes_xts_encrypt ce_aes_xts_encrypt
#define aes_xts_decrypt ce_aes_xts_decrypt
@@ -50,6 +53,8 @@ MODULE_DESCRIPTION("AES-ECB/CBC/CTR/XTS using ARMv8 Crypto Extensions");
#define aes_cbc_decrypt neon_aes_cbc_decrypt
#define aes_cbc_cts_encrypt neon_aes_cbc_cts_encrypt
#define aes_cbc_cts_decrypt neon_aes_cbc_cts_decrypt
+#define aes_essiv_cbc_encrypt neon_aes_essiv_cbc_encrypt
+#define aes_essiv_cbc_decrypt neon_aes_essiv_cbc_decrypt
#define aes_ctr_encrypt neon_aes_ctr_encrypt
#define aes_xts_encrypt neon_aes_xts_encrypt
#define aes_xts_decrypt neon_aes_xts_decrypt
@@ -93,6 +98,13 @@ asmlinkage void aes_xts_decrypt(u8 out[], u8 const in[], u32 const rk1[],
int rounds, int blocks, u32 const rk2[], u8 iv[],
int first);

+asmlinkage void aes_essiv_cbc_encrypt(u8 out[], u8 const in[], u32 const rk1[],
+ int rounds, int blocks, u32 const rk2[],
+ u8 iv[], int first);
+asmlinkage void aes_essiv_cbc_decrypt(u8 out[], u8 const in[], u32 const rk1[],
+ int rounds, int blocks, u32 const rk2[],
+ u8 iv[], int first);
+
asmlinkage void aes_mac_update(u8 const in[], u32 const rk[], int rounds,
int blocks, u8 dg[], int enc_before,
int enc_after);
@@ -108,6 +120,12 @@ struct crypto_aes_xts_ctx {
struct crypto_aes_ctx __aligned(8) key2;
};

+struct crypto_aes_essiv_cbc_ctx {
+ struct crypto_aes_ctx key1;
+ struct crypto_aes_ctx __aligned(8) key2;
+ struct crypto_shash *hash;
+};
+
struct mac_tfm_ctx {
struct crypto_aes_ctx key;
u8 __aligned(8) consts[];
@@ -145,6 +163,31 @@ static int xts_set_key(struct crypto_skcipher *tfm, const u8 *in_key,
return -EINVAL;
}

+static int essiv_cbc_set_key(struct crypto_skcipher *tfm, const u8 *in_key,
+ unsigned int key_len)
+{
+ struct crypto_aes_essiv_cbc_ctx *ctx = crypto_skcipher_ctx(tfm);
+ SHASH_DESC_ON_STACK(desc, ctx->hash);
+ u8 digest[SHA256_DIGEST_SIZE];
+ int ret;
+
+ ret = aes_expandkey(&ctx->key1, in_key, key_len);
+ if (ret)
+ goto out;
+
+ desc->tfm = ctx->hash;
+ crypto_shash_digest(desc, in_key, key_len, digest);
+
+ ret = aes_expandkey(&ctx->key2, digest, sizeof(digest));
+ if (ret)
+ goto out;
+
+ return 0;
+out:
+ crypto_skcipher_set_flags(tfm, CRYPTO_TFM_RES_BAD_KEY_LEN);
+ return -EINVAL;
+}
+
static int ecb_encrypt(struct skcipher_request *req)
{
struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
@@ -361,6 +404,74 @@ static int cts_cbc_decrypt(struct skcipher_request *req)
return skcipher_walk_done(&walk, 0);
}

+static int essiv_cbc_init_tfm(struct crypto_skcipher *tfm)
+{
+ struct crypto_aes_essiv_cbc_ctx *ctx = crypto_skcipher_ctx(tfm);
+
+ ctx->hash = crypto_alloc_shash("sha256", 0, 0);
+ if (IS_ERR(ctx->hash))
+ return PTR_ERR(ctx->hash);
+
+ return 0;
+}
+
+static void essiv_cbc_exit_tfm(struct crypto_skcipher *tfm)
+{
+ struct crypto_aes_essiv_cbc_ctx *ctx = crypto_skcipher_ctx(tfm);
+
+ crypto_free_shash(ctx->hash);
+}
+
+static int essiv_cbc_encrypt(struct skcipher_request *req)
+{
+ struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
+ struct crypto_aes_essiv_cbc_ctx *ctx = crypto_skcipher_ctx(tfm);
+ int err, first, rounds = 6 + ctx->key1.key_length / 4;
+ struct skcipher_walk walk;
+ u8 iv[AES_BLOCK_SIZE];
+ unsigned int blocks;
+
+ memcpy(iv, req->iv, crypto_skcipher_ivsize(tfm));
+
+ err = skcipher_walk_virt(&walk, req, false);
+
+ for (first = 1; (blocks = (walk.nbytes / AES_BLOCK_SIZE)); first = 0) {
+ kernel_neon_begin();
+ aes_essiv_cbc_encrypt(walk.dst.virt.addr, walk.src.virt.addr,
+ ctx->key1.key_enc, rounds, blocks,
+ ctx->key2.key_enc, iv, first);
+ kernel_neon_end();
+ err = skcipher_walk_done(&walk, walk.nbytes % AES_BLOCK_SIZE);
+ }
+
+ return err;
+}
+
+static int essiv_cbc_decrypt(struct skcipher_request *req)
+{
+ struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
+ struct crypto_aes_essiv_cbc_ctx *ctx = crypto_skcipher_ctx(tfm);
+ int err, first, rounds = 6 + ctx->key1.key_length / 4;
+ struct skcipher_walk walk;
+ u8 iv[AES_BLOCK_SIZE];
+ unsigned int blocks;
+
+ memcpy(iv, req->iv, crypto_skcipher_ivsize(tfm));
+
+ err = skcipher_walk_virt(&walk, req, false);
+
+ for (first = 1; (blocks = (walk.nbytes / AES_BLOCK_SIZE)); first = 0) {
+ kernel_neon_begin();
+ aes_essiv_cbc_decrypt(walk.dst.virt.addr, walk.src.virt.addr,
+ ctx->key1.key_dec, rounds, blocks,
+ ctx->key2.key_enc, iv, first);
+ kernel_neon_end();
+ err = skcipher_walk_done(&walk, walk.nbytes % AES_BLOCK_SIZE);
+ }
+
+ return err;
+}
+
static int ctr_encrypt(struct skcipher_request *req)
{
struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
@@ -504,6 +615,24 @@ static struct skcipher_alg aes_algs[] = { {
.encrypt = cts_cbc_encrypt,
.decrypt = cts_cbc_decrypt,
.init = cts_cbc_init_tfm,
+}, {
+ .base = {
+ .cra_name = "__essiv(cbc(aes),aes,sha256)",
+ .cra_driver_name = "__essiv-cbc-aes-sha256-" MODE,
+ .cra_priority = PRIO + 1,
+ .cra_flags = CRYPTO_ALG_INTERNAL,
+ .cra_blocksize = AES_BLOCK_SIZE,
+ .cra_ctxsize = sizeof(struct crypto_aes_essiv_cbc_ctx),
+ .cra_module = THIS_MODULE,
+ },
+ .min_keysize = AES_MIN_KEY_SIZE,
+ .max_keysize = AES_MAX_KEY_SIZE,
+ .ivsize = sizeof(u64),
+ .setkey = essiv_cbc_set_key,
+ .encrypt = essiv_cbc_encrypt,
+ .decrypt = essiv_cbc_decrypt,
+ .init = essiv_cbc_init_tfm,
+ .exit = essiv_cbc_exit_tfm,
}, {
.base = {
.cra_name = "__ctr(aes)",
diff --git a/arch/arm64/crypto/aes-modes.S b/arch/arm64/crypto/aes-modes.S
index 4c7ce231963c..4ebc61375aa6 100644
--- a/arch/arm64/crypto/aes-modes.S
+++ b/arch/arm64/crypto/aes-modes.S
@@ -247,6 +247,105 @@ AES_ENDPROC(aes_cbc_cts_decrypt)
.byte 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff
.previous

+ /*
+ * aes_essiv_cbc_encrypt(u8 out[], u8 const in[], u32 const rk1[],
+ * int rounds, int blocks, u32 const rk2[],
+ * u8 iv[], int first);
+ * aes_essiv_cbc_decrypt(u8 out[], u8 const in[], u32 const rk1[],
+ * int rounds, int blocks, u32 const rk2[],
+ * u8 iv[], int first);
+ */
+
+AES_ENTRY(aes_essiv_cbc_encrypt)
+ ld1 {v4.16b}, [x6] /* get iv */
+ cbz x7, .Lessivcbcencnotfirst
+
+ mov w8, #14 /* AES-256: 14 rounds */
+ enc_prepare w8, x5, x7
+ mov v4.8b, v4.8b
+ encrypt_block v4, w8, x5, x7, w9
+
+.Lessivcbcencnotfirst:
+ enc_prepare w3, x2, x7
+.Lessivcbcencloop4x:
+ subs w4, w4, #4
+ bmi .Lessivcbcenc1x
+ ld1 {v0.16b-v3.16b}, [x1], #64 /* get 4 pt blocks */
+ eor v0.16b, v0.16b, v4.16b /* ..and xor with iv */
+ encrypt_block v0, w3, x2, x7, w8
+ eor v1.16b, v1.16b, v0.16b
+ encrypt_block v1, w3, x2, x7, w8
+ eor v2.16b, v2.16b, v1.16b
+ encrypt_block v2, w3, x2, x7, w8
+ eor v3.16b, v3.16b, v2.16b
+ encrypt_block v3, w3, x2, x7, w8
+ st1 {v0.16b-v3.16b}, [x0], #64
+ mov v4.16b, v3.16b
+ b .Lessivcbcencloop4x
+.Lessivcbcenc1x:
+ adds w4, w4, #4
+ beq .Lessivcbcencout
+.Lessivcbcencloop:
+ ld1 {v0.16b}, [x1], #16 /* get next pt block */
+ eor v4.16b, v4.16b, v0.16b /* ..and xor with iv */
+ encrypt_block v4, w3, x2, x6, w7
+ st1 {v4.16b}, [x0], #16
+ subs w4, w4, #1
+ bne .Lessivcbcencloop
+.Lessivcbcencout:
+ st1 {v4.16b}, [x6] /* return iv */
+ ret
+AES_ENDPROC(aes_essiv_cbc_encrypt)
+
+
+AES_ENTRY(aes_essiv_cbc_decrypt)
+ stp x29, x30, [sp, #-16]!
+ mov x29, sp
+
+ ld1 {v7.16b}, [x6] /* get iv */
+ cbz x7, .Lessivcbcdecnotfirst
+
+ mov w8, #14 /* AES-256: 14 rounds */
+ enc_prepare w8, x5, x7
+ mov v7.8b, v7.8b
+ encrypt_block v7, w8, x5, x7, w9
+
+.Lessivcbcdecnotfirst:
+ dec_prepare w3, x2, x7
+.LessivcbcdecloopNx:
+ subs w4, w4, #4
+ bmi .Lessivcbcdec1x
+ ld1 {v0.16b-v3.16b}, [x1], #64 /* get 4 ct blocks */
+ mov v4.16b, v0.16b
+ mov v5.16b, v1.16b
+ mov v6.16b, v2.16b
+ bl aes_decrypt_block4x
+ sub x1, x1, #16
+ eor v0.16b, v0.16b, v7.16b
+ eor v1.16b, v1.16b, v4.16b
+ ld1 {v7.16b}, [x1], #16 /* reload 1 ct block */
+ eor v2.16b, v2.16b, v5.16b
+ eor v3.16b, v3.16b, v6.16b
+ st1 {v0.16b-v3.16b}, [x0], #64
+ b .LessivcbcdecloopNx
+.Lessivcbcdec1x:
+ adds w4, w4, #4
+ beq .Lessivcbcdecout
+.Lessivcbcdecloop:
+ ld1 {v1.16b}, [x1], #16 /* get next ct block */
+ mov v0.16b, v1.16b /* ...and copy to v0 */
+ decrypt_block v0, w3, x2, x7, w8
+ eor v0.16b, v0.16b, v7.16b /* xor with iv => pt */
+ mov v7.16b, v1.16b /* ct is next iv */
+ st1 {v0.16b}, [x0], #16
+ subs w4, w4, #1
+ bne .Lessivcbcdecloop
+.Lessivcbcdecout:
+ st1 {v7.16b}, [x6] /* return iv */
+ ldp x29, x30, [sp], #16
+ ret
+AES_ENDPROC(aes_essiv_cbc_decrypt)
+

/*
* aes_ctr_encrypt(u8 out[], u8 const in[], u8 const rk[], int rounds,
--
2.20.1

2019-06-19 22:37:40

by Eric Biggers

[permalink] [raw]
Subject: Re: [PATCH v3 6/6] crypto: arm64/aes - implement accelerated ESSIV/CBC mode

On Wed, Jun 19, 2019 at 06:29:21PM +0200, Ard Biesheuvel wrote:
> Add an accelerated version of the 'essiv(cbc(aes),aes,sha256)'
> skcipher, which is used by fscrypt, and in some cases, by dm-crypt.
> This avoids a separate call into the AES cipher for every invocation.
>
> Signed-off-by: Ard Biesheuvel <[email protected]>

I'm not sure we should bother with this, since fscrypt normally uses AES-256-XTS
for contents encryption. AES-128-CBC-ESSIV support was only added because
people wanted something that is fast on low-powered embedded devices with crypto
accelerators such as CAAM or CESA that don't support XTS.

In the case of Android, the CDD doesn't even allow AES-128-CBC-ESSIV with
file-based encryption (fscrypt). It's still the default for "full disk
encryption" (which uses dm-crypt), but that's being deprecated.

So maybe dm-crypt users will want this, but I don't think it's very useful for
fscrypt.

- Eric

2019-06-19 22:44:19

by Ard Biesheuvel

[permalink] [raw]
Subject: Re: [PATCH v3 6/6] crypto: arm64/aes - implement accelerated ESSIV/CBC mode

On Thu, 20 Jun 2019 at 00:37, Eric Biggers <[email protected]> wrote:
>
> On Wed, Jun 19, 2019 at 06:29:21PM +0200, Ard Biesheuvel wrote:
> > Add an accelerated version of the 'essiv(cbc(aes),aes,sha256)'
> > skcipher, which is used by fscrypt, and in some cases, by dm-crypt.
> > This avoids a separate call into the AES cipher for every invocation.
> >
> > Signed-off-by: Ard Biesheuvel <[email protected]>
>
> I'm not sure we should bother with this, since fscrypt normally uses AES-256-XTS
> for contents encryption. AES-128-CBC-ESSIV support was only added because
> people wanted something that is fast on low-powered embedded devices with crypto
> accelerators such as CAAM or CESA that don't support XTS.
>
> In the case of Android, the CDD doesn't even allow AES-128-CBC-ESSIV with
> file-based encryption (fscrypt). It's still the default for "full disk
> encryption" (which uses dm-crypt), but that's being deprecated.
>
> So maybe dm-crypt users will want this, but I don't think it's very useful for
> fscrypt.
>

If nobody cares, we can drop it. I don't feel too strongly about this,
and since it is on the mailinglist now, people will be able to find it
and ask for it to be merged if they have a convincing use case.

2019-06-20 01:04:45

by Eric Biggers

[permalink] [raw]
Subject: Re: [PATCH v3 1/6] crypto: essiv - create wrapper template for ESSIV generation

On Wed, Jun 19, 2019 at 06:29:16PM +0200, Ard Biesheuvel wrote:
> diff --git a/crypto/essiv.c b/crypto/essiv.c
> new file mode 100644
> index 000000000000..45e9d10b8614
> --- /dev/null
> +++ b/crypto/essiv.c
> @@ -0,0 +1,630 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * ESSIV skcipher template for block encryption

skcipher and aead

A few sentences summary of what this file is for might also be useful to future
readers.

> + *
> + * Copyright (c) 2019 Linaro, Ltd. <[email protected]>
> + *
> + * Heavily based on:
> + * adiantum length-preserving encryption mode
> + *
> + * Copyright 2018 Google LLC
> + */
> +
> +#include <crypto/authenc.h>
> +#include <crypto/internal/aead.h>
> +#include <crypto/internal/hash.h>
> +#include <crypto/internal/skcipher.h>
> +#include <crypto/scatterwalk.h>
> +#include <linux/module.h>
> +
> +#include "internal.h"
> +
> +#define ESSIV_IV_SIZE sizeof(u64) // IV size of the outer algo
> +#define MAX_INNER_IV_SIZE 16 // max IV size of inner algo

Why does the outer algorithm declare a smaller IV size? Shouldn't it just be
the same as the inner algorithm's?

> +struct essiv_instance_ctx {
> + union {
> + struct crypto_skcipher_spawn blockcipher_spawn;
> + struct crypto_aead_spawn aead_spawn;
> + } u;
> + struct crypto_spawn essiv_cipher_spawn;
> + struct crypto_shash_spawn hash_spawn;
> +};
> +
> +struct essiv_tfm_ctx {
> + union {
> + struct crypto_skcipher *blockcipher;
> + struct crypto_aead *aead;
> + } u;
> + struct crypto_cipher *essiv_cipher;
> + struct crypto_shash *hash;
> +};

Can you fix the naming: 'blockcipher' => 'skcipher' everywhere?

> +static int essiv_skcipher_setkey(struct crypto_skcipher *tfm,
> + const u8 *key, unsigned int keylen)
> +{
> + struct essiv_tfm_ctx *tctx = crypto_skcipher_ctx(tfm);
> + SHASH_DESC_ON_STACK(desc, tctx->hash);
> + unsigned int saltsize;
> + u8 *salt;
> + int err;
> +
> + crypto_skcipher_clear_flags(tctx->u.blockcipher, CRYPTO_TFM_REQ_MASK);
> + crypto_skcipher_set_flags(tctx->u.blockcipher,
> + crypto_skcipher_get_flags(tfm) &
> + CRYPTO_TFM_REQ_MASK);
> + err = crypto_skcipher_setkey(tctx->u.blockcipher, key, keylen);
> + crypto_skcipher_set_flags(tfm,
> + crypto_skcipher_get_flags(tctx->u.blockcipher) &
> + CRYPTO_TFM_RES_MASK);
> + if (err)
> + return err;
> +
> + saltsize = crypto_shash_digestsize(tctx->hash);
> + salt = kmalloc(saltsize, GFP_KERNEL);
> + if (!salt)
> + return -ENOMEM;

This could be a stack buffer of length HASH_MAX_DIGESTSIZE (64 bytes).
Same in essiv_aead_setkey().
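
I.e., something like this (illustrative):

	u8 salt[HASH_MAX_DIGESTSIZE];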

> +
> + desc->tfm = tctx->hash;
> + crypto_shash_digest(desc, key, keylen, salt);

Need to check for error from crypto_shash_digest().

Similarly in essiv_aead_setkey().
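
I.e., something like this (a sketch only; the error path still needs
to free 'salt', and the label name here is made up):

	err = crypto_shash_digest(desc, key, keylen, salt);
	if (err)
		goto out_free_salt;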

> +static int essiv_skcipher_init_tfm(struct crypto_skcipher *tfm)
> +{
> + struct skcipher_instance *inst = skcipher_alg_instance(tfm);
> + struct essiv_instance_ctx *ictx = skcipher_instance_ctx(inst);
> + struct essiv_tfm_ctx *tctx = crypto_skcipher_ctx(tfm);
> + struct crypto_skcipher *blockcipher;
> + unsigned int subreq_size;
> + int err;
> +
> + BUILD_BUG_ON(offsetofend(struct essiv_skcipher_request_ctx,
> + blockcipher_req) !=
> + sizeof(struct essiv_skcipher_request_ctx));
> +
> + blockcipher = crypto_spawn_skcipher(&ictx->u.blockcipher_spawn);
> + if (IS_ERR(blockcipher))
> + return PTR_ERR(blockcipher);
> +
> + subreq_size = FIELD_SIZEOF(struct essiv_skcipher_request_ctx,
> + blockcipher_req) +
> + crypto_skcipher_reqsize(blockcipher);
> +
> + crypto_skcipher_set_reqsize(tfm, offsetof(struct essiv_skcipher_request_ctx,
> + blockcipher_req) + subreq_size);
> +
> + err = essiv_init_tfm(ictx, tctx);
> + if (err)
> + crypto_free_skcipher(blockcipher);

Should return in this error case, rather than going ahead and setting
tctx->u.blockcipher.

> +
> + tctx->u.blockcipher = blockcipher;
> + return err;
> +}
> +
> +static int essiv_aead_init_tfm(struct crypto_aead *tfm)
> +{
> + struct aead_instance *inst = aead_alg_instance(tfm);
> + struct essiv_instance_ctx *ictx = aead_instance_ctx(inst);
> + struct essiv_tfm_ctx *tctx = crypto_aead_ctx(tfm);
> + struct crypto_aead *aead;
> + unsigned int subreq_size;
> + int err;
> +
> + BUILD_BUG_ON(offsetofend(struct essiv_aead_request_ctx, aead_req) !=
> + sizeof(struct essiv_aead_request_ctx));
> +
> + aead = crypto_spawn_aead(&ictx->u.aead_spawn);
> + if (IS_ERR(aead))
> + return PTR_ERR(aead);
> +
> + subreq_size = FIELD_SIZEOF(struct essiv_aead_request_ctx, aead_req) +
> + crypto_aead_reqsize(aead);
> +
> + crypto_aead_set_reqsize(tfm, offsetof(struct essiv_aead_request_ctx,
> + aead_req) + subreq_size);
> +
> + err = essiv_init_tfm(ictx, tctx);
> + if (err)
> + crypto_free_aead(aead);

Same here.

> +static bool essiv_supported_algorithms(struct crypto_alg *essiv_cipher_alg,
> + struct shash_alg *hash_alg,
> + int ivsize)
> +{
> + if (hash_alg->digestsize < essiv_cipher_alg->cra_cipher.cia_min_keysize ||
> + hash_alg->digestsize > essiv_cipher_alg->cra_cipher.cia_max_keysize)
> + return false;
> +
> + if (ivsize != essiv_cipher_alg->cra_blocksize)
> + return false;
> +
> + if (ivsize > MAX_INNER_IV_SIZE)
> + return false;
> +
> + return true;
> +}

Also check that the hash algorithm is unkeyed?

> +
> +static int essiv_create(struct crypto_template *tmpl, struct rtattr **tb)
> +{
> + struct crypto_attr_type *algt;
> + const char *blockcipher_name;
> + const char *essiv_cipher_name;
> + const char *shash_name;
> + struct skcipher_instance *skcipher_inst = NULL;
> + struct aead_instance *aead_inst = NULL;
> + struct crypto_instance *inst;
> + struct crypto_alg *base, *block_base;
> + struct essiv_instance_ctx *ictx;
> + struct skcipher_alg *blockcipher_alg = NULL;
> + struct aead_alg *aead_alg = NULL;
> + struct crypto_alg *essiv_cipher_alg;
> + struct crypto_alg *_hash_alg;
> + struct shash_alg *hash_alg;
> + int ivsize;
> + u32 type;
> + int err;
> +
> + algt = crypto_get_attr_type(tb);
> + if (IS_ERR(algt))
> + return PTR_ERR(algt);
> +
> + blockcipher_name = crypto_attr_alg_name(tb[1]);
> + if (IS_ERR(blockcipher_name))
> + return PTR_ERR(blockcipher_name);
> +
> + essiv_cipher_name = crypto_attr_alg_name(tb[2]);
> + if (IS_ERR(essiv_cipher_name))
> + return PTR_ERR(essiv_cipher_name);
> +
> + shash_name = crypto_attr_alg_name(tb[3]);
> + if (IS_ERR(shash_name))
> + return PTR_ERR(shash_name);
> +
> + type = algt->type & algt->mask;
> +
> + switch (type) {
> + case CRYPTO_ALG_TYPE_BLKCIPHER:
> + skcipher_inst = kzalloc(sizeof(*skcipher_inst) +
> + sizeof(*ictx), GFP_KERNEL);
> + if (!skcipher_inst)
> + return -ENOMEM;
> + inst = skcipher_crypto_instance(skcipher_inst);
> + base = &skcipher_inst->alg.base;
> + ictx = crypto_instance_ctx(inst);
> +
> + /* Block cipher, e.g. "cbc(aes)" */
> + crypto_set_skcipher_spawn(&ictx->u.blockcipher_spawn, inst);
> + err = crypto_grab_skcipher(&ictx->u.blockcipher_spawn,
> + blockcipher_name, 0,
> + crypto_requires_sync(algt->type,
> + algt->mask));
> + if (err)
> + goto out_free_inst;
> + blockcipher_alg = crypto_spawn_skcipher_alg(&ictx->u.blockcipher_spawn);
> + block_base = &blockcipher_alg->base;
> + ivsize = blockcipher_alg->ivsize;

This may need to be crypto_skcipher_alg_ivsize(), since the "skcipher" algorithm
could actually be a "blkcipher" or "ablkcipher".

> +out_drop_blockcipher:
> + if (type == CRYPTO_ALG_TYPE_BLKCIPHER) {
> + crypto_drop_skcipher(&ictx->u.blockcipher_spawn);
> + } else {
> + crypto_drop_aead(&ictx->u.aead_spawn);
> + }

Unnecessary braces.

Thanks,

- Eric

2019-06-20 01:14:21

by Herbert Xu

[permalink] [raw]
Subject: Re: [PATCH v3 1/6] crypto: essiv - create wrapper template for ESSIV generation

On Wed, Jun 19, 2019 at 06:04:17PM -0700, Eric Biggers wrote:
>
> > +#define ESSIV_IV_SIZE sizeof(u64) // IV size of the outer algo
> > +#define MAX_INNER_IV_SIZE 16 // max IV size of inner algo
>
> Why does the outer algorithm declare a smaller IV size? Shouldn't it just be
> the same as the inner algorithm's?

In general we allow outer algorithms to have distinct IV sizes
compared to the inner algorithm. For example, rfc4106 has a
different IV size compared to gcm.

In this case, the outer IV is the block number, so that's
presumably why 64 bits is sufficient. Do you foresee a case where
we need 128-bit block numbers?

Cheers,
--
Email: Herbert Xu <[email protected]>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt

2019-06-20 01:19:01

by Herbert Xu

[permalink] [raw]
Subject: Re: [PATCH v3 1/6] crypto: essiv - create wrapper template for ESSIV generation

On Thu, Jun 20, 2019 at 09:13:25AM +0800, Herbert Xu wrote:
> On Wed, Jun 19, 2019 at 06:04:17PM -0700, Eric Biggers wrote:
> >
> > > +#define ESSIV_IV_SIZE sizeof(u64) // IV size of the outer algo
> > > +#define MAX_INNER_IV_SIZE 16 // max IV size of inner algo
> >
> > Why does the outer algorithm declare a smaller IV size? Shouldn't it just be
> > the same as the inner algorithm's?
>
> In general we allow outer algorithms to have distinct IV sizes
> compared to the inner algorithm. For example, rfc4106 has a
> different IV size compared to gcm.
>
> In this case, the outer IV size is the block number so that's
> presumably why 64 bits is sufficient. Do you forsee a case where
> we need 128-bit block numbers?

Actually this reminds me, the essiv template needs to be able to
handle multiple blocks/sectors, as otherwise this will still only
be able to push a single block/sector to the hardware at a time.

Thanks,
--
Email: Herbert Xu <[email protected]>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt

2019-06-20 07:08:38

by Gilad Ben-Yossef

[permalink] [raw]
Subject: Re: [PATCH v3 0/6] crypto: switch to crypto API for ESSIV generation

On Wed, Jun 19, 2019 at 7:29 PM Ard Biesheuvel
<[email protected]> wrote:
>
> This series creates an ESSIV template that produces a skcipher or AEAD
> transform based on a tuple of the form '<skcipher>,<cipher>,<shash>'
> (or '<aead>,<cipher>,<shash>' for the AEAD case). It exposes the
> encapsulated sync or async skcipher/aead by passing through all operations,
> while using the cipher/shash pair to transform the input IV into an ESSIV
> output IV.
>
> This matches what both users of ESSIV in the kernel do, and so it is proposed
> as a replacement for those, in patches #2 and #4.
>
> This code has been tested using the fscrypt test suggested by Eric
> (generic/549), as well as the mode-test script suggested by Milan for
> the dm-crypt case. I also tested the aead case in a virtual machine,
> but it definitely needs some wider testing from the dm-crypt experts.
>
> Changes since v2:
> - fixed a couple of bugs that snuck in after I'd done the bulk of my
> testing
> - some cosmetic tweaks to the ESSIV template skcipher setkey function
> to align it with the aead one
> - add a test case for essiv(cbc(aes),aes,sha256)
> - add an accelerated implementation for arm64 that combines the IV
> derivation and the actual en/decryption in a single asm routine
>
> Scroll down for tcrypt speed test result comparing the essiv template
> with the asm implementation. Bare cbc(aes) tests included for reference
> as well. Taken on a 2GHz Cortex-A57 (AMD Seattle)
>
> Code can be found here
> https://git.kernel.org/pub/scm/linux/kernel/git/ardb/linux.git/log/?h=essiv-v3


Thank you Ard for this work. It is very useful. I am testing this now
with the essiv implementation inside CryptoCell.

One possible future optimization this opens the door for is having the
template auto-increment the sector number.

This will allow the device mapper or fscrypt code to ask for crypto
services on a buffer spanning more than a single sector, and have
the crypto code automatically increment the sector number while
processing the buffer.

This may shave a few cycles, because it can collapse multiple calls
into the crypto API into a single one, giving the crypto code a
larger buffer to work on.
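
Conceptually, the template would then do something like this per
request (a rough sketch only; the helper names here are made up):

	/* one request covering 'nsec' consecutive sectors */
	for (i = 0; i < nsec; i++) {
		derive_essiv_iv(iv, sector + i);  /* fresh IV per sector */
		process_sector(dst + i * sec_size,
			       src + i * sec_size, iv);
	}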

This is actually supported by CryptoCell hardware and to the best of
my knowledge also by a similar HW from Qualcomm
via out-of-tree patches found in the Android tree.

If this makes sense to you perhaps it is a good idea to have the
template format be:

<skcipher>,<cipher>,<shash>, <sector size>

Where, for now, we would only support a sector size of '0' (i.e. do
not auto-increment), and extend it later. Or am I over-engineering? :-)
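
For instance, today's 'essiv(cbc(aes),aes,sha256)' would then be
spelled 'essiv(cbc(aes),aes,sha256,0)'.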

Thanks,
Gilad


--
Gilad Ben-Yossef
Chief Coffee Drinker

values of β will give rise to dom!

2019-06-20 07:31:15

by Ard Biesheuvel

[permalink] [raw]
Subject: Re: [PATCH v3 1/6] crypto: essiv - create wrapper template for ESSIV generation

On Thu, 20 Jun 2019 at 03:14, Herbert Xu <[email protected]> wrote:
>
> On Wed, Jun 19, 2019 at 06:04:17PM -0700, Eric Biggers wrote:
> >
> > > +#define ESSIV_IV_SIZE sizeof(u64) // IV size of the outer algo
> > > +#define MAX_INNER_IV_SIZE 16 // max IV size of inner algo
> >
> > Why does the outer algorithm declare a smaller IV size? Shouldn't it just be
> > the same as the inner algorithm's?
>
> In general we allow outer algorithms to have distinct IV sizes
> compared to the inner algorithm. For example, rfc4106 has a
> different IV size compared to gcm.
>
> In this case, the outer IV size is the block number so that's
> presumably why 64 bits is sufficient. Do you forsee a case where
> we need 128-bit block numbers?
>

Indeed, the whole point of this template is that it turns a 64-bit
sector number into an n-bit IV, where n equals the block size of the
essiv cipher, and its min/max keysize covers the digest size of the
shash.
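
In other words, the template computes roughly the following (a sketch
only, with error handling omitted; not the actual implementation):

	/* setkey: key the ESSIV cipher with a hash of the inner key */
	crypto_shash_digest(hash_desc, key, keylen, salt);
	crypto_cipher_setkey(essiv_cipher, salt, digestsize);

	/* per request: widen the LE 64-bit sector number to the block
	 * size of the essiv cipher, and encrypt it to obtain the IV
	 * that is actually passed to the inner skcipher/aead
	 */
	memset(iv, 0, crypto_cipher_blocksize(essiv_cipher));
	put_unaligned_le64(sector, iv);
	crypto_cipher_encrypt_one(essiv_cipher, iv, iv);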

I don't think it makes sense to generalize this further, and if I
understand the feedback from Herbert and Gilad correctly, it would
even be better to define the input IV as an LE 64-bit counter
explicitly, so we can auto-increment it between sectors.

But that leaves the question of how to convey the sector size to the
template. Gilad suggests

essiv(cbc(aes),aes,sha256,xxx)

where xxx is the sector size, and incoming requests whose cryptlen is
an exact multiple of the sector size will have their LE counter auto
incremented between sectors. Note that we could make it optional for
now, and default to 4k, but I will at least have to parse the argument
if it is present and reject values != 4096.

Is this the right approach? Or are there better ways to convey this
information when instantiating the template?
Also, it seems to me that the dm-crypt and fscrypt layers would
require major surgery in order to take advantage of this.

2019-06-20 11:23:27

by Milan Broz

[permalink] [raw]
Subject: Re: [PATCH v3 0/6] crypto: switch to crypto API for ESSIV generation

On 19/06/2019 18:29, Ard Biesheuvel wrote:
> This series creates an ESSIV template that produces a skcipher or AEAD
> transform based on a tuple of the form '<skcipher>,<cipher>,<shash>'
> (or '<aead>,<cipher>,<shash>' for the AEAD case). It exposes the
> encapsulated sync or async skcipher/aead by passing through all operations,
> while using the cipher/shash pair to transform the input IV into an ESSIV
> output IV.
>
> This matches what both users of ESSIV in the kernel do, and so it is proposed
> as a replacement for those, in patches #2 and #4.
>
> This code has been tested using the fscrypt test suggested by Eric
> (generic/549), as well as the mode-test script suggested by Milan for
> the dm-crypt case. I also tested the aead case in a virtual machine,
> but it definitely needs some wider testing from the dm-crypt experts.
>
> Changes since v2:
> - fixed a couple of bugs that snuck in after I'd done the bulk of my
> testing
> - some cosmetic tweaks to the ESSIV template skcipher setkey function
> to align it with the aead one
> - add a test case for essiv(cbc(aes),aes,sha256)
> - add an accelerated implementation for arm64 that combines the IV
> derivation and the actual en/decryption in a single asm routine

I ran tests for the whole patchset, including some older scripts, and
it seems it works for dm-crypt now.

For the new CRYPTO_ESSIV option: dm-crypt must unconditionally
select it (we rely on the availability of all IV generators in
userspace), but that's already done in patch 4.

Thanks,
Milan

2019-06-20 11:30:19

by Milan Broz

[permalink] [raw]
Subject: Re: [PATCH v3 6/6] crypto: arm64/aes - implement accelerated ESSIV/CBC mode


On 20/06/2019 00:37, Eric Biggers wrote:
> On Wed, Jun 19, 2019 at 06:29:21PM +0200, Ard Biesheuvel wrote:
>> Add an accelerated version of the 'essiv(cbc(aes),aes,sha256)'
>> skcipher, which is used by fscrypt, and in some cases, by dm-crypt.
>> This avoids a separate call into the AES cipher for every invocation.
>>
>> Signed-off-by: Ard Biesheuvel <[email protected]>
>
> I'm not sure we should bother with this, since fscrypt normally uses AES-256-XTS
> for contents encryption. AES-128-CBC-ESSIV support was only added because
> people wanted something that is fast on low-powered embedded devices with crypto
> accelerators such as CAAM or CESA that don't support XTS.
>
> In the case of Android, the CDD doesn't even allow AES-128-CBC-ESSIV with
> file-based encryption (fscrypt). It's still the default for "full disk
> encryption" (which uses dm-crypt), but that's being deprecated.
>
> So maybe dm-crypt users will want this, but I don't think it's very useful for
> fscrypt.

The aes-cbc-essiv:sha256 mode is still the default for plain cryptsetup
devices (LUKS has already used XTS as its default for several years).

The reason is compatibility with older distros (if there is no cipher mode
specification in crypttab for a plain device, switching the default could cause data corruption).

But I think initscripts have been enforcing the cipher and keysize crypttab options for some time now,
so I can probably switch the default to XTS for plain devices soon.
(We already have a compile-time option for it anyway.)

IOW, the intention for dm-crypt is to gradually deprecate CBC mode use for all types of devices.

Milan

2019-06-20 11:55:14

by Ard Biesheuvel

[permalink] [raw]
Subject: Re: [PATCH v3 0/6] crypto: switch to crypto API for ESSIV generation

On Thu, 20 Jun 2019 at 13:22, Milan Broz <[email protected]> wrote:
>
> On 19/06/2019 18:29, Ard Biesheuvel wrote:
> > This series creates an ESSIV template that produces a skcipher or AEAD
> > transform based on a tuple of the form '<skcipher>,<cipher>,<shash>'
> > (or '<aead>,<cipher>,<shash>' for the AEAD case). It exposes the
> > encapsulated sync or async skcipher/aead by passing through all operations,
> > while using the cipher/shash pair to transform the input IV into an ESSIV
> > output IV.
> >
> > This matches what both users of ESSIV in the kernel do, and so it is proposed
> > as a replacement for those, in patches #2 and #4.
> >
> > This code has been tested using the fscrypt test suggested by Eric
> > (generic/549), as well as the mode-test script suggested by Milan for
> > the dm-crypt case. I also tested the aead case in a virtual machine,
> > but it definitely needs some wider testing from the dm-crypt experts.
> >
> > Changes since v2:
> > - fixed a couple of bugs that snuck in after I'd done the bulk of my
> > testing
> > - some cosmetic tweaks to the ESSIV template skcipher setkey function
> > to align it with the aead one
> > - add a test case for essiv(cbc(aes),aes,sha256)
> > - add an accelerated implementation for arm64 that combines the IV
> > derivation and the actual en/decryption in a single asm routine
>
> I run tests for the whole patchset, including some older scripts and seems
> it works for dm-crypt now.
>

Thanks Milan, that is really helpful.

Does this include configurations that combine authenc with essiv?

> For the new CRYPTO_ESSIV option - dm-crypt must unconditionally
> select it (we rely on all IV generators availability in userspace),
> but that's already done in patch 4.
>

Indeed.

2019-06-20 12:10:10

by Milan Broz

[permalink] [raw]
Subject: Re: [PATCH v3 0/6] crypto: switch to crypto API for ESSIV generation

On 20/06/2019 13:54, Ard Biesheuvel wrote:
> On Thu, 20 Jun 2019 at 13:22, Milan Broz <[email protected]> wrote:
>>
>> On 19/06/2019 18:29, Ard Biesheuvel wrote:
>>> This series creates an ESSIV template that produces a skcipher or AEAD
>>> transform based on a tuple of the form '<skcipher>,<cipher>,<shash>'
>>> (or '<aead>,<cipher>,<shash>' for the AEAD case). It exposes the
>>> encapsulated sync or async skcipher/aead by passing through all operations,
>>> while using the cipher/shash pair to transform the input IV into an ESSIV
>>> output IV.
>>>
>>> This matches what both users of ESSIV in the kernel do, and so it is proposed
>>> as a replacement for those, in patches #2 and #4.
>>>
>>> This code has been tested using the fscrypt test suggested by Eric
>>> (generic/549), as well as the mode-test script suggested by Milan for
>>> the dm-crypt case. I also tested the aead case in a virtual machine,
>>> but it definitely needs some wider testing from the dm-crypt experts.
>>>
>>> Changes since v2:
>>> - fixed a couple of bugs that snuck in after I'd done the bulk of my
>>> testing
>>> - some cosmetic tweaks to the ESSIV template skcipher setkey function
>>> to align it with the aead one
>>> - add a test case for essiv(cbc(aes),aes,sha256)
>>> - add an accelerated implementation for arm64 that combines the IV
>>> derivation and the actual en/decryption in a single asm routine
>>
>> I run tests for the whole patchset, including some older scripts and seems
>> it works for dm-crypt now.
>>
>
> Thanks Milan, that is really helpful.
>
> Does this include configurations that combine authenc with essiv?

Hm, seems that we are missing these in luks2-integrity-test. I'll add them there.

I also used this older test
https://gitlab.com/omos/dm-crypt-test-scripts/blob/master/root/test_dmintegrity.sh

(just aes-gcm-random needs to be commented out; we never supported this format, it was
written for some devel version)

But it seems ESSIV is only tested there without AEAD composition...

So yes, this AEAD part needs more testing.

Milan

2019-06-20 12:52:47

by Herbert Xu

[permalink] [raw]
Subject: Re: [PATCH v3 1/6] crypto: essiv - create wrapper template for ESSIV generation

On Thu, Jun 20, 2019 at 09:30:41AM +0200, Ard Biesheuvel wrote:
>
> Is this the right approach? Or are there better ways to convey this
> information when instantiating the template?
> Also, it seems to me that the dm-crypt and fscrypt layers would
> require major surgery in order to take advantage of this.

My preference would be to encode the sector size into the key.
Hardware that can only support some sector sizes can use fallbacks
as usual.
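
E.g., the key blob passed to setkey could carry a prefix along these
lines (purely illustrative, not an existing format):

	struct essiv_keyblob {
		__le32 sector_size;	/* e.g. 512 or 4096 */
		u8 raw_key[];		/* the actual inner key */
	};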

Cheers,
--
Email: Herbert Xu <[email protected]>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt

2019-06-20 12:54:01

by Herbert Xu

[permalink] [raw]
Subject: Re: [PATCH v3 1/6] crypto: essiv - create wrapper template for ESSIV generation

On Thu, Jun 20, 2019 at 09:30:41AM +0200, Ard Biesheuvel wrote:
>
> Is this the right approach? Or are there better ways to convey this
> information when instantiating the template?
> Also, it seems to me that the dm-crypt and fscrypt layers would
> require major surgery in order to take advantage of this.

Oh and you don't have to make dm-crypt use it from the start. That
is, you can just make things simple by doing it one sector at a
time in the dm-crypt code even though the underlying essiv code
supports multiple sectors.

Someone who cares about this is sure to come along and fix it later.

Cheers,
--
Email: Herbert Xu <[email protected]>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt

2019-06-20 13:02:34

by Ard Biesheuvel

[permalink] [raw]
Subject: Re: [PATCH v3 1/6] crypto: essiv - create wrapper template for ESSIV generation

On Thu, 20 Jun 2019 at 14:53, Herbert Xu <[email protected]> wrote:
>
> On Thu, Jun 20, 2019 at 09:30:41AM +0200, Ard Biesheuvel wrote:
> >
> > Is this the right approach? Or are there better ways to convey this
> > information when instantiating the template?
> > Also, it seems to me that the dm-crypt and fscrypt layers would
> > require major surgery in order to take advantage of this.
>
> Oh and you don't have to make dm-crypt use it from the start. That
> is, you can just make things simple by doing it one sector at a
> time in the dm-crypt code even though the underlying essiv code
> supports multiple sectors.
>
> Someone who cares about this is sure to come along and fix it later.
>

It also depends on how realistic it is that we will need to support
arbitrary sector sizes in the future. I mean, if we decide today that
essiv() uses an implicit sector size of 4k, we can always add
essiv64k() later, rather than adding lots of complexity now that we
are never going to use. Note that ESSIV is already more or less
deprecated, so there is really no point in inventing these weird and
wonderful things if we want people to move to XTS and plain IV
generation instead.

2019-06-20 13:15:39

by Milan Broz

[permalink] [raw]
Subject: Re: [PATCH v3 0/6] crypto: switch to crypto API for ESSIV generation

On 20/06/2019 14:09, Milan Broz wrote:
> On 20/06/2019 13:54, Ard Biesheuvel wrote:
>> On Thu, 20 Jun 2019 at 13:22, Milan Broz <[email protected]> wrote:
>>>
>>> On 19/06/2019 18:29, Ard Biesheuvel wrote:
>>>> This series creates an ESSIV template that produces a skcipher or AEAD
>>>> transform based on a tuple of the form '<skcipher>,<cipher>,<shash>'
>>>> (or '<aead>,<cipher>,<shash>' for the AEAD case). It exposes the
>>>> encapsulated sync or async skcipher/aead by passing through all operations,
>>>> while using the cipher/shash pair to transform the input IV into an ESSIV
>>>> output IV.
>>>>
>>>> This matches what both users of ESSIV in the kernel do, and so it is proposed
>>>> as a replacement for those, in patches #2 and #4.
>>>>
>>>> This code has been tested using the fscrypt test suggested by Eric
>>>> (generic/549), as well as the mode-test script suggested by Milan for
>>>> the dm-crypt case. I also tested the aead case in a virtual machine,
>>>> but it definitely needs some wider testing from the dm-crypt experts.
>>>>
>>>> Changes since v2:
>>>> - fixed a couple of bugs that snuck in after I'd done the bulk of my
>>>> testing
>>>> - some cosmetic tweaks to the ESSIV template skcipher setkey function
>>>> to align it with the aead one
>>>> - add a test case for essiv(cbc(aes),aes,sha256)
>>>> - add an accelerated implementation for arm64 that combines the IV
>>>> derivation and the actual en/decryption in a single asm routine
>>>
>>> I run tests for the whole patchset, including some older scripts and seems
>>> it works for dm-crypt now.
>>>
>>
>> Thanks Milan, that is really helpful.
>>
>> Does this include configurations that combine authenc with essiv?
>
> Hm, seems that we are missing these in luks2-integrity-test. I'll add them there.
>
> I also used this older test
> https://gitlab.com/omos/dm-crypt-test-scripts/blob/master/root/test_dmintegrity.sh
>
> (just aes-gcm-random need to be commented out, we never supported this format, it was
> written for some devel version)
>
> But seems ESSIV is there tested only without AEAD composition...
>
> So yes, this AEAD part need more testing.

And unfortunately it does not work: it returns EIO on sectors where there should be no data corruption.

I added a few lines with a length-preserving mode with ESSIV + AEAD; could you please run luks2-integrity-test
in cryptsetup upstream?

This patch adds the tests:
https://gitlab.com/cryptsetup/cryptsetup/commit/4c74ff5e5ae328cb61b44bf99f98d08ffee3366a

It is ok on mainline kernel, fails with the patchset:

# ./luks2-integrity-test
[aes-cbc-essiv:sha256:hmac-sha256:128:512][FORMAT][ACTIVATE]sha256sum: /dev/mapper/dmi_test: Input/output error
[FAIL]
Expecting ee501705a084cd0ab6f4a28014bcf62b8bfa3434de00b82743c50b3abf06232c got .

FAILED backtrace:
77 ./luks2-integrity-test
112 intformat ./luks2-integrity-test
127 main ./luks2-integrity-test

Milan

2019-06-20 13:36:24

by Ard Biesheuvel

[permalink] [raw]
Subject: Re: [PATCH v3 1/6] crypto: essiv - create wrapper template for ESSIV generation

On Thu, 20 Jun 2019 at 15:02, Ard Biesheuvel <[email protected]> wrote:
>
> On Thu, 20 Jun 2019 at 14:53, Herbert Xu <[email protected]> wrote:
> >
> > On Thu, Jun 20, 2019 at 09:30:41AM +0200, Ard Biesheuvel wrote:
> > >
> > > Is this the right approach? Or are there better ways to convey this
> > > information when instantiating the template?
> > > Also, it seems to me that the dm-crypt and fscrypt layers would
> > > require major surgery in order to take advantage of this.
> >
> > Oh and you don't have to make dm-crypt use it from the start. That
> > is, you can just make things simple by doing it one sector at a
> > time in the dm-crypt code even though the underlying essiv code
> > supports multiple sectors.
> >
> > Someone who cares about this is sure to come along and fix it later.
> >
>
> It also depend on how realistic it is that we will need to support
> arbitrary sector sizes in the future. I mean, if we decide today that
> essiv() uses an implicit sector size of 4k, we can always add
> essiv64k() later, rather than adding lots of complexity now that we
> are never going to use. Note that ESSIV is already more or less
> deprecated, so there is really no point in inventing these weird and
> wonderful things if we want people to move to XTS and plain IV
> generation instead.

Never mind, the sector size is already variable ...

2019-06-20 13:41:08

by Herbert Xu

[permalink] [raw]
Subject: Re: [PATCH v3 1/6] crypto: essiv - create wrapper template for ESSIV generation

On Thu, Jun 20, 2019 at 03:02:04PM +0200, Ard Biesheuvel wrote:
>
> It also depend on how realistic it is that we will need to support
> arbitrary sector sizes in the future. I mean, if we decide today that
> essiv() uses an implicit sector size of 4k, we can always add
> essiv64k() later, rather than adding lots of complexity now that we
> are never going to use. Note that ESSIV is already more or less
> deprecated, so there is really no point in inventing these weird and
> wonderful things if we want people to move to XTS and plain IV
> generation instead.

Well whatever we do for ESSIV should also extend to other IV
generators in dm-crypt so that potentially we can have a single
interface for dm-crypt multi-sector processing in future (IOW
you don't have special code for ESSIV vs. other algos).

That is why we should get the ESSIV interface right as it could
serve as an example for future implementations.

What do the dm-crypt people think? Are you ever going to need
processing in units other than 4K?

Cheers,
--
Email: Herbert Xu <[email protected]>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt

2019-06-20 13:53:04

by Ard Biesheuvel

[permalink] [raw]
Subject: Re: [PATCH v3 0/6] crypto: switch to crypto API for ESSIV generation

On Thu, 20 Jun 2019 at 15:14, Milan Broz <[email protected]> wrote:
>
> On 20/06/2019 14:09, Milan Broz wrote:
> > On 20/06/2019 13:54, Ard Biesheuvel wrote:
> >> On Thu, 20 Jun 2019 at 13:22, Milan Broz <[email protected]> wrote:
> >>>
> >>> On 19/06/2019 18:29, Ard Biesheuvel wrote:
> >>>> This series creates an ESSIV template that produces a skcipher or AEAD
> >>>> transform based on a tuple of the form '<skcipher>,<cipher>,<shash>'
> >>>> (or '<aead>,<cipher>,<shash>' for the AEAD case). It exposes the
> >>>> encapsulated sync or async skcipher/aead by passing through all operations,
> >>>> while using the cipher/shash pair to transform the input IV into an ESSIV
> >>>> output IV.
> >>>>
> >>>> This matches what both users of ESSIV in the kernel do, and so it is proposed
> >>>> as a replacement for those, in patches #2 and #4.
> >>>>
> >>>> This code has been tested using the fscrypt test suggested by Eric
> >>>> (generic/549), as well as the mode-test script suggested by Milan for
> >>>> the dm-crypt case. I also tested the aead case in a virtual machine,
> >>>> but it definitely needs some wider testing from the dm-crypt experts.
> >>>>
> >>>> Changes since v2:
> >>>> - fixed a couple of bugs that snuck in after I'd done the bulk of my
> >>>> testing
> >>>> - some cosmetic tweaks to the ESSIV template skcipher setkey function
> >>>> to align it with the aead one
> >>>> - add a test case for essiv(cbc(aes),aes,sha256)
> >>>> - add an accelerated implementation for arm64 that combines the IV
> >>>> derivation and the actual en/decryption in a single asm routine
> >>>
> >>> I run tests for the whole patchset, including some older scripts and seems
> >>> it works for dm-crypt now.
> >>>
> >>
> >> Thanks Milan, that is really helpful.
> >>
> >> Does this include configurations that combine authenc with essiv?
> >
> > Hm, seems that we are missing these in luks2-integrity-test. I'll add them there.
> >
> > I also used this older test
> > https://gitlab.com/omos/dm-crypt-test-scripts/blob/master/root/test_dmintegrity.sh
> >
> > (just aes-gcm-random need to be commented out, we never supported this format, it was
> > written for some devel version)
> >
> > But seems ESSIV is there tested only without AEAD composition...
> >
> > So yes, this AEAD part need more testing.
>
> And unfortunately it does not work - it returns EIO on sectors where it should not be data corruption.
>
> I added few lines with length-preserving mode with ESSIV + AEAD, please could you run luks2-integrity-test
> in cryptsetup upstream?
>
> This patch adds the tests:
> https://gitlab.com/cryptsetup/cryptsetup/commit/4c74ff5e5ae328cb61b44bf99f98d08ffee3366a
>
> It is ok on mainline kernel, fails with the patchset:
>
> # ./luks2-integrity-test
> [aes-cbc-essiv:sha256:hmac-sha256:128:512][FORMAT][ACTIVATE]sha256sum: /dev/mapper/dmi_test: Input/output error
> [FAIL]
> Expecting ee501705a084cd0ab6f4a28014bcf62b8bfa3434de00b82743c50b3abf06232c got .
>
> FAILED backtrace:
> 77 ./luks2-integrity-test
> 112 intformat ./luks2-integrity-test
> 127 main ./luks2-integrity-test
>

OK, I will investigate.

I did my testing in a VM using a volume that was created using a
distro kernel, and mounted and used it using a kernel with these
changes applied.

Likewise, if I take a working key.img and mode-test.img, I can mount
it and use it on the system running these patches.

I noticed that this test uses algif_skcipher rather than algif_aead
when it formats the volume, so I wonder whether the way userland
creates the image is affected by this?

2019-06-20 13:54:14

by Ard Biesheuvel

[permalink] [raw]
Subject: Re: [PATCH v3 1/6] crypto: essiv - create wrapper template for ESSIV generation

On Thu, 20 Jun 2019 at 15:40, Herbert Xu <[email protected]> wrote:
>
> On Thu, Jun 20, 2019 at 03:02:04PM +0200, Ard Biesheuvel wrote:
> >
> > It also depend on how realistic it is that we will need to support
> > arbitrary sector sizes in the future. I mean, if we decide today that
> > essiv() uses an implicit sector size of 4k, we can always add
> > essiv64k() later, rather than adding lots of complexity now that we
> > are never going to use. Note that ESSIV is already more or less
> > deprecated, so there is really no point in inventing these weird and
> > wonderful things if we want people to move to XTS and plain IV
> > generation instead.
>
> Well whatever we do for ESSIV should also extend to other IV
> generators in dm-crypt so that potentially we can have a single
> interface for dm-crypt multi-sector processing in future (IOW
> you don't have special code for ESSIV vs. other algos).
>
> That is why we should get the ESSIV interface right as it could
> serve as an example for future implementations.
>
> What do the dm-crypt people think? Are you ever going to need
> processing in units other than 4K?
>

We'd need at least 512 and 4k for dm-crypt, but I don't think the
sector size is limited at all tbh

2019-06-20 18:27:51

by Eric Biggers

[permalink] [raw]
Subject: Re: [PATCH v3 1/6] crypto: essiv - create wrapper template for ESSIV generation

On Thu, Jun 20, 2019 at 09:30:41AM +0200, Ard Biesheuvel wrote:
> On Thu, 20 Jun 2019 at 03:14, Herbert Xu <[email protected]> wrote:
> >
> > On Wed, Jun 19, 2019 at 06:04:17PM -0700, Eric Biggers wrote:
> > >
> > > > +#define ESSIV_IV_SIZE sizeof(u64) // IV size of the outer algo
> > > > +#define MAX_INNER_IV_SIZE 16 // max IV size of inner algo
> > >
> > > Why does the outer algorithm declare a smaller IV size? Shouldn't it just be
> > > the same as the inner algorithm's?
> >
> > In general we allow outer algorithms to have distinct IV sizes
> > compared to the inner algorithm. For example, rfc4106 has a
> > different IV size compared to gcm.
> >
> > In this case, the outer IV size is the block number so that's
> > presumably why 64 bits is sufficient. Do you forsee a case where
> > we need 128-bit block numbers?
> >
>
> Indeed, the whole point of this template is that it turns a 64-bit
> sector number into a n-bit IV, where n equals the block size of the
> essiv cipher, and its min/max keysize covers the digest size of the
> shash.
>
> I don't think it makes sense to generalize this further, and if I
> understand the feedback from Herbert and Gilad correctly, it would
> even be better to define the input IV as a LE 64-bit counter
> explicitly, so we can auto increment it between sectors.
>

I was understanding ESSIV at a more abstract level, where you pass in some IV
(which may or may not contain a sector number of some particular length and
endianness) and it encrypts it.

I see that both fscrypt and dm-crypt use the convention of a __le64 sector
number, so it's probably reasonable to define the IV to be that. A brief
comment explaining this might be helpful, though.
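
E.g., something along these lines next to the definition (just a
suggestion):

	#define ESSIV_IV_SIZE	sizeof(__le64)	/* LE 64-bit sector number */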

- Eric

2019-06-21 01:07:23

by Herbert Xu

[permalink] [raw]
Subject: Re: [PATCH v3 1/6] crypto: essiv - create wrapper template for ESSIV generation

On Thu, Jun 20, 2019 at 03:53:45PM +0200, Ard Biesheuvel wrote:
>
> We'd need at least 512 and 4k for dm-crypt, but I don't think the
> sector size is limited at all tbh

In that case my preference would be to encode this into the key,
and hardware that encounters unsupported sector sizes can use a
fallback.

Thanks,
--
Email: Herbert Xu <[email protected]>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt

2019-06-21 05:40:02

by Ard Biesheuvel

[permalink] [raw]
Subject: Re: [PATCH v3 1/6] crypto: essiv - create wrapper template for ESSIV generation

On Fri, 21 Jun 2019 at 03:07, Herbert Xu <[email protected]> wrote:
>
> On Thu, Jun 20, 2019 at 03:53:45PM +0200, Ard Biesheuvel wrote:
> >
> > We'd need at least 512 and 4k for dm-crypt, but I don't think the
> > sector size is limited at all tbh
>
> In that case my preference would be to encode this into the key
> and hardware that encounters unsupported sector sizes can use a
> fallback.
>

OTOH, it also depends on what makes sense to implement in practice.

Gilad, I suppose sector size 512 is an obvious win, since the OS
always fetches at least 8 consecutive ones at a time. Do you see a
benefit for other sector sizes as well?

2019-06-21 06:44:55

by Milan Broz

[permalink] [raw]
Subject: Re: [PATCH v3 1/6] crypto: essiv - create wrapper template for ESSIV generation

On 20/06/2019 15:40, Herbert Xu wrote:
> On Thu, Jun 20, 2019 at 03:02:04PM +0200, Ard Biesheuvel wrote:
>>
>> It also depend on how realistic it is that we will need to support
>> arbitrary sector sizes in the future. I mean, if we decide today that
>> essiv() uses an implicit sector size of 4k, we can always add
>> essiv64k() later, rather than adding lots of complexity now that we
>> are never going to use. Note that ESSIV is already more or less
>> deprecated, so there is really no point in inventing these weird and
>> wonderful things if we want people to move to XTS and plain IV
>> generation instead.
>
> Well whatever we do for ESSIV should also extend to other IV
> generators in dm-crypt so that potentially we can have a single
> interface for dm-crypt multi-sector processing in future (IOW
> you don't have special code for ESSIV vs. other algos).
>
> That is why we should get the ESSIV interface right as it could
> serve as an example for future implementations.
>
> What do the dm-crypt people think? Are you ever going to need
> processing in units other than 4K?

For the "technical" limit, dm-crypt supports 512, 1024, 2048 and 4096-byte encryption
sector size (power of two) since commit 8f0009a225171cc1b76a6b443de5137b26e1374b.

As the commit says, the 4k limit exists because the whole IO must fit
within a page (4k being the minimal page size).
I do not want to introduce devices that are created on some architecture
and cannot be opened elsewhere with a smaller page size.
But maybe some reason will appear, or there is some trick we have not tried...
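
(For reference, cryptsetup exposes this as the --sector-size option,
e.g. 'cryptsetup luksFormat --type luks2 --sector-size 4096 <dev>'.)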

(I guess fs has the same limits?)

Milan

2019-06-21 07:01:45

by Milan Broz

[permalink] [raw]
Subject: Re: [PATCH v3 0/6] crypto: switch to crypto API for ESSIV generation

On 20/06/2019 15:52, Ard Biesheuvel wrote:
>>>> Does this include configurations that combine authenc with essiv?
>>>
>>> Hm, seems that we are missing these in luks2-integrity-test. I'll add them there.
>>>
>>> I also used this older test
>>> https://gitlab.com/omos/dm-crypt-test-scripts/blob/master/root/test_dmintegrity.sh
>>>
>>> (just aes-gcm-random need to be commented out, we never supported this format, it was
>>> written for some devel version)
>>>
>>> But seems ESSIV is there tested only without AEAD composition...
>>>
>>> So yes, this AEAD part need more testing.
>>
>> And unfortunately it does not work - it returns EIO on sectors where it should not be data corruption.
>>
>> I added few lines with length-preserving mode with ESSIV + AEAD, please could you run luks2-integrity-test
>> in cryptsetup upstream?
>>
>> This patch adds the tests:
>> https://gitlab.com/cryptsetup/cryptsetup/commit/4c74ff5e5ae328cb61b44bf99f98d08ffee3366a
>>
>> It is ok on mainline kernel, fails with the patchset:
>>
>> # ./luks2-integrity-test
>> [aes-cbc-essiv:sha256:hmac-sha256:128:512][FORMAT][ACTIVATE]sha256sum: /dev/mapper/dmi_test: Input/output error
>> [FAIL]
>> Expecting ee501705a084cd0ab6f4a28014bcf62b8bfa3434de00b82743c50b3abf06232c got .
>>
>> FAILED backtrace:
>> 77 ./luks2-integrity-test
>> 112 intformat ./luks2-integrity-test
>> 127 main ./luks2-integrity-test
>>
>
> OK, I will investigate.
>
> I did my testing in a VM using a volume that was created using a
> distro kernel, and mounted and used it using a kernel with these
> changes applied.
>
> Likewise, if I take a working key.img and mode-test.img, i can mount
> it and use it on the system running these patches.
>
> I noticed that this test uses algif_skcipher not algif_aead when it
> formats the volume, and so I wonder if the way userland creates the
> image is affected by this?

Not sure if I understand the question, but I do not think userspace even touches the data area here
(except the direct-io wiping after the format, but it does not read it back).

It only encrypts keyslots - and here we cannot use AEAD (in fact it is already
authenticated by a LUKS digest).

So if the data area uses AEAD (or a composition of a length-preserving mode and
some authentication tag like HMAC), we fall back to non-AEAD for keyslot encryption.

In short, to test it, you need to activate the device (that works OK with your patches)
and *access* the data; testing the LUKS format and keyslot access alone will never use AEAD.

So init the data by direct-io writes, and try to read them back (with dd).

For testing data on dm-integrity (or dm-crypt with AEAD encryption stacked over dm-integrity)
I used a small utility; maybe it could be useful: https://github.com/mbroz/dm_int_tools

Milan

2019-06-21 07:07:20

by Ard Biesheuvel

[permalink] [raw]
Subject: Re: [PATCH v3 0/6] crypto: switch to crypto API for ESSIV generation

On Fri, 21 Jun 2019 at 09:01, Milan Broz <[email protected]> wrote:
>
> On 20/06/2019 15:52, Ard Biesheuvel wrote:
> >>>> Does this include configurations that combine authenc with essiv?
> >>>
> >>> Hm, seems that we are missing these in luks2-integrity-test. I'll add them there.
> >>>
> >>> I also used this older test
> >>> https://gitlab.com/omos/dm-crypt-test-scripts/blob/master/root/test_dmintegrity.sh
> >>>
> >>> (just aes-gcm-random need to be commented out, we never supported this format, it was
> >>> written for some devel version)
> >>>
> >>> But seems ESSIV is there tested only without AEAD composition...
> >>>
> >>> So yes, this AEAD part need more testing.
> >>
> >> And unfortunately it does not work - it returns EIO on sectors where it should not be data corruption.
> >>
> >> I added few lines with length-preserving mode with ESSIV + AEAD, please could you run luks2-integrity-test
> >> in cryptsetup upstream?
> >>
> >> This patch adds the tests:
> >> https://gitlab.com/cryptsetup/cryptsetup/commit/4c74ff5e5ae328cb61b44bf99f98d08ffee3366a
> >>
> >> It is ok on mainline kernel, fails with the patchset:
> >>
> >> # ./luks2-integrity-test
> >> [aes-cbc-essiv:sha256:hmac-sha256:128:512][FORMAT][ACTIVATE]sha256sum: /dev/mapper/dmi_test: Input/output error
> >> [FAIL]
> >> Expecting ee501705a084cd0ab6f4a28014bcf62b8bfa3434de00b82743c50b3abf06232c got .
> >>
> >> FAILED backtrace:
> >> 77 ./luks2-integrity-test
> >> 112 intformat ./luks2-integrity-test
> >> 127 main ./luks2-integrity-test
> >>
> >
> > OK, I will investigate.
> >
> > I did my testing in a VM using a volume that was created using a
> > distro kernel, and mounted and used it using a kernel with these
> > changes applied.
> >
> > Likewise, if I take a working key.img and mode-test.img, i can mount
> > it and use it on the system running these patches.
> >
> > I noticed that this test uses algif_skcipher not algif_aead when it
> > formats the volume, and so I wonder if the way userland creates the
> > image is affected by this?
>
> Not sure if I understand the question, but I do not think userspace even touch data area here
> (except direct-io wiping after the format, but it does not read it back).
>
> It only encrypts keyslots - and here we cannot use AEAD (in fact it is already
> authenticated by a LUKS digest).
>
> So if the data area uses AEAD (or composition of length-preserving mode and
> some authentication tag like HMAC), we fallback to non-AEAD for keyslot encryption.
>
> In short, to test it, you need to activate device (that works ok with your patches)
> and *access* the data, testing LUKS format and just keyslot access will never use AEAD.
>
> So init the data by direct-io writes, and try to read them back (with dd).
>
> For testing data on dm-integrity (or dm-crypt with AEAD encryption stacked oved dm-integrity)
> I used small utility, maybe it could be useful https://github.com/mbroz/dm_int_tools
>

Thanks.

It appears that my code generates the wrong authentication tags on
encryption, but on decryption it works fine.
I'll keep digging ...

2019-06-21 07:38:03

by Ard Biesheuvel

[permalink] [raw]
Subject: Re: [PATCH v3 0/6] crypto: switch to crypto API for ESSIV generation

On Fri, 21 Jun 2019 at 09:06, Ard Biesheuvel <[email protected]> wrote:
>
> On Fri, 21 Jun 2019 at 09:01, Milan Broz <[email protected]> wrote:
> >
> > On 20/06/2019 15:52, Ard Biesheuvel wrote:
> > >>>> Does this include configurations that combine authenc with essiv?
> > >>>
> > >>> Hm, seems that we are missing these in luks2-integrity-test. I'll add them there.
> > >>>
> > >>> I also used this older test
> > >>> https://gitlab.com/omos/dm-crypt-test-scripts/blob/master/root/test_dmintegrity.sh
> > >>>
> > >>> (just aes-gcm-random need to be commented out, we never supported this format, it was
> > >>> written for some devel version)
> > >>>
> > >>> But seems ESSIV is there tested only without AEAD composition...
> > >>>
> > >>> So yes, this AEAD part need more testing.
> > >>
> > >> And unfortunately it does not work - it returns EIO on sectors where it should not be data corruption.
> > >>
> > >> I added few lines with length-preserving mode with ESSIV + AEAD, please could you run luks2-integrity-test
> > >> in cryptsetup upstream?
> > >>
> > >> This patch adds the tests:
> > >> https://gitlab.com/cryptsetup/cryptsetup/commit/4c74ff5e5ae328cb61b44bf99f98d08ffee3366a
> > >>
> > >> It is ok on mainline kernel, fails with the patchset:
> > >>
> > >> # ./luks2-integrity-test
> > >> [aes-cbc-essiv:sha256:hmac-sha256:128:512][FORMAT][ACTIVATE]sha256sum: /dev/mapper/dmi_test: Input/output error
> > >> [FAIL]
> > >> Expecting ee501705a084cd0ab6f4a28014bcf62b8bfa3434de00b82743c50b3abf06232c got .
> > >>
> > >> FAILED backtrace:
> > >> 77 ./luks2-integrity-test
> > >> 112 intformat ./luks2-integrity-test
> > >> 127 main ./luks2-integrity-test
> > >>
> > >
> > > OK, I will investigate.
> > >
> > > I did my testing in a VM using a volume that was created using a
> > > distro kernel, and mounted and used it using a kernel with these
> > > changes applied.
> > >
> > > Likewise, if I take a working key.img and mode-test.img, i can mount
> > > it and use it on the system running these patches.
> > >
> > > I noticed that this test uses algif_skcipher not algif_aead when it
> > > formats the volume, and so I wonder if the way userland creates the
> > > image is affected by this?
> >
> > Not sure if I understand the question, but I do not think userspace even touch data area here
> > (except direct-io wiping after the format, but it does not read it back).
> >
> > It only encrypts keyslots - and here we cannot use AEAD (in fact it is already
> > authenticated by a LUKS digest).
> >
> > So if the data area uses AEAD (or composition of length-preserving mode and
> > some authentication tag like HMAC), we fallback to non-AEAD for keyslot encryption.
> >
> > In short, to test it, you need to activate device (that works ok with your patches)
> > and *access* the data, testing LUKS format and just keyslot access will never use AEAD.
> >
> > So init the data by direct-io writes, and try to read them back (with dd).
> >
> > For testing data on dm-integrity (or dm-crypt with AEAD encryption stacked oved dm-integrity)
> > I used small utility, maybe it could be useful https://github.com/mbroz/dm_int_tools
> >
>
> Thanks.
>
> It appears that my code generates the wrong authentication tags on
> encryption, but on decryption it works fine.
> I'll keep digging ...

OK, mystery solved.

The skcipher inside authenc() was corrupting the IV before the hmac
got a chance to read it (the CBC skcipher updates the IV buffer in
place with the last ciphertext block, while the hmac still needs the
original input IV as part of the associated data).

I'll send out an updated version of the series.

2019-06-26 04:34:05

by Eric Biggers

[permalink] [raw]
Subject: Re: [PATCH v3 6/6] crypto: arm64/aes - implement accelerated ESSIV/CBC mode

On Wed, Jun 19, 2019 at 06:29:21PM +0200, Ard Biesheuvel wrote:
> Add an accelerated version of the 'essiv(cbc(aes),aes,sha256)'
> skcipher, which is used by fscrypt, and in some cases, by dm-crypt.
> This avoids a separate call into the AES cipher for every invocation.
>
> Signed-off-by: Ard Biesheuvel <[email protected]>

This patch causes a self-tests failure:

[ 26.787681] alg: skcipher: essiv-cbc-aes-sha256-neon encryption test failed (wrong result) on test vector 1, cfg="two even aligned splits"

- Eric