2019-06-21 08:09:56

by Ard Biesheuvel

Subject: [PATCH v4 0/6] crypto: switch to crypto API for ESSIV generation

From: Ard Biesheuvel <[email protected]>

This series creates an ESSIV template that produces a skcipher or AEAD
transform based on a tuple of the form '<skcipher>,<cipher>,<shash>'
(or '<aead>,<cipher>,<shash>' for the AEAD case). It exposes the
encapsulated sync or async skcipher/aead by passing through all operations,
while using the cipher/shash pair to transform the input IV into an ESSIV
output IV.
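
A caller sees the result as an ordinary skcipher with an 8-byte IV. As
a rough illustration (not taken from the series; 'key', 'data' and
'sector' are made-up placeholders, and error handling is omitted):

  struct crypto_skcipher *tfm;
  struct skcipher_request *req;
  DECLARE_CRYPTO_WAIT(wait);
  struct scatterlist sg;
  u8 iv[8];

  tfm = crypto_alloc_skcipher("essiv(cbc(aes),aes,sha256)", 0, 0);
  crypto_skcipher_setkey(tfm, key, keylen);

  req = skcipher_request_alloc(tfm, GFP_KERNEL);
  sg_init_one(&sg, data, datalen);
  *(__le64 *)iv = cpu_to_le64(sector);     /* 64-bit LE sector number */
  skcipher_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG,
                                crypto_req_done, &wait);
  skcipher_request_set_crypt(req, &sg, &sg, datalen, iv);
  crypto_wait_req(crypto_skcipher_encrypt(req), &wait);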

This matches what both in-kernel users of ESSIV (fscrypt and dm-crypt)
do, and so it is proposed as a replacement for their open-coded
versions in patches #2 and #4.

This code has been tested using the fscrypt test suggested by Eric
(generic/549), as well as the mode-test script suggested by Milan for
the dm-crypt case. I also tested the AEAD case in a virtual machine,
but it definitely needs some wider testing from the dm-crypt experts.

Open issues (more discussion needed):
- given that hardware already exists that can perform en/decryption including
ESSIV generation of a range of blocks, it would be useful to encapsulate
this in the ESSIV template, and teach at least dm-crypt how to use it
(given that it often processes 8 512-byte sectors at a time)
- given the above, it may or may not make sense to keep the accelerated
implementation in patch #6 (and teach it to increment the LE counter after
each sector)

Changes since v3:
- address various review comments from Eric on patch #1
- use Kconfig's 'imply' instead of 'select' to permit CRYPTO_ESSIV to be
enabled as a module or disabled entirely even if fscrypt is compiled in (#2)
- fix an issue in the AEAD encrypt path caused by the IV being clobbered by
the inner skcipher before the HMAC was calculated

Changes since v2:
- fixed a couple of bugs that snuck in after I'd done the bulk of my
testing
- some cosmetic tweaks to the ESSIV template skcipher setkey function
to align it with the AEAD one
- add a test case for essiv(cbc(aes),aes,sha256)
- add an accelerated implementation for arm64 that combines the IV
derivation and the actual en/decryption in a single asm routine

Scroll down for tcrypt speed test results comparing the essiv template
with the asm implementation; bare cbc(aes) numbers are included for
reference as well. Taken on a 2 GHz Cortex-A57 (AMD Seattle) using the
tcrypt mode 220 test added in patch #5.

Code can be found here:
https://git.kernel.org/pub/scm/linux/kernel/git/ardb/linux.git/log/?h=essiv-v4

Cc: Herbert Xu <[email protected]>
Cc: Eric Biggers <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: Gilad Ben-Yossef <[email protected]>
Cc: Milan Broz <[email protected]>

Ard Biesheuvel (6):
crypto: essiv - create wrapper template for ESSIV generation
fs: crypto: invoke crypto API for ESSIV handling
md: dm-crypt: infer ESSIV block cipher from cipher string directly
md: dm-crypt: switch to ESSIV crypto API template
crypto: essiv - add test vector for essiv(cbc(aes),aes,sha256)
crypto: arm64/aes - implement accelerated ESSIV/CBC mode

arch/arm64/crypto/aes-glue.c | 129 ++++
arch/arm64/crypto/aes-modes.S | 99 +++
crypto/Kconfig | 4 +
crypto/Makefile | 1 +
crypto/essiv.c | 639 ++++++++++++++++++++
crypto/tcrypt.c | 9 +
crypto/testmgr.c | 6 +
crypto/testmgr.h | 208 +++++++
drivers/md/Kconfig | 1 +
drivers/md/dm-crypt.c | 237 ++------
fs/crypto/Kconfig | 1 +
fs/crypto/crypto.c | 5 -
fs/crypto/fscrypt_private.h | 9 -
fs/crypto/keyinfo.c | 88 +--
14 files changed, 1141 insertions(+), 295 deletions(-)
create mode 100644 crypto/essiv.c

--
2.17.1

testing speed of async essiv(cbc(aes),aes,sha256) (essiv(cbc-aes-ce,aes-ce,sha256-ce)) encryption
tcrypt: test 0 (128 bit key, 16 byte blocks): 3140785 ops/s ( 50252560 bytes)
tcrypt: test 1 (128 bit key, 64 byte blocks): 2672908 ops/s (171066112 bytes)
tcrypt: test 2 (128 bit key, 256 byte blocks): 1632811 ops/s (417999616 bytes)
tcrypt: test 3 (128 bit key, 1024 byte blocks): 665980 ops/s (681963520 bytes)
tcrypt: test 4 (128 bit key, 1472 byte blocks): 495180 ops/s (728904960 bytes)
tcrypt: test 5 (128 bit key, 8192 byte blocks): 99329 ops/s (813703168 bytes)
tcrypt: test 6 (192 bit key, 16 byte blocks): 3106888 ops/s ( 49710208 bytes)
tcrypt: test 7 (192 bit key, 64 byte blocks): 2582682 ops/s (165291648 bytes)
tcrypt: test 8 (192 bit key, 256 byte blocks): 1511160 ops/s (386856960 bytes)
tcrypt: test 9 (192 bit key, 1024 byte blocks): 589841 ops/s (603997184 bytes)
tcrypt: test 10 (192 bit key, 1472 byte blocks): 435094 ops/s (640458368 bytes)
tcrypt: test 11 (192 bit key, 8192 byte blocks): 82997 ops/s (679911424 bytes)
tcrypt: test 12 (256 bit key, 16 byte blocks): 3058592 ops/s ( 48937472 bytes)
tcrypt: test 13 (256 bit key, 64 byte blocks): 2496988 ops/s (159807232 bytes)
tcrypt: test 14 (256 bit key, 256 byte blocks): 1438355 ops/s (368218880 bytes)
tcrypt: test 15 (256 bit key, 1024 byte blocks): 528902 ops/s (541595648 bytes)
tcrypt: test 16 (256 bit key, 1472 byte blocks): 387861 ops/s (570931392 bytes)
tcrypt: test 17 (256 bit key, 8192 byte blocks): 75444 ops/s (618037248 bytes)

testing speed of async essiv(cbc(aes),aes,sha256) (essiv(cbc-aes-ce,aes-ce,sha256-ce)) decryption
tcrypt: test 0 (128 bit key, 16 byte blocks): 3164752 ops/s ( 50636032 bytes)
tcrypt: test 1 (128 bit key, 64 byte blocks): 2975874 ops/s ( 190455936 bytes)
tcrypt: test 2 (128 bit key, 256 byte blocks): 2393123 ops/s ( 612639488 bytes)
tcrypt: test 3 (128 bit key, 1024 byte blocks): 1314745 ops/s (1346298880 bytes)
tcrypt: test 4 (128 bit key, 1472 byte blocks): 1050717 ops/s (1546655424 bytes)
tcrypt: test 5 (128 bit key, 8192 byte blocks): 246457 ops/s (2018975744 bytes)
tcrypt: test 6 (192 bit key, 16 byte blocks): 3117489 ops/s ( 49879824 bytes)
tcrypt: test 7 (192 bit key, 64 byte blocks): 2922089 ops/s ( 187013696 bytes)
tcrypt: test 8 (192 bit key, 256 byte blocks): 2292023 ops/s ( 586757888 bytes)
tcrypt: test 9 (192 bit key, 1024 byte blocks): 1207942 ops/s (1236932608 bytes)
tcrypt: test 10 (192 bit key, 1472 byte blocks): 955598 ops/s (1406640256 bytes)
tcrypt: test 11 (192 bit key, 8192 byte blocks): 195198 ops/s (1599062016 bytes)
tcrypt: test 12 (256 bit key, 16 byte blocks): 3081935 ops/s ( 49310960 bytes)
tcrypt: test 13 (256 bit key, 64 byte blocks): 2883181 ops/s ( 184523584 bytes)
tcrypt: test 14 (256 bit key, 256 byte blocks): 2205147 ops/s ( 564517632 bytes)
tcrypt: test 15 (256 bit key, 1024 byte blocks): 1119468 ops/s (1146335232 bytes)
tcrypt: test 16 (256 bit key, 1472 byte blocks): 877017 ops/s (1290969024 bytes)
tcrypt: test 17 (256 bit key, 8192 byte blocks): 195255 ops/s (1599528960 bytes)


testing speed of async essiv(cbc(aes),aes,sha256) (essiv-cbc-aes-sha256-ce) encryption
tcrypt: test 0 (128 bit key, 16 byte blocks): 5037539 ops/s ( 80600624 bytes)
tcrypt: test 1 (128 bit key, 64 byte blocks): 3884302 ops/s (248595328 bytes)
tcrypt: test 2 (128 bit key, 256 byte blocks): 2014999 ops/s (515839744 bytes)
tcrypt: test 3 (128 bit key, 1024 byte blocks): 721147 ops/s (738454528 bytes)
tcrypt: test 4 (128 bit key, 1472 byte blocks): 525262 ops/s (773185664 bytes)
tcrypt: test 5 (128 bit key, 8192 byte blocks): 100453 ops/s (822910976 bytes)
tcrypt: test 6 (192 bit key, 16 byte blocks): 4972667 ops/s ( 79562672 bytes)
tcrypt: test 7 (192 bit key, 64 byte blocks): 3721788 ops/s (238194432 bytes)
tcrypt: test 8 (192 bit key, 256 byte blocks): 1835967 ops/s (470007552 bytes)
tcrypt: test 9 (192 bit key, 1024 byte blocks): 633524 ops/s (648728576 bytes)
tcrypt: test 10 (192 bit key, 1472 byte blocks): 458306 ops/s (674626432 bytes)
tcrypt: test 11 (192 bit key, 8192 byte blocks): 83595 ops/s (684810240 bytes)
tcrypt: test 12 (256 bit key, 16 byte blocks): 4975101 ops/s ( 79601616 bytes)
tcrypt: test 13 (256 bit key, 64 byte blocks): 3581137 ops/s (229192768 bytes)
tcrypt: test 14 (256 bit key, 256 byte blocks): 1741799 ops/s (445900544 bytes)
tcrypt: test 15 (256 bit key, 1024 byte blocks): 565340 ops/s (578908160 bytes)
tcrypt: test 16 (256 bit key, 1472 byte blocks): 407040 ops/s (599162880 bytes)
tcrypt: test 17 (256 bit key, 8192 byte blocks): 76092 ops/s (623345664 bytes)

testing speed of async essiv(cbc(aes),aes,sha256) (essiv-cbc-aes-sha256-ce) decryption
tcrypt: test 0 (128 bit key, 16 byte blocks): 5122947 ops/s ( 81967152 bytes)
tcrypt: test 1 (128 bit key, 64 byte blocks): 4546576 ops/s ( 290980864 bytes)
tcrypt: test 2 (128 bit key, 256 byte blocks): 3314744 ops/s ( 848574464 bytes)
tcrypt: test 3 (128 bit key, 1024 byte blocks): 1550823 ops/s (1588042752 bytes)
tcrypt: test 4 (128 bit key, 1472 byte blocks): 1197388 ops/s (1762555136 bytes)
tcrypt: test 5 (128 bit key, 8192 byte blocks): 253661 ops/s (2077990912 bytes)
tcrypt: test 6 (192 bit key, 16 byte blocks): 5040644 ops/s ( 80650304 bytes)
tcrypt: test 7 (192 bit key, 64 byte blocks): 4442490 ops/s ( 284319360 bytes)
tcrypt: test 8 (192 bit key, 256 byte blocks): 3138199 ops/s ( 803378944 bytes)
tcrypt: test 9 (192 bit key, 1024 byte blocks): 1406038 ops/s (1439782912 bytes)
tcrypt: test 10 (192 bit key, 1472 byte blocks): 1075658 ops/s (1583368576 bytes)
tcrypt: test 11 (192 bit key, 8192 byte blocks): 199652 ops/s (1635549184 bytes)
tcrypt: test 12 (256 bit key, 16 byte blocks): 4979432 ops/s ( 79670912 bytes)
tcrypt: test 13 (256 bit key, 64 byte blocks): 4394406 ops/s ( 281241984 bytes)
tcrypt: test 14 (256 bit key, 256 byte blocks): 2999511 ops/s ( 767874816 bytes)
tcrypt: test 15 (256 bit key, 1024 byte blocks): 1294498 ops/s (1325565952 bytes)
tcrypt: test 16 (256 bit key, 1472 byte blocks): 981009 ops/s (1444045248 bytes)
tcrypt: test 17 (256 bit key, 8192 byte blocks): 200463 ops/s (1642192896 bytes)

testing speed of async cbc(aes) (cbc-aes-ce) encryption
tcrypt: test 0 (128 bit key, 16 byte blocks): 5895884 ops/s ( 94334144 bytes)
tcrypt: test 1 (128 bit key, 64 byte blocks): 4347437 ops/s (278235968 bytes)
tcrypt: test 2 (128 bit key, 256 byte blocks): 2135454 ops/s (546676224 bytes)
tcrypt: test 3 (128 bit key, 1024 byte blocks): 736839 ops/s (754523136 bytes)
tcrypt: test 4 (128 bit key, 1472 byte blocks): 533261 ops/s (784960192 bytes)
tcrypt: test 5 (128 bit key, 8192 byte blocks): 100850 ops/s (826163200 bytes)
tcrypt: test 6 (192 bit key, 16 byte blocks): 5745691 ops/s ( 91931056 bytes)
tcrypt: test 7 (192 bit key, 64 byte blocks): 4113271 ops/s (263249344 bytes)
tcrypt: test 8 (192 bit key, 256 byte blocks): 1932208 ops/s (494645248 bytes)
tcrypt: test 9 (192 bit key, 1024 byte blocks): 644555 ops/s (660024320 bytes)
tcrypt: test 10 (192 bit key, 1472 byte blocks): 464237 ops/s (683356864 bytes)
tcrypt: test 11 (192 bit key, 8192 byte blocks): 84019 ops/s (688283648 bytes)
tcrypt: test 12 (256 bit key, 16 byte blocks): 5620065 ops/s ( 89921040 bytes)
tcrypt: test 13 (256 bit key, 64 byte blocks): 3982991 ops/s (254911424 bytes)
tcrypt: test 14 (256 bit key, 256 byte blocks): 1830587 ops/s (468630272 bytes)
tcrypt: test 15 (256 bit key, 1024 byte blocks): 576151 ops/s (589978624 bytes)
tcrypt: test 16 (256 bit key, 1472 byte blocks): 412487 ops/s (607180864 bytes)
tcrypt: test 17 (256 bit key, 8192 byte blocks): 76378 ops/s (625688576 bytes)

testing speed of async cbc(aes) (cbc-aes-ce) decryption
tcrypt: test 0 (128 bit key, 16 byte blocks): 5821314 ops/s ( 93141024 bytes)
tcrypt: test 1 (128 bit key, 64 byte blocks): 5248040 ops/s ( 335874560 bytes)
tcrypt: test 2 (128 bit key, 256 byte blocks): 3677701 ops/s ( 941491456 bytes)
tcrypt: test 3 (128 bit key, 1024 byte blocks): 1650808 ops/s (1690427392 bytes)
tcrypt: test 4 (128 bit key, 1472 byte blocks): 1256545 ops/s (1849634240 bytes)
tcrypt: test 5 (128 bit key, 8192 byte blocks): 257922 ops/s (2112897024 bytes)
tcrypt: test 6 (192 bit key, 16 byte blocks): 5690108 ops/s ( 91041728 bytes)
tcrypt: test 7 (192 bit key, 64 byte blocks): 5086441 ops/s ( 325532224 bytes)
tcrypt: test 8 (192 bit key, 256 byte blocks): 3447562 ops/s ( 882575872 bytes)
tcrypt: test 9 (192 bit key, 1024 byte blocks): 1490136 ops/s (1525899264 bytes)
tcrypt: test 10 (192 bit key, 1472 byte blocks): 1124620 ops/s (1655440640 bytes)
tcrypt: test 11 (192 bit key, 8192 byte blocks): 201222 ops/s (1648410624 bytes)
tcrypt: test 12 (256 bit key, 16 byte blocks): 5567247 ops/s ( 89075952 bytes)
tcrypt: test 13 (256 bit key, 64 byte blocks): 5050010 ops/s ( 323200640 bytes)
tcrypt: test 14 (256 bit key, 256 byte blocks): 3290422 ops/s ( 842348032 bytes)
tcrypt: test 15 (256 bit key, 1024 byte blocks): 1359439 ops/s (1392065536 bytes)
tcrypt: test 16 (256 bit key, 1472 byte blocks): 1017751 ops/s (1498129472 bytes)
tcrypt: test 17 (256 bit key, 8192 byte blocks): 201492 ops/s (1650622464 bytes)


2019-06-21 08:09:56

by Ard Biesheuvel

Subject: [PATCH v4 4/6] md: dm-crypt: switch to ESSIV crypto API template

From: Ard Biesheuvel <[email protected]>

Replace the explicit ESSIV handling in the dm-crypt driver with calls
into the crypto API, which can now perform this processing within the
crypto subsystem via the essiv template.
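
For example, an old-style table spec using 'aes-cbc-essiv:sha256' is
now translated into a name the crypto API can instantiate directly --
a sketch of the mapping performed by crypt_ctr_cipher_old() below,
using example values rather than a parsed table line:

  const char *chainmode = "cbc", *cipher = "aes", *ivopts = "sha256";
  char cipher_api[CRYPTO_MAX_ALG_NAME];

  snprintf(cipher_api, CRYPTO_MAX_ALG_NAME, "essiv(%s(%s),%s,%s)",
           chainmode, cipher, cipher, ivopts);
  /* -> "essiv(cbc(aes),aes,sha256)", handed to crypto_alloc_skcipher() */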

Signed-off-by: Ard Biesheuvel <[email protected]>
---
drivers/md/Kconfig | 1 +
drivers/md/dm-crypt.c | 208 +++-----------------
2 files changed, 31 insertions(+), 178 deletions(-)

diff --git a/drivers/md/Kconfig b/drivers/md/Kconfig
index 45254b3ef715..30ca87cf25db 100644
--- a/drivers/md/Kconfig
+++ b/drivers/md/Kconfig
@@ -271,6 +271,7 @@ config DM_CRYPT
depends on BLK_DEV_DM
select CRYPTO
select CRYPTO_CBC
+ select CRYPTO_ESSIV
---help---
This device-mapper target allows you to create a device that
transparently encrypts the data on it. You'll need to activate
diff --git a/drivers/md/dm-crypt.c b/drivers/md/dm-crypt.c
index f001f1104cb5..12d28880ec34 100644
--- a/drivers/md/dm-crypt.c
+++ b/drivers/md/dm-crypt.c
@@ -98,11 +98,6 @@ struct crypt_iv_operations {
struct dm_crypt_request *dmreq);
};

-struct iv_essiv_private {
- struct crypto_shash *hash_tfm;
- u8 *salt;
-};
-
struct iv_benbi_private {
int shift;
};
@@ -155,7 +150,6 @@ struct crypt_config {

const struct crypt_iv_operations *iv_gen_ops;
union {
- struct iv_essiv_private essiv;
struct iv_benbi_private benbi;
struct iv_lmk_private lmk;
struct iv_tcw_private tcw;
@@ -165,8 +159,6 @@ struct crypt_config {
unsigned short int sector_size;
unsigned char sector_shift;

- /* ESSIV: struct crypto_cipher *essiv_tfm */
- void *iv_private;
union {
struct crypto_skcipher **tfms;
struct crypto_aead **tfms_aead;
@@ -323,161 +315,6 @@ static int crypt_iv_plain64be_gen(struct crypt_config *cc, u8 *iv,
return 0;
}

-/* Initialise ESSIV - compute salt but no local memory allocations */
-static int crypt_iv_essiv_init(struct crypt_config *cc)
-{
- struct iv_essiv_private *essiv = &cc->iv_gen_private.essiv;
- SHASH_DESC_ON_STACK(desc, essiv->hash_tfm);
- struct crypto_cipher *essiv_tfm;
- int err;
-
- desc->tfm = essiv->hash_tfm;
-
- err = crypto_shash_digest(desc, cc->key, cc->key_size, essiv->salt);
- shash_desc_zero(desc);
- if (err)
- return err;
-
- essiv_tfm = cc->iv_private;
-
- err = crypto_cipher_setkey(essiv_tfm, essiv->salt,
- crypto_shash_digestsize(essiv->hash_tfm));
- if (err)
- return err;
-
- return 0;
-}
-
-/* Wipe salt and reset key derived from volume key */
-static int crypt_iv_essiv_wipe(struct crypt_config *cc)
-{
- struct iv_essiv_private *essiv = &cc->iv_gen_private.essiv;
- unsigned salt_size = crypto_shash_digestsize(essiv->hash_tfm);
- struct crypto_cipher *essiv_tfm;
- int r, err = 0;
-
- memset(essiv->salt, 0, salt_size);
-
- essiv_tfm = cc->iv_private;
- r = crypto_cipher_setkey(essiv_tfm, essiv->salt, salt_size);
- if (r)
- err = r;
-
- return err;
-}
-
-/* Allocate the cipher for ESSIV */
-static struct crypto_cipher *alloc_essiv_cipher(struct crypt_config *cc,
- struct dm_target *ti,
- const u8 *salt,
- unsigned int saltsize)
-{
- struct crypto_cipher *essiv_tfm;
- int err;
-
- /* Setup the essiv_tfm with the given salt */
- essiv_tfm = crypto_alloc_cipher(cc->cipher, 0, 0);
- if (IS_ERR(essiv_tfm)) {
- ti->error = "Error allocating crypto tfm for ESSIV";
- return essiv_tfm;
- }
-
- if (crypto_cipher_blocksize(essiv_tfm) != cc->iv_size) {
- ti->error = "Block size of ESSIV cipher does "
- "not match IV size of block cipher";
- crypto_free_cipher(essiv_tfm);
- return ERR_PTR(-EINVAL);
- }
-
- err = crypto_cipher_setkey(essiv_tfm, salt, saltsize);
- if (err) {
- ti->error = "Failed to set key for ESSIV cipher";
- crypto_free_cipher(essiv_tfm);
- return ERR_PTR(err);
- }
-
- return essiv_tfm;
-}
-
-static void crypt_iv_essiv_dtr(struct crypt_config *cc)
-{
- struct crypto_cipher *essiv_tfm;
- struct iv_essiv_private *essiv = &cc->iv_gen_private.essiv;
-
- crypto_free_shash(essiv->hash_tfm);
- essiv->hash_tfm = NULL;
-
- kzfree(essiv->salt);
- essiv->salt = NULL;
-
- essiv_tfm = cc->iv_private;
-
- if (essiv_tfm)
- crypto_free_cipher(essiv_tfm);
-
- cc->iv_private = NULL;
-}
-
-static int crypt_iv_essiv_ctr(struct crypt_config *cc, struct dm_target *ti,
- const char *opts)
-{
- struct crypto_cipher *essiv_tfm = NULL;
- struct crypto_shash *hash_tfm = NULL;
- u8 *salt = NULL;
- int err;
-
- if (!opts) {
- ti->error = "Digest algorithm missing for ESSIV mode";
- return -EINVAL;
- }
-
- /* Allocate hash algorithm */
- hash_tfm = crypto_alloc_shash(opts, 0, 0);
- if (IS_ERR(hash_tfm)) {
- ti->error = "Error initializing ESSIV hash";
- err = PTR_ERR(hash_tfm);
- goto bad;
- }
-
- salt = kzalloc(crypto_shash_digestsize(hash_tfm), GFP_KERNEL);
- if (!salt) {
- ti->error = "Error kmallocing salt storage in ESSIV";
- err = -ENOMEM;
- goto bad;
- }
-
- cc->iv_gen_private.essiv.salt = salt;
- cc->iv_gen_private.essiv.hash_tfm = hash_tfm;
-
- essiv_tfm = alloc_essiv_cipher(cc, ti, salt,
- crypto_shash_digestsize(hash_tfm));
- if (IS_ERR(essiv_tfm)) {
- crypt_iv_essiv_dtr(cc);
- return PTR_ERR(essiv_tfm);
- }
- cc->iv_private = essiv_tfm;
-
- return 0;
-
-bad:
- if (hash_tfm && !IS_ERR(hash_tfm))
- crypto_free_shash(hash_tfm);
- kfree(salt);
- return err;
-}
-
-static int crypt_iv_essiv_gen(struct crypt_config *cc, u8 *iv,
- struct dm_crypt_request *dmreq)
-{
- struct crypto_cipher *essiv_tfm = cc->iv_private;
-
- memset(iv, 0, cc->iv_size);
- *(__le64 *)iv = cpu_to_le64(dmreq->iv_sector);
- crypto_cipher_encrypt_one(essiv_tfm, iv, iv);
-
- return 0;
-}
-
static int crypt_iv_benbi_ctr(struct crypt_config *cc, struct dm_target *ti,
const char *opts)
{
@@ -853,14 +690,6 @@ static const struct crypt_iv_operations crypt_iv_plain64be_ops = {
.generator = crypt_iv_plain64be_gen
};

-static const struct crypt_iv_operations crypt_iv_essiv_ops = {
- .ctr = crypt_iv_essiv_ctr,
- .dtr = crypt_iv_essiv_dtr,
- .init = crypt_iv_essiv_init,
- .wipe = crypt_iv_essiv_wipe,
- .generator = crypt_iv_essiv_gen
-};
-
static const struct crypt_iv_operations crypt_iv_benbi_ops = {
.ctr = crypt_iv_benbi_ctr,
.dtr = crypt_iv_benbi_dtr,
@@ -2283,7 +2112,7 @@ static int crypt_ctr_ivmode(struct dm_target *ti, const char *ivmode)
else if (strcmp(ivmode, "plain64be") == 0)
cc->iv_gen_ops = &crypt_iv_plain64be_ops;
else if (strcmp(ivmode, "essiv") == 0)
- cc->iv_gen_ops = &crypt_iv_essiv_ops;
+ cc->iv_gen_ops = &crypt_iv_plain64_ops;
else if (strcmp(ivmode, "benbi") == 0)
cc->iv_gen_ops = &crypt_iv_benbi_ops;
else if (strcmp(ivmode, "null") == 0)
@@ -2397,7 +2226,7 @@ static int crypt_ctr_cipher_new(struct dm_target *ti, char *cipher_in, char *key
char **ivmode, char **ivopts)
{
struct crypt_config *cc = ti->private;
- char *tmp, *cipher_api;
+ char *tmp, *cipher_api, buf[CRYPTO_MAX_ALG_NAME];
int ret = -EINVAL;

cc->tfms_count = 1;
@@ -2435,9 +2264,19 @@ static int crypt_ctr_cipher_new(struct dm_target *ti, char *cipher_in, char *key
}

ret = crypt_ctr_blkdev_cipher(cc, cipher_api);
- if (ret < 0) {
- ti->error = "Cannot allocate cipher string";
- return -ENOMEM;
+ if (ret < 0)
+ goto bad_mem;
+
+ if (*ivmode && !strcmp(*ivmode, "essiv")) {
+ if (!*ivopts) {
+ ti->error = "Digest algorithm missing for ESSIV mode";
+ return -EINVAL;
+ }
+ ret = snprintf(buf, CRYPTO_MAX_ALG_NAME, "essiv(%s,%s,%s)",
+ cipher_api, cc->cipher, *ivopts);
+ if (ret < 0)
+ goto bad_mem;
+ cipher_api = buf;
}

cc->key_parts = cc->tfms_count;
@@ -2456,6 +2295,9 @@ static int crypt_ctr_cipher_new(struct dm_target *ti, char *cipher_in, char *key
cc->iv_size = crypto_skcipher_ivsize(any_tfm(cc));

return 0;
+bad_mem:
+ ti->error = "Cannot allocate cipher string";
+ return -ENOMEM;
}

static int crypt_ctr_cipher_old(struct dm_target *ti, char *cipher_in, char *key,
@@ -2515,8 +2357,18 @@ static int crypt_ctr_cipher_old(struct dm_target *ti, char *cipher_in, char *key
if (!cipher_api)
goto bad_mem;

- ret = snprintf(cipher_api, CRYPTO_MAX_ALG_NAME,
- "%s(%s)", chainmode, cipher);
+ if (*ivmode && !strcmp(*ivmode, "essiv")) {
+ if (!*ivopts) {
+ ti->error = "Digest algorithm missing for ESSIV mode";
+ return -EINVAL;
+ }
+ ret = snprintf(cipher_api, CRYPTO_MAX_ALG_NAME,
+ "essiv(%s(%s),%s,%s)", chainmode, cipher,
+ cipher, *ivopts);
+ } else {
+ ret = snprintf(cipher_api, CRYPTO_MAX_ALG_NAME,
+ "%s(%s)", chainmode, cipher);
+ }
if (ret < 0) {
kfree(cipher_api);
goto bad_mem;
--
2.17.1

2019-06-21 08:09:56

by Ard Biesheuvel

Subject: [PATCH v4 2/6] fs: crypto: invoke crypto API for ESSIV handling

From: Ard Biesheuvel <[email protected]>

Instead of open coding the calculations for ESSIV handling, use an
ESSIV skcipher which does all of this under the hood.
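
The net effect on IV generation -- a sketch, assuming the union
fscrypt_iv layout from fscrypt_private.h -- is that
fscrypt_generate_iv() now only fills in the little-endian block
number, and the essiv skcipher encrypts the IV internally:

  /* cipher_str is now "essiv(cbc(aes),aes,sha256)", ivsize is 8 */
  memset(iv, 0, ci->ci_mode->ivsize);
  iv->lblk_num = cpu_to_le64(lblk_num);
  /* no crypto_cipher_encrypt_one(ci->ci_essiv_tfm, ...) needed here;
     the template encrypts the IV with AES keyed by the SHA-256 of the key */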

Signed-off-by: Ard Biesheuvel <[email protected]>
---
fs/crypto/Kconfig | 1 +
fs/crypto/crypto.c | 5 --
fs/crypto/fscrypt_private.h | 9 --
fs/crypto/keyinfo.c | 88 +-------------------
4 files changed, 3 insertions(+), 100 deletions(-)

diff --git a/fs/crypto/Kconfig b/fs/crypto/Kconfig
index 24ed99e2eca0..b0292da8613c 100644
--- a/fs/crypto/Kconfig
+++ b/fs/crypto/Kconfig
@@ -5,6 +5,7 @@ config FS_ENCRYPTION
select CRYPTO_AES
select CRYPTO_CBC
select CRYPTO_ECB
+ imply CRYPTO_ESSIV
select CRYPTO_XTS
select CRYPTO_CTS
select CRYPTO_SHA256
diff --git a/fs/crypto/crypto.c b/fs/crypto/crypto.c
index 335a362ee446..c53ce262a06c 100644
--- a/fs/crypto/crypto.c
+++ b/fs/crypto/crypto.c
@@ -136,9 +136,6 @@ void fscrypt_generate_iv(union fscrypt_iv *iv, u64 lblk_num,

if (ci->ci_flags & FS_POLICY_FLAG_DIRECT_KEY)
memcpy(iv->nonce, ci->ci_nonce, FS_KEY_DERIVATION_NONCE_SIZE);
-
- if (ci->ci_essiv_tfm != NULL)
- crypto_cipher_encrypt_one(ci->ci_essiv_tfm, iv->raw, iv->raw);
}

int fscrypt_do_page_crypto(const struct inode *inode, fscrypt_direction_t rw,
@@ -492,8 +489,6 @@ static void __exit fscrypt_exit(void)
destroy_workqueue(fscrypt_read_workqueue);
kmem_cache_destroy(fscrypt_ctx_cachep);
kmem_cache_destroy(fscrypt_info_cachep);
-
- fscrypt_essiv_cleanup();
}
module_exit(fscrypt_exit);

diff --git a/fs/crypto/fscrypt_private.h b/fs/crypto/fscrypt_private.h
index 7da276159593..59d0cba9cfb9 100644
--- a/fs/crypto/fscrypt_private.h
+++ b/fs/crypto/fscrypt_private.h
@@ -61,12 +61,6 @@ struct fscrypt_info {
/* The actual crypto transform used for encryption and decryption */
struct crypto_skcipher *ci_ctfm;

- /*
- * Cipher for ESSIV IV generation. Only set for CBC contents
- * encryption, otherwise is NULL.
- */
- struct crypto_cipher *ci_essiv_tfm;
-
/*
* Encryption mode used for this inode. It corresponds to either
* ci_data_mode or ci_filename_mode, depending on the inode type.
@@ -166,9 +160,6 @@ struct fscrypt_mode {
int keysize;
int ivsize;
bool logged_impl_name;
- bool needs_essiv;
};

-extern void __exit fscrypt_essiv_cleanup(void);
-
#endif /* _FSCRYPT_PRIVATE_H */
diff --git a/fs/crypto/keyinfo.c b/fs/crypto/keyinfo.c
index dcd91a3fbe49..82c7eb86ca00 100644
--- a/fs/crypto/keyinfo.c
+++ b/fs/crypto/keyinfo.c
@@ -19,8 +19,6 @@
#include <crypto/skcipher.h>
#include "fscrypt_private.h"

-static struct crypto_shash *essiv_hash_tfm;
-
/* Table of keys referenced by FS_POLICY_FLAG_DIRECT_KEY policies */
static DEFINE_HASHTABLE(fscrypt_master_keys, 6); /* 6 bits = 64 buckets */
static DEFINE_SPINLOCK(fscrypt_master_keys_lock);
@@ -144,10 +142,9 @@ static struct fscrypt_mode available_modes[] = {
},
[FS_ENCRYPTION_MODE_AES_128_CBC] = {
.friendly_name = "AES-128-CBC",
- .cipher_str = "cbc(aes)",
+ .cipher_str = "essiv(cbc(aes),aes,sha256)",
.keysize = 16,
- .ivsize = 16,
- .needs_essiv = true,
+ .ivsize = 8,
},
[FS_ENCRYPTION_MODE_AES_128_CTS] = {
.friendly_name = "AES-128-CTS-CBC",
@@ -377,72 +374,6 @@ fscrypt_get_master_key(const struct fscrypt_info *ci, struct fscrypt_mode *mode,
return ERR_PTR(err);
}

-static int derive_essiv_salt(const u8 *key, int keysize, u8 *salt)
-{
- struct crypto_shash *tfm = READ_ONCE(essiv_hash_tfm);
-
- /* init hash transform on demand */
- if (unlikely(!tfm)) {
- struct crypto_shash *prev_tfm;
-
- tfm = crypto_alloc_shash("sha256", 0, 0);
- if (IS_ERR(tfm)) {
- fscrypt_warn(NULL,
- "error allocating SHA-256 transform: %ld",
- PTR_ERR(tfm));
- return PTR_ERR(tfm);
- }
- prev_tfm = cmpxchg(&essiv_hash_tfm, NULL, tfm);
- if (prev_tfm) {
- crypto_free_shash(tfm);
- tfm = prev_tfm;
- }
- }
-
- {
- SHASH_DESC_ON_STACK(desc, tfm);
- desc->tfm = tfm;
-
- return crypto_shash_digest(desc, key, keysize, salt);
- }
-}
-
-static int init_essiv_generator(struct fscrypt_info *ci, const u8 *raw_key,
- int keysize)
-{
- int err;
- struct crypto_cipher *essiv_tfm;
- u8 salt[SHA256_DIGEST_SIZE];
-
- essiv_tfm = crypto_alloc_cipher("aes", 0, 0);
- if (IS_ERR(essiv_tfm))
- return PTR_ERR(essiv_tfm);
-
- ci->ci_essiv_tfm = essiv_tfm;
-
- err = derive_essiv_salt(raw_key, keysize, salt);
- if (err)
- goto out;
-
- /*
- * Using SHA256 to derive the salt/key will result in AES-256 being
- * used for IV generation. File contents encryption will still use the
- * configured keysize (AES-128) nevertheless.
- */
- err = crypto_cipher_setkey(essiv_tfm, salt, sizeof(salt));
- if (err)
- goto out;
-
-out:
- memzero_explicit(salt, sizeof(salt));
- return err;
-}
-
-void __exit fscrypt_essiv_cleanup(void)
-{
- crypto_free_shash(essiv_hash_tfm);
-}
-
/*
* Given the encryption mode and key (normally the derived key, but for
* FS_POLICY_FLAG_DIRECT_KEY mode it's the master key), set up the inode's
@@ -454,7 +385,6 @@ static int setup_crypto_transform(struct fscrypt_info *ci,
{
struct fscrypt_master_key *mk;
struct crypto_skcipher *ctfm;
- int err;

if (ci->ci_flags & FS_POLICY_FLAG_DIRECT_KEY) {
mk = fscrypt_get_master_key(ci, mode, raw_key, inode);
@@ -470,19 +400,6 @@ static int setup_crypto_transform(struct fscrypt_info *ci,
ci->ci_master_key = mk;
ci->ci_ctfm = ctfm;

- if (mode->needs_essiv) {
- /* ESSIV implies 16-byte IVs which implies !DIRECT_KEY */
- WARN_ON(mode->ivsize != AES_BLOCK_SIZE);
- WARN_ON(ci->ci_flags & FS_POLICY_FLAG_DIRECT_KEY);
-
- err = init_essiv_generator(ci, raw_key, mode->keysize);
- if (err) {
- fscrypt_warn(inode->i_sb,
- "error initializing ESSIV generator for inode %lu: %d",
- inode->i_ino, err);
- return err;
- }
- }
return 0;
}

@@ -495,7 +412,6 @@ static void put_crypt_info(struct fscrypt_info *ci)
put_master_key(ci->ci_master_key);
} else {
crypto_free_skcipher(ci->ci_ctfm);
- crypto_free_cipher(ci->ci_essiv_tfm);
}
kmem_cache_free(fscrypt_info_cachep, ci);
}
--
2.17.1

2019-06-21 08:09:56

by Ard Biesheuvel

Subject: [PATCH v4 1/6] crypto: essiv - create wrapper template for ESSIV generation

From: Ard Biesheuvel <[email protected]>

Implement a template that wraps a (skcipher,cipher,shash) or
(aead,cipher,shash) tuple so that we can consolidate the ESSIV handling
in fscrypt and dm-crypt and move it into the crypto API. This will result
in better test coverage, and will allow future changes to make the bare
cipher interface internal to the crypto subsystem, in order to increase
robustness of the API against misuse.
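
In a nutshell, the construction is IV = E(H(K), sector): hash the
encryption key, key the bare cipher with the digest, and encrypt the
zero-padded little-endian sector number. Condensed into a single
helper for illustration (the function name is made up; the actual code
below splits this between setkey and the request path):

  static int essiv_derive_iv(struct crypto_shash *hash,
                             struct crypto_cipher *essiv_tfm,
                             const u8 *key, unsigned int keylen,
                             u64 sector, u8 *iv, unsigned int ivsize)
  {
          SHASH_DESC_ON_STACK(desc, hash);
          u8 salt[HASH_MAX_DIGESTSIZE];
          int err;

          desc->tfm = hash;
          err = crypto_shash_digest(desc, key, keylen, salt); /* salt = H(K) */
          if (err)
                  return err;

          err = crypto_cipher_setkey(essiv_tfm, salt,
                                     crypto_shash_digestsize(hash));
          if (err)
                  return err;

          memset(iv, 0, ivsize);
          *(__le64 *)iv = cpu_to_le64(sector);          /* LE sector number */
          crypto_cipher_encrypt_one(essiv_tfm, iv, iv); /* IV = E(salt, sector) */
          return 0;
  }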

Signed-off-by: Ard Biesheuvel <[email protected]>
---
crypto/Kconfig | 4 +
crypto/Makefile | 1 +
crypto/essiv.c | 639 ++++++++++++++++++++
3 files changed, 644 insertions(+)

diff --git a/crypto/Kconfig b/crypto/Kconfig
index 3d056e7da65f..1aa47087c1a2 100644
--- a/crypto/Kconfig
+++ b/crypto/Kconfig
@@ -1917,6 +1917,10 @@ config CRYPTO_STATS
config CRYPTO_HASH_INFO
bool

+config CRYPTO_ESSIV
+ tristate
+ select CRYPTO_AUTHENC
+
source "drivers/crypto/Kconfig"
source "crypto/asymmetric_keys/Kconfig"
source "certs/Kconfig"
diff --git a/crypto/Makefile b/crypto/Makefile
index 266a4cdbb9e2..ad1d99ba6d56 100644
--- a/crypto/Makefile
+++ b/crypto/Makefile
@@ -148,6 +148,7 @@ obj-$(CONFIG_CRYPTO_USER_API_AEAD) += algif_aead.o
obj-$(CONFIG_CRYPTO_ZSTD) += zstd.o
obj-$(CONFIG_CRYPTO_OFB) += ofb.o
obj-$(CONFIG_CRYPTO_ECC) += ecc.o
+obj-$(CONFIG_CRYPTO_ESSIV) += essiv.o

ecdh_generic-y += ecdh.o
ecdh_generic-y += ecdh_helper.o
diff --git a/crypto/essiv.c b/crypto/essiv.c
new file mode 100644
index 000000000000..8e80814ec7d6
--- /dev/null
+++ b/crypto/essiv.c
@@ -0,0 +1,639 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * ESSIV skcipher and aead template for block encryption
+ *
+ * This template encapsulates the ESSIV IV generation algorithm used by
+ * dm-crypt and fscrypt, which converts a 64-bit sector number in little
+ * endian representation into an initial vector for the skcipher used for
+ * block encryption, by encrypting it using the hash of the skcipher key
+ * as encryption key.
+ *
+ * The typical use of this template is to instantiate the skcipher
+ * 'essiv(cbc(aes),aes,sha256)', which is the only instantiation used by
+ * fscrypt, and the most relevant one for dm-crypt. However, dm-crypt
+ * also permits ESSIV to be used in combination with the authenc template,
+ * e.g., 'essiv(authenc(hmac(sha256),cbc(aes)),aes,sha256)', in which case
+ * we need to instantiate an aead that accepts the same special key format
+ * as the authenc template, and deals with the way the encrypted IV is
+ * embedded into the AAD area of the aead request. This means the AEAD
+ * flavor produced by this template is tightly coupled to the way dm-crypt
+ * happens to use it.
+ *
+ * Copyright (c) 2019 Linaro, Ltd. <[email protected]>
+ *
+ * Heavily based on:
+ * adiantum length-preserving encryption mode
+ *
+ * Copyright 2018 Google LLC
+ */
+
+#include <crypto/authenc.h>
+#include <crypto/internal/aead.h>
+#include <crypto/internal/hash.h>
+#include <crypto/internal/skcipher.h>
+#include <crypto/scatterwalk.h>
+#include <linux/module.h>
+
+#include "internal.h"
+
+#define ESSIV_IV_SIZE sizeof(__le64) // IV size of the outer algo
+#define MAX_INNER_IV_SIZE 16 // max IV size of inner algo
+
+struct essiv_instance_ctx {
+ union {
+ struct crypto_skcipher_spawn skcipher_spawn;
+ struct crypto_aead_spawn aead_spawn;
+ } u;
+ struct crypto_spawn essiv_cipher_spawn;
+ struct crypto_shash_spawn hash_spawn;
+};
+
+struct essiv_tfm_ctx {
+ union {
+ struct crypto_skcipher *skcipher;
+ struct crypto_aead *aead;
+ } u;
+ struct crypto_cipher *essiv_cipher;
+ struct crypto_shash *hash;
+};
+
+struct essiv_skcipher_request_ctx {
+ u8 iv[MAX_INNER_IV_SIZE];
+ struct skcipher_request skcipher_req;
+};
+
+struct essiv_aead_request_ctx {
+ u8 iv[2][MAX_INNER_IV_SIZE];
+ struct scatterlist src[4], dst[4];
+ struct aead_request aead_req;
+};
+
+static int essiv_skcipher_setkey(struct crypto_skcipher *tfm,
+ const u8 *key, unsigned int keylen)
+{
+ struct essiv_tfm_ctx *tctx = crypto_skcipher_ctx(tfm);
+ SHASH_DESC_ON_STACK(desc, tctx->hash);
+ u8 salt[HASH_MAX_DIGESTSIZE];
+ int err;
+
+ crypto_skcipher_clear_flags(tctx->u.skcipher, CRYPTO_TFM_REQ_MASK);
+ crypto_skcipher_set_flags(tctx->u.skcipher,
+ crypto_skcipher_get_flags(tfm) &
+ CRYPTO_TFM_REQ_MASK);
+ err = crypto_skcipher_setkey(tctx->u.skcipher, key, keylen);
+ crypto_skcipher_set_flags(tfm,
+ crypto_skcipher_get_flags(tctx->u.skcipher) &
+ CRYPTO_TFM_RES_MASK);
+ if (err)
+ return err;
+
+ desc->tfm = tctx->hash;
+ err = crypto_shash_digest(desc, key, keylen, salt);
+ if (err)
+ return err;
+
+ crypto_cipher_clear_flags(tctx->essiv_cipher, CRYPTO_TFM_REQ_MASK);
+ crypto_cipher_set_flags(tctx->essiv_cipher,
+ crypto_skcipher_get_flags(tfm) &
+ CRYPTO_TFM_REQ_MASK);
+ err = crypto_cipher_setkey(tctx->essiv_cipher, salt,
+ crypto_shash_digestsize(tctx->hash));
+ crypto_skcipher_set_flags(tfm,
+ crypto_cipher_get_flags(tctx->essiv_cipher) &
+ CRYPTO_TFM_RES_MASK);
+
+ return err;
+}
+
+static int essiv_aead_setkey(struct crypto_aead *tfm, const u8 *key,
+ unsigned int keylen)
+{
+ struct essiv_tfm_ctx *tctx = crypto_aead_ctx(tfm);
+ SHASH_DESC_ON_STACK(desc, tctx->hash);
+ struct crypto_authenc_keys keys;
+ u8 salt[HASH_MAX_DIGESTSIZE];
+ int err;
+
+ crypto_aead_clear_flags(tctx->u.aead, CRYPTO_TFM_REQ_MASK);
+ crypto_aead_set_flags(tctx->u.aead, crypto_aead_get_flags(tfm) &
+ CRYPTO_TFM_REQ_MASK);
+ err = crypto_aead_setkey(tctx->u.aead, key, keylen);
+ crypto_aead_set_flags(tfm, crypto_aead_get_flags(tctx->u.aead) &
+ CRYPTO_TFM_RES_MASK);
+ if (err)
+ return err;
+
+ if (crypto_authenc_extractkeys(&keys, key, keylen) != 0) {
+ crypto_aead_set_flags(tfm, CRYPTO_TFM_RES_BAD_KEY_LEN);
+ return -EINVAL;
+ }
+
+ desc->tfm = tctx->hash;
+ err = crypto_shash_init(desc) ?:
+ crypto_shash_update(desc, keys.enckey, keys.enckeylen) ?:
+ crypto_shash_finup(desc, keys.authkey, keys.authkeylen, salt);
+ if (err)
+ return err;
+
+ crypto_cipher_clear_flags(tctx->essiv_cipher, CRYPTO_TFM_REQ_MASK);
+ crypto_cipher_set_flags(tctx->essiv_cipher, crypto_aead_get_flags(tfm) &
+ CRYPTO_TFM_REQ_MASK);
+ err = crypto_cipher_setkey(tctx->essiv_cipher, salt,
+ crypto_shash_digestsize(tctx->hash));
+ crypto_aead_set_flags(tfm, crypto_cipher_get_flags(tctx->essiv_cipher) &
+ CRYPTO_TFM_RES_MASK);
+
+ return err;
+}
+
+static int essiv_aead_setauthsize(struct crypto_aead *tfm,
+ unsigned int authsize)
+{
+ struct essiv_tfm_ctx *tctx = crypto_aead_ctx(tfm);
+
+ return crypto_aead_setauthsize(tctx->u.aead, authsize);
+}
+
+static void essiv_skcipher_done(struct crypto_async_request *areq, int err)
+{
+ struct skcipher_request *req = areq->data;
+
+ skcipher_request_complete(req, err);
+}
+
+static void essiv_skcipher_prepare_subreq(struct skcipher_request *req)
+{
+ struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
+ const struct essiv_tfm_ctx *tctx = crypto_skcipher_ctx(tfm);
+ struct essiv_skcipher_request_ctx *rctx = skcipher_request_ctx(req);
+ struct skcipher_request *subreq = &rctx->skcipher_req;
+
+ memset(rctx->iv, 0, crypto_cipher_blocksize(tctx->essiv_cipher));
+ memcpy(rctx->iv, req->iv, crypto_skcipher_ivsize(tfm));
+
+ crypto_cipher_encrypt_one(tctx->essiv_cipher, rctx->iv, rctx->iv);
+
+ skcipher_request_set_tfm(subreq, tctx->u.skcipher);
+ skcipher_request_set_crypt(subreq, req->src, req->dst, req->cryptlen,
+ rctx->iv);
+ skcipher_request_set_callback(subreq, skcipher_request_flags(req),
+ essiv_skcipher_done, req);
+}
+
+static int essiv_skcipher_encrypt(struct skcipher_request *req)
+{
+ struct essiv_skcipher_request_ctx *rctx = skcipher_request_ctx(req);
+
+ essiv_skcipher_prepare_subreq(req);
+ return crypto_skcipher_encrypt(&rctx->skcipher_req);
+}
+
+static int essiv_skcipher_decrypt(struct skcipher_request *req)
+{
+ struct essiv_skcipher_request_ctx *rctx = skcipher_request_ctx(req);
+
+ essiv_skcipher_prepare_subreq(req);
+ return crypto_skcipher_decrypt(&rctx->skcipher_req);
+}
+
+static void essiv_aead_done(struct crypto_async_request *areq, int err)
+{
+ struct aead_request *req = areq->data;
+
+ aead_request_complete(req, err);
+}
+
+static int essiv_aead_prepare_subreq(struct aead_request *req)
+{
+ struct crypto_aead *tfm = crypto_aead_reqtfm(req);
+ const struct essiv_tfm_ctx *tctx = crypto_aead_ctx(tfm);
+ struct essiv_aead_request_ctx *rctx = aead_request_ctx(req);
+ int ivsize = crypto_cipher_blocksize(tctx->essiv_cipher);
+ int ssize = req->assoclen - crypto_aead_ivsize(tfm);
+ struct aead_request *subreq = &rctx->aead_req;
+ struct scatterlist *sg;
+
+ /*
+ * dm-crypt embeds the sector number and the IV in the AAD region so we
+ * have to splice the converted IV into the subrequest that we pass on
+ * to the AEAD transform. This means we are tightly coupled to dm-crypt,
+ * but that should be the only user of this code in AEAD mode.
+ */
+ if (ssize < 0 || sg_nents_for_len(req->src, ssize) != 1)
+ return -EINVAL;
+
+ memset(rctx->iv[0], 0, ivsize);
+ memcpy(rctx->iv[0], req->iv, crypto_aead_ivsize(tfm));
+
+ crypto_cipher_encrypt_one(tctx->essiv_cipher, rctx->iv[0], rctx->iv[0]);
+
+ sg_init_table(rctx->src, 4);
+ sg_set_page(rctx->src, sg_page(req->src), ssize, req->src->offset);
+ sg_set_buf(rctx->src + 1, rctx->iv[0], ivsize);
+ sg = scatterwalk_ffwd(rctx->src + 2, req->src, req->assoclen);
+ if (sg != rctx->src + 2)
+ sg_chain(rctx->src, 3, sg);
+
+ sg_init_table(rctx->dst, 4);
+ sg_set_page(rctx->dst, sg_page(req->dst), ssize, req->dst->offset);
+ sg_set_buf(rctx->dst + 1, rctx->iv[1], ivsize);
+ sg = scatterwalk_ffwd(rctx->dst + 2, req->dst, req->assoclen);
+ if (sg != rctx->dst + 2)
+ sg_chain(rctx->dst, 3, sg);
+
+ aead_request_set_tfm(subreq, tctx->u.aead);
+ aead_request_set_crypt(subreq, rctx->src, rctx->dst, req->cryptlen,
+ rctx->iv[0]);
+ aead_request_set_ad(subreq, ssize + ivsize);
+ aead_request_set_callback(subreq, aead_request_flags(req),
+ essiv_aead_done, req);
+
+ return 0;
+}
+
+static int essiv_aead_encrypt(struct aead_request *req)
+{
+ struct essiv_aead_request_ctx *rctx = aead_request_ctx(req);
+
+ return essiv_aead_prepare_subreq(req) ?:
+ crypto_aead_encrypt(&rctx->aead_req);
+}
+
+static int essiv_aead_decrypt(struct aead_request *req)
+{
+ struct essiv_aead_request_ctx *rctx = aead_request_ctx(req);
+
+ return essiv_aead_prepare_subreq(req) ?:
+ crypto_aead_decrypt(&rctx->aead_req);
+}
+
+static int essiv_init_tfm(struct essiv_instance_ctx *ictx,
+ struct essiv_tfm_ctx *tctx)
+{
+ struct crypto_cipher *essiv_cipher;
+ struct crypto_shash *hash;
+ int err;
+
+ essiv_cipher = crypto_spawn_cipher(&ictx->essiv_cipher_spawn);
+ if (IS_ERR(essiv_cipher))
+ return PTR_ERR(essiv_cipher);
+
+ hash = crypto_spawn_shash(&ictx->hash_spawn);
+ if (IS_ERR(hash)) {
+ err = PTR_ERR(hash);
+ goto err_free_essiv_cipher;
+ }
+
+ tctx->essiv_cipher = essiv_cipher;
+ tctx->hash = hash;
+
+ return 0;
+
+err_free_essiv_cipher:
+ crypto_free_cipher(essiv_cipher);
+ return err;
+}
+
+static int essiv_skcipher_init_tfm(struct crypto_skcipher *tfm)
+{
+ struct skcipher_instance *inst = skcipher_alg_instance(tfm);
+ struct essiv_instance_ctx *ictx = skcipher_instance_ctx(inst);
+ struct essiv_tfm_ctx *tctx = crypto_skcipher_ctx(tfm);
+ struct crypto_skcipher *skcipher;
+ unsigned int subreq_size;
+ int err;
+
+ BUILD_BUG_ON(offsetofend(struct essiv_skcipher_request_ctx,
+ skcipher_req) !=
+ sizeof(struct essiv_skcipher_request_ctx));
+
+ skcipher = crypto_spawn_skcipher(&ictx->u.skcipher_spawn);
+ if (IS_ERR(skcipher))
+ return PTR_ERR(skcipher);
+
+ subreq_size = FIELD_SIZEOF(struct essiv_skcipher_request_ctx,
+ skcipher_req) +
+ crypto_skcipher_reqsize(skcipher);
+
+ crypto_skcipher_set_reqsize(tfm,
+ offsetof(struct essiv_skcipher_request_ctx,
+ skcipher_req) + subreq_size);
+
+ err = essiv_init_tfm(ictx, tctx);
+ if (err) {
+ crypto_free_skcipher(skcipher);
+ return err;
+ }
+
+ tctx->u.skcipher = skcipher;
+ return 0;
+}
+
+static int essiv_aead_init_tfm(struct crypto_aead *tfm)
+{
+ struct aead_instance *inst = aead_alg_instance(tfm);
+ struct essiv_instance_ctx *ictx = aead_instance_ctx(inst);
+ struct essiv_tfm_ctx *tctx = crypto_aead_ctx(tfm);
+ struct crypto_aead *aead;
+ unsigned int subreq_size;
+ int err;
+
+ BUILD_BUG_ON(offsetofend(struct essiv_aead_request_ctx, aead_req) !=
+ sizeof(struct essiv_aead_request_ctx));
+
+ aead = crypto_spawn_aead(&ictx->u.aead_spawn);
+ if (IS_ERR(aead))
+ return PTR_ERR(aead);
+
+ subreq_size = FIELD_SIZEOF(struct essiv_aead_request_ctx, aead_req) +
+ crypto_aead_reqsize(aead);
+
+ crypto_aead_set_reqsize(tfm, offsetof(struct essiv_aead_request_ctx,
+ aead_req) + subreq_size);
+
+ err = essiv_init_tfm(ictx, tctx);
+ if (err) {
+ crypto_free_aead(aead);
+ return err;
+ }
+
+ tctx->u.aead = aead;
+ return 0;
+}
+
+static void essiv_skcipher_exit_tfm(struct crypto_skcipher *tfm)
+{
+ struct essiv_tfm_ctx *tctx = crypto_skcipher_ctx(tfm);
+
+ crypto_free_skcipher(tctx->u.skcipher);
+ crypto_free_cipher(tctx->essiv_cipher);
+ crypto_free_shash(tctx->hash);
+}
+
+static void essiv_aead_exit_tfm(struct crypto_aead *tfm)
+{
+ struct essiv_tfm_ctx *tctx = crypto_aead_ctx(tfm);
+
+ crypto_free_aead(tctx->u.aead);
+ crypto_free_cipher(tctx->essiv_cipher);
+ crypto_free_shash(tctx->hash);
+}
+
+static void essiv_skcipher_free_instance(struct skcipher_instance *inst)
+{
+ struct essiv_instance_ctx *ictx = skcipher_instance_ctx(inst);
+
+ crypto_drop_skcipher(&ictx->u.skcipher_spawn);
+ crypto_drop_spawn(&ictx->essiv_cipher_spawn);
+ crypto_drop_shash(&ictx->hash_spawn);
+ kfree(inst);
+}
+
+static void essiv_aead_free_instance(struct aead_instance *inst)
+{
+ struct essiv_instance_ctx *ictx = aead_instance_ctx(inst);
+
+ crypto_drop_aead(&ictx->u.aead_spawn);
+ crypto_drop_spawn(&ictx->essiv_cipher_spawn);
+ crypto_drop_shash(&ictx->hash_spawn);
+ kfree(inst);
+}
+
+static bool essiv_supported_algorithms(struct crypto_alg *essiv_cipher_alg,
+ struct shash_alg *hash_alg,
+ int ivsize)
+{
+ if (hash_alg->digestsize < essiv_cipher_alg->cra_cipher.cia_min_keysize ||
+ hash_alg->digestsize > essiv_cipher_alg->cra_cipher.cia_max_keysize)
+ return false;
+
+ if (ivsize != essiv_cipher_alg->cra_blocksize)
+ return false;
+
+ if (ivsize > MAX_INNER_IV_SIZE)
+ return false;
+
+ if (crypto_shash_alg_has_setkey(hash_alg))
+ return false;
+
+ return true;
+}
+
+static int essiv_create(struct crypto_template *tmpl, struct rtattr **tb)
+{
+ struct crypto_attr_type *algt;
+ const char *inner_cipher_name;
+ const char *essiv_cipher_name;
+ const char *shash_name;
+ struct skcipher_instance *skcipher_inst = NULL;
+ struct aead_instance *aead_inst = NULL;
+ struct crypto_instance *inst;
+ struct crypto_alg *base, *block_base;
+ struct essiv_instance_ctx *ictx;
+ struct skcipher_alg *skcipher_alg = NULL;
+ struct aead_alg *aead_alg = NULL;
+ struct crypto_alg *essiv_cipher_alg;
+ struct crypto_alg *_hash_alg;
+ struct shash_alg *hash_alg;
+ int ivsize;
+ u32 type;
+ int err;
+
+ algt = crypto_get_attr_type(tb);
+ if (IS_ERR(algt))
+ return PTR_ERR(algt);
+
+ inner_cipher_name = crypto_attr_alg_name(tb[1]);
+ if (IS_ERR(inner_cipher_name))
+ return PTR_ERR(inner_cipher_name);
+
+ essiv_cipher_name = crypto_attr_alg_name(tb[2]);
+ if (IS_ERR(essiv_cipher_name))
+ return PTR_ERR(essiv_cipher_name);
+
+ shash_name = crypto_attr_alg_name(tb[3]);
+ if (IS_ERR(shash_name))
+ return PTR_ERR(shash_name);
+
+ type = algt->type & algt->mask;
+
+ switch (type) {
+ case CRYPTO_ALG_TYPE_BLKCIPHER:
+ skcipher_inst = kzalloc(sizeof(*skcipher_inst) +
+ sizeof(*ictx), GFP_KERNEL);
+ if (!skcipher_inst)
+ return -ENOMEM;
+ inst = skcipher_crypto_instance(skcipher_inst);
+ base = &skcipher_inst->alg.base;
+ ictx = crypto_instance_ctx(inst);
+
+ /* Block cipher, e.g. "cbc(aes)" */
+ crypto_set_skcipher_spawn(&ictx->u.skcipher_spawn, inst);
+ err = crypto_grab_skcipher(&ictx->u.skcipher_spawn,
+ inner_cipher_name, 0,
+ crypto_requires_sync(algt->type,
+ algt->mask));
+ if (err)
+ goto out_free_inst;
+ skcipher_alg = crypto_spawn_skcipher_alg(&ictx->u.skcipher_spawn);
+ block_base = &skcipher_alg->base;
+ ivsize = crypto_skcipher_alg_ivsize(skcipher_alg);
+ break;
+
+ case CRYPTO_ALG_TYPE_AEAD:
+ aead_inst = kzalloc(sizeof(*aead_inst) +
+ sizeof(*ictx), GFP_KERNEL);
+ if (!aead_inst)
+ return -ENOMEM;
+ inst = aead_crypto_instance(aead_inst);
+ base = &aead_inst->alg.base;
+ ictx = crypto_instance_ctx(inst);
+
+ /* AEAD cipher, e.g. "authenc(hmac(sha256),cbc(aes))" */
+ crypto_set_aead_spawn(&ictx->u.aead_spawn, inst);
+ err = crypto_grab_aead(&ictx->u.aead_spawn,
+ inner_cipher_name, 0,
+ crypto_requires_sync(algt->type,
+ algt->mask));
+ if (err)
+ goto out_free_inst;
+ aead_alg = crypto_spawn_aead_alg(&ictx->u.aead_spawn);
+ block_base = &aead_alg->base;
+ ivsize = aead_alg->ivsize;
+ break;
+
+ default:
+ return -EINVAL;
+ }
+
+ /* Block cipher, e.g. "aes" */
+ crypto_set_spawn(&ictx->essiv_cipher_spawn, inst);
+ err = crypto_grab_spawn(&ictx->essiv_cipher_spawn, essiv_cipher_name,
+ CRYPTO_ALG_TYPE_CIPHER, CRYPTO_ALG_TYPE_MASK);
+ if (err)
+ goto out_drop_skcipher;
+ essiv_cipher_alg = ictx->essiv_cipher_spawn.alg;
+
+ /* Synchronous hash, e.g., "sha256" */
+ _hash_alg = crypto_alg_mod_lookup(shash_name,
+ CRYPTO_ALG_TYPE_SHASH,
+ CRYPTO_ALG_TYPE_MASK);
+ if (IS_ERR(_hash_alg)) {
+ err = PTR_ERR(_hash_alg);
+ goto out_drop_essiv_cipher;
+ }
+ hash_alg = __crypto_shash_alg(_hash_alg);
+ err = crypto_init_shash_spawn(&ictx->hash_spawn, hash_alg, inst);
+ if (err)
+ goto out_put_hash;
+
+ /* Check the set of algorithms */
+ if (!essiv_supported_algorithms(essiv_cipher_alg, hash_alg, ivsize)) {
+ pr_warn("Unsupported essiv instantiation: essiv(%s,%s,%s)\n",
+ block_base->cra_name,
+ essiv_cipher_alg->cra_name,
+ hash_alg->base.cra_name);
+ err = -EINVAL;
+ goto out_drop_hash;
+ }
+
+ /* Instance fields */
+
+ err = -ENAMETOOLONG;
+ if (snprintf(base->cra_name, CRYPTO_MAX_ALG_NAME,
+ "essiv(%s,%s,%s)", block_base->cra_name,
+ essiv_cipher_alg->cra_name,
+ hash_alg->base.cra_name) >= CRYPTO_MAX_ALG_NAME)
+ goto out_drop_hash;
+ if (snprintf(base->cra_driver_name, CRYPTO_MAX_ALG_NAME,
+ "essiv(%s,%s,%s)",
+ block_base->cra_driver_name,
+ essiv_cipher_alg->cra_driver_name,
+ hash_alg->base.cra_driver_name) >= CRYPTO_MAX_ALG_NAME)
+ goto out_drop_hash;
+
+ base->cra_flags = block_base->cra_flags & CRYPTO_ALG_ASYNC;
+ base->cra_blocksize = block_base->cra_blocksize;
+ base->cra_ctxsize = sizeof(struct essiv_tfm_ctx);
+ base->cra_alignmask = block_base->cra_alignmask;
+ base->cra_priority = block_base->cra_priority;
+
+ if (type == CRYPTO_ALG_TYPE_BLKCIPHER) {
+ skcipher_inst->alg.setkey = essiv_skcipher_setkey;
+ skcipher_inst->alg.encrypt = essiv_skcipher_encrypt;
+ skcipher_inst->alg.decrypt = essiv_skcipher_decrypt;
+ skcipher_inst->alg.init = essiv_skcipher_init_tfm;
+ skcipher_inst->alg.exit = essiv_skcipher_exit_tfm;
+
+ skcipher_inst->alg.min_keysize = crypto_skcipher_alg_min_keysize(skcipher_alg);
+ skcipher_inst->alg.max_keysize = crypto_skcipher_alg_max_keysize(skcipher_alg);
+ skcipher_inst->alg.ivsize = ESSIV_IV_SIZE;
+ skcipher_inst->alg.chunksize = skcipher_alg->chunksize;
+ skcipher_inst->alg.walksize = skcipher_alg->walksize;
+
+ skcipher_inst->free = essiv_skcipher_free_instance;
+
+ err = skcipher_register_instance(tmpl, skcipher_inst);
+ } else {
+ aead_inst->alg.setkey = essiv_aead_setkey;
+ aead_inst->alg.setauthsize = essiv_aead_setauthsize;
+ aead_inst->alg.encrypt = essiv_aead_encrypt;
+ aead_inst->alg.decrypt = essiv_aead_decrypt;
+ aead_inst->alg.init = essiv_aead_init_tfm;
+ aead_inst->alg.exit = essiv_aead_exit_tfm;
+
+ aead_inst->alg.ivsize = ESSIV_IV_SIZE;
+ aead_inst->alg.maxauthsize = aead_alg->maxauthsize;
+ aead_inst->alg.chunksize = aead_alg->chunksize;
+
+ aead_inst->free = essiv_aead_free_instance;
+
+ err = aead_register_instance(tmpl, aead_inst);
+ }
+
+ if (err)
+ goto out_drop_hash;
+
+ crypto_mod_put(_hash_alg);
+ return 0;
+
+out_drop_hash:
+ crypto_drop_shash(&ictx->hash_spawn);
+out_put_hash:
+ crypto_mod_put(_hash_alg);
+out_drop_essiv_cipher:
+ crypto_drop_spawn(&ictx->essiv_cipher_spawn);
+out_drop_skcipher:
+ if (type == CRYPTO_ALG_TYPE_BLKCIPHER)
+ crypto_drop_skcipher(&ictx->u.skcipher_spawn);
+ else
+ crypto_drop_aead(&ictx->u.aead_spawn);
+out_free_inst:
+ kfree(skcipher_inst);
+ kfree(aead_inst);
+ return err;
+}
+
+/* essiv(inner_cipher_name, essiv_cipher_name, shash_name) */
+static struct crypto_template essiv_tmpl = {
+ .name = "essiv",
+ .create = essiv_create,
+ .module = THIS_MODULE,
+};
+
+static int __init essiv_module_init(void)
+{
+ return crypto_register_template(&essiv_tmpl);
+}
+
+static void __exit essiv_module_exit(void)
+{
+ crypto_unregister_template(&essiv_tmpl);
+}
+
+subsys_initcall(essiv_module_init);
+module_exit(essiv_module_exit);
+
+MODULE_DESCRIPTION("ESSIV skcipher/aead wrapper for block encryption");
+MODULE_LICENSE("GPL v2");
+MODULE_ALIAS_CRYPTO("essiv");
--
2.17.1

2019-06-21 08:10:00

by Ard Biesheuvel

Subject: [PATCH v4 5/6] crypto: essiv - add test vector for essiv(cbc(aes),aes,sha256)

From: Ard Biesheuvel <[email protected]>

Add a test vector for the ESSIV mode that is the most widely used,
i.e., using cbc(aes) and sha256.

Signed-off-by: Ard Biesheuvel <[email protected]>
---
crypto/tcrypt.c | 9 +
crypto/testmgr.c | 6 +
crypto/testmgr.h | 208 ++++++++++++++++++++
3 files changed, 223 insertions(+)

diff --git a/crypto/tcrypt.c b/crypto/tcrypt.c
index ad78ab5b93cb..f990a209197e 100644
--- a/crypto/tcrypt.c
+++ b/crypto/tcrypt.c
@@ -2327,6 +2327,15 @@ static int do_test(const char *alg, u32 type, u32 mask, int m, u32 num_mb)
0, speed_template_32);
break;

+ case 220:
+ test_acipher_speed("essiv(cbc(aes),aes,sha256)",
+ ENCRYPT, sec, NULL, 0,
+ speed_template_16_24_32);
+ test_acipher_speed("essiv(cbc(aes),aes,sha256)",
+ DECRYPT, sec, NULL, 0,
+ speed_template_16_24_32);
+ break;
+
case 300:
if (alg) {
test_hash_speed(alg, sec, generic_hash_speed_template);
diff --git a/crypto/testmgr.c b/crypto/testmgr.c
index 658a7eeebab2..23703f3e9cbb 100644
--- a/crypto/testmgr.c
+++ b/crypto/testmgr.c
@@ -4253,6 +4253,12 @@ static const struct alg_test_desc alg_test_descs[] = {
.suite = {
.akcipher = __VECS(ecrdsa_tv_template)
}
+ }, {
+ .alg = "essiv(cbc(aes),aes,sha256)",
+ .test = alg_test_skcipher,
+ .suite = {
+ .cipher = __VECS(essiv_aes_cbc_tv_template)
+ }
}, {
.alg = "gcm(aes)",
.generic_driver = "gcm_base(ctr(aes-generic),ghash-generic)",
diff --git a/crypto/testmgr.h b/crypto/testmgr.h
index 1fdae5993bc3..e515e74d6a40 100644
--- a/crypto/testmgr.h
+++ b/crypto/testmgr.h
@@ -33575,4 +33575,212 @@ static const struct comp_testvec zstd_decomp_tv_template[] = {
"functions.",
},
};
+
+/* based on aes_cbc_tv_template */
+static const struct cipher_testvec essiv_aes_cbc_tv_template[] = {
+ {
+ .key = "\x06\xa9\x21\x40\x36\xb8\xa1\x5b"
+ "\x51\x2e\x03\xd5\x34\x12\x00\x06",
+ .klen = 16,
+ .iv = "\x3d\xaf\xba\x42\x9d\x9e\xb4\x30",
+ .ptext = "Single block msg",
+ .ctext = "\xfa\x59\xe7\x5f\x41\x56\x65\xc3"
+ "\x36\xca\x6b\x72\x10\x9f\x8c\xd4",
+ .len = 16,
+ }, {
+ .key = "\xc2\x86\x69\x6d\x88\x7c\x9a\xa0"
+ "\x61\x1b\xbb\x3e\x20\x25\xa4\x5a",
+ .klen = 16,
+ .iv = "\x56\x2e\x17\x99\x6d\x09\x3d\x28",
+ .ptext = "\x00\x01\x02\x03\x04\x05\x06\x07"
+ "\x08\x09\x0a\x0b\x0c\x0d\x0e\x0f"
+ "\x10\x11\x12\x13\x14\x15\x16\x17"
+ "\x18\x19\x1a\x1b\x1c\x1d\x1e\x1f",
+ .ctext = "\xc8\x59\x9a\xfe\x79\xe6\x7b\x20"
+ "\x06\x7d\x55\x0a\x5e\xc7\xb5\xa7"
+ "\x0b\x9c\x80\xd2\x15\xa1\xb8\x6d"
+ "\xc6\xab\x7b\x65\xd9\xfd\x88\xeb",
+ .len = 32,
+ }, {
+ .key = "\x8e\x73\xb0\xf7\xda\x0e\x64\x52"
+ "\xc8\x10\xf3\x2b\x80\x90\x79\xe5"
+ "\x62\xf8\xea\xd2\x52\x2c\x6b\x7b",
+ .klen = 24,
+ .iv = "\x00\x01\x02\x03\x04\x05\x06\x07",
+ .ptext = "\x6b\xc1\xbe\xe2\x2e\x40\x9f\x96"
+ "\xe9\x3d\x7e\x11\x73\x93\x17\x2a"
+ "\xae\x2d\x8a\x57\x1e\x03\xac\x9c"
+ "\x9e\xb7\x6f\xac\x45\xaf\x8e\x51"
+ "\x30\xc8\x1c\x46\xa3\x5c\xe4\x11"
+ "\xe5\xfb\xc1\x19\x1a\x0a\x52\xef"
+ "\xf6\x9f\x24\x45\xdf\x4f\x9b\x17"
+ "\xad\x2b\x41\x7b\xe6\x6c\x37\x10",
+ .ctext = "\x96\x6d\xa9\x7a\x42\xe6\x01\xc7"
+ "\x17\xfc\xa7\x41\xd3\x38\x0b\xe5"
+ "\x51\x48\xf7\x7e\x5e\x26\xa9\xfe"
+ "\x45\x72\x1c\xd9\xde\xab\xf3\x4d"
+ "\x39\x47\xc5\x4f\x97\x3a\x55\x63"
+ "\x80\x29\x64\x4c\x33\xe8\x21\x8a"
+ "\x6a\xef\x6b\x6a\x8f\x43\xc0\xcb"
+ "\xf0\xf3\x6e\x74\x54\x44\x92\x44",
+ .len = 64,
+ }, {
+ .key = "\x60\x3d\xeb\x10\x15\xca\x71\xbe"
+ "\x2b\x73\xae\xf0\x85\x7d\x77\x81"
+ "\x1f\x35\x2c\x07\x3b\x61\x08\xd7"
+ "\x2d\x98\x10\xa3\x09\x14\xdf\xf4",
+ .klen = 32,
+ .iv = "\x00\x01\x02\x03\x04\x05\x06\x07",
+ .ptext = "\x6b\xc1\xbe\xe2\x2e\x40\x9f\x96"
+ "\xe9\x3d\x7e\x11\x73\x93\x17\x2a"
+ "\xae\x2d\x8a\x57\x1e\x03\xac\x9c"
+ "\x9e\xb7\x6f\xac\x45\xaf\x8e\x51"
+ "\x30\xc8\x1c\x46\xa3\x5c\xe4\x11"
+ "\xe5\xfb\xc1\x19\x1a\x0a\x52\xef"
+ "\xf6\x9f\x24\x45\xdf\x4f\x9b\x17"
+ "\xad\x2b\x41\x7b\xe6\x6c\x37\x10",
+ .ctext = "\x24\x52\xf1\x48\x74\xd0\xa7\x93"
+ "\x75\x9b\x63\x46\xc0\x1c\x1e\x17"
+ "\x4d\xdc\x5b\x3a\x27\x93\x2a\x63"
+ "\xf7\xf1\xc7\xb3\x54\x56\x5b\x50"
+ "\xa3\x31\xa5\x8b\xd6\xfd\xb6\x3c"
+ "\x8b\xf6\xf2\x45\x05\x0c\xc8\xbb"
+ "\x32\x0b\x26\x1c\xe9\x8b\x02\xc0"
+ "\xb2\x6f\x37\xa7\x5b\xa8\xa9\x42",
+ .len = 64,
+ }, {
+ .key = "\xC9\x83\xA6\xC9\xEC\x0F\x32\x55"
+ "\x0F\x32\x55\x78\x9B\xBE\x78\x9B"
+ "\xBE\xE1\x04\x27\xE1\x04\x27\x4A"
+ "\x6D\x90\x4A\x6D\x90\xB3\xD6\xF9",
+ .klen = 32,
+ .iv = "\xE7\x82\x1D\xB8\x53\x11\xAC\x47",
+ .ptext = "\x50\xB9\x22\xAE\x17\x80\x0C\x75"
+ "\xDE\x47\xD3\x3C\xA5\x0E\x9A\x03"
+ "\x6C\xF8\x61\xCA\x33\xBF\x28\x91"
+ "\x1D\x86\xEF\x58\xE4\x4D\xB6\x1F"
+ "\xAB\x14\x7D\x09\x72\xDB\x44\xD0"
+ "\x39\xA2\x0B\x97\x00\x69\xF5\x5E"
+ "\xC7\x30\xBC\x25\x8E\x1A\x83\xEC"
+ "\x55\xE1\x4A\xB3\x1C\xA8\x11\x7A"
+ "\x06\x6F\xD8\x41\xCD\x36\x9F\x08"
+ "\x94\xFD\x66\xF2\x5B\xC4\x2D\xB9"
+ "\x22\x8B\x17\x80\xE9\x52\xDE\x47"
+ "\xB0\x19\xA5\x0E\x77\x03\x6C\xD5"
+ "\x3E\xCA\x33\x9C\x05\x91\xFA\x63"
+ "\xEF\x58\xC1\x2A\xB6\x1F\x88\x14"
+ "\x7D\xE6\x4F\xDB\x44\xAD\x16\xA2"
+ "\x0B\x74\x00\x69\xD2\x3B\xC7\x30"
+ "\x99\x02\x8E\xF7\x60\xEC\x55\xBE"
+ "\x27\xB3\x1C\x85\x11\x7A\xE3\x4C"
+ "\xD8\x41\xAA\x13\x9F\x08\x71\xFD"
+ "\x66\xCF\x38\xC4\x2D\x96\x22\x8B"
+ "\xF4\x5D\xE9\x52\xBB\x24\xB0\x19"
+ "\x82\x0E\x77\xE0\x49\xD5\x3E\xA7"
+ "\x10\x9C\x05\x6E\xFA\x63\xCC\x35"
+ "\xC1\x2A\x93\x1F\x88\xF1\x5A\xE6"
+ "\x4F\xB8\x21\xAD\x16\x7F\x0B\x74"
+ "\xDD\x46\xD2\x3B\xA4\x0D\x99\x02"
+ "\x6B\xF7\x60\xC9\x32\xBE\x27\x90"
+ "\x1C\x85\xEE\x57\xE3\x4C\xB5\x1E"
+ "\xAA\x13\x7C\x08\x71\xDA\x43\xCF"
+ "\x38\xA1\x0A\x96\xFF\x68\xF4\x5D"
+ "\xC6\x2F\xBB\x24\x8D\x19\x82\xEB"
+ "\x54\xE0\x49\xB2\x1B\xA7\x10\x79"
+ "\x05\x6E\xD7\x40\xCC\x35\x9E\x07"
+ "\x93\xFC\x65\xF1\x5A\xC3\x2C\xB8"
+ "\x21\x8A\x16\x7F\xE8\x51\xDD\x46"
+ "\xAF\x18\xA4\x0D\x76\x02\x6B\xD4"
+ "\x3D\xC9\x32\x9B\x04\x90\xF9\x62"
+ "\xEE\x57\xC0\x29\xB5\x1E\x87\x13"
+ "\x7C\xE5\x4E\xDA\x43\xAC\x15\xA1"
+ "\x0A\x73\xFF\x68\xD1\x3A\xC6\x2F"
+ "\x98\x01\x8D\xF6\x5F\xEB\x54\xBD"
+ "\x26\xB2\x1B\x84\x10\x79\xE2\x4B"
+ "\xD7\x40\xA9\x12\x9E\x07\x70\xFC"
+ "\x65\xCE\x37\xC3\x2C\x95\x21\x8A"
+ "\xF3\x5C\xE8\x51\xBA\x23\xAF\x18"
+ "\x81\x0D\x76\xDF\x48\xD4\x3D\xA6"
+ "\x0F\x9B\x04\x6D\xF9\x62\xCB\x34"
+ "\xC0\x29\x92\x1E\x87\xF0\x59\xE5"
+ "\x4E\xB7\x20\xAC\x15\x7E\x0A\x73"
+ "\xDC\x45\xD1\x3A\xA3\x0C\x98\x01"
+ "\x6A\xF6\x5F\xC8\x31\xBD\x26\x8F"
+ "\x1B\x84\xED\x56\xE2\x4B\xB4\x1D"
+ "\xA9\x12\x7B\x07\x70\xD9\x42\xCE"
+ "\x37\xA0\x09\x95\xFE\x67\xF3\x5C"
+ "\xC5\x2E\xBA\x23\x8C\x18\x81\xEA"
+ "\x53\xDF\x48\xB1\x1A\xA6\x0F\x78"
+ "\x04\x6D\xD6\x3F\xCB\x34\x9D\x06"
+ "\x92\xFB\x64\xF0\x59\xC2\x2B\xB7"
+ "\x20\x89\x15\x7E\xE7\x50\xDC\x45"
+ "\xAE\x17\xA3\x0C\x75\x01\x6A\xD3"
+ "\x3C\xC8\x31\x9A\x03\x8F\xF8\x61"
+ "\xED\x56\xBF\x28\xB4\x1D\x86\x12",
+ .ctext = "\x97\x7f\x69\x0f\x0f\x34\xa6\x33"
+ "\x66\x49\x7e\xd0\x4d\x1b\xc9\x64"
+ "\xf9\x61\x95\x98\x11\x00\x88\xf8"
+ "\x2e\x88\x01\x0f\x2b\xe1\xae\x3e"
+ "\xfe\xd6\x47\x30\x11\x68\x7d\x99"
+ "\xad\x69\x6a\xe8\x41\x5f\x1e\x16"
+ "\x00\x3a\x47\xdf\x8e\x7d\x23\x1c"
+ "\x19\x5b\x32\x76\x60\x03\x05\xc1"
+ "\xa0\xff\xcf\xcc\x74\x39\x46\x63"
+ "\xfe\x5f\xa6\x35\xa7\xb4\xc1\xf9"
+ "\x4b\x5e\x38\xcc\x8c\xc1\xa2\xcf"
+ "\x9a\xc3\xae\x55\x42\x46\x93\xd9"
+ "\xbd\x22\xd3\x8a\x19\x96\xc3\xb3"
+ "\x7d\x03\x18\xf9\x45\x09\x9c\xc8"
+ "\x90\xf3\x22\xb3\x25\x83\x9a\x75"
+ "\xbb\x04\x48\x97\x3a\x63\x08\x04"
+ "\xa0\x69\xf6\x52\xd4\x89\x93\x69"
+ "\xb4\x33\xa2\x16\x58\xec\x4b\x26"
+ "\x76\x54\x10\x0b\x6e\x53\x1e\xbc"
+ "\x16\x18\x42\xb1\xb1\xd3\x4b\xda"
+ "\x06\x9f\x8b\x77\xf7\xab\xd6\xed"
+ "\xa3\x1d\x90\xda\x49\x38\x20\xb8"
+ "\x6c\xee\xae\x3e\xae\x6c\x03\xb8"
+ "\x0b\xed\xc8\xaa\x0e\xc5\x1f\x90"
+ "\x60\xe2\xec\x1b\x76\xd0\xcf\xda"
+ "\x29\x1b\xb8\x5a\xbc\xf4\xba\x13"
+ "\x91\xa6\xcb\x83\x3f\xeb\xe9\x7b"
+ "\x03\xba\x40\x9e\xe6\x7a\xb2\x4a"
+ "\x73\x49\xfc\xed\xfb\x55\xa4\x24"
+ "\xc7\xa4\xd7\x4b\xf5\xf7\x16\x62"
+ "\x80\xd3\x19\x31\x52\x25\xa8\x69"
+ "\xda\x9a\x87\xf5\xf2\xee\x5d\x61"
+ "\xc1\x12\x72\x3e\x52\x26\x45\x3a"
+ "\xd8\x9d\x57\xfa\x14\xe2\x9b\x2f"
+ "\xd4\xaa\x5e\x31\xf4\x84\x89\xa4"
+ "\xe3\x0e\xb0\x58\x41\x75\x6a\xcb"
+ "\x30\x01\x98\x90\x15\x80\xf5\x27"
+ "\x92\x13\x81\xf0\x1c\x1e\xfc\xb1"
+ "\x33\xf7\x63\xb0\x67\xec\x2e\x5c"
+ "\x85\xe3\x5b\xd0\x43\x8a\xb8\x5f"
+ "\x44\x9f\xec\x19\xc9\x8f\xde\xdf"
+ "\x79\xef\xf8\xee\x14\x87\xb3\x34"
+ "\x76\x00\x3a\x9b\xc7\xed\xb1\x3d"
+ "\xef\x07\xb0\xe4\xfd\x68\x9e\xeb"
+ "\xc2\xb4\x1a\x85\x9a\x7d\x11\x88"
+ "\xf8\xab\x43\x55\x2b\x8a\x4f\x60"
+ "\x85\x9a\xf4\xba\xae\x48\x81\xeb"
+ "\x93\x07\x97\x9e\xde\x2a\xfc\x4e"
+ "\x31\xde\xaa\x44\xf7\x2a\xc3\xee"
+ "\x60\xa2\x98\x2c\x0a\x88\x50\xc5"
+ "\x6d\x89\xd3\xe4\xb6\xa7\xf4\xb0"
+ "\xcf\x0e\x89\xe3\x5e\x8f\x82\xf4"
+ "\x9d\xd1\xa9\x51\x50\x8a\xd2\x18"
+ "\x07\xb2\xaa\x3b\x7f\x58\x9b\xf4"
+ "\xb7\x24\x39\xd3\x66\x2f\x1e\xc0"
+ "\x11\xa3\x56\x56\x2a\x10\x73\xbc"
+ "\xe1\x23\xbf\xa9\x37\x07\x9c\xc3"
+ "\xb2\xc9\xa8\x1c\x5b\x5c\x58\xa4"
+ "\x77\x02\x26\xad\xc3\x40\x11\x53"
+ "\x93\x68\x72\xde\x05\x8b\x10\xbc"
+ "\xa6\xd4\x1b\xd9\x27\xd8\x16\x12"
+ "\x61\x2b\x31\x2a\x44\x87\x96\x58",
+ .len = 496,
+ },
+};
+
#endif /* _CRYPTO_TESTMGR_H */
--
2.17.1

2019-06-21 08:10:02

by Ard Biesheuvel

[permalink] [raw]
Subject: [PATCH v4 6/6] crypto: arm64/aes - implement accelerated ESSIV/CBC mode

From: Ard Biesheuvel <[email protected]>

Add an accelerated version of the 'essiv(cbc(aes),aes,sha256)'
skcipher, which is used by fscrypt and, in some cases, by dm-crypt.
By deriving the ESSIV in the same asm routine that performs the CBC
en/decryption, this avoids a separate call into the AES cipher for
every invocation.
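
For reference, such a transform is requested through the regular crypto
API by name, and this accelerated driver is selected transparently
whenever its priority wins. Below is a minimal, hypothetical sketch of a
caller, assuming the __le64 sector-number IV convention used by this
series; the function name and the fixed 512-byte sector size are
illustrative only:

#include <crypto/skcipher.h>
#include <linux/crypto.h>
#include <linux/err.h>
#include <linux/scatterlist.h>

/* Hypothetical example: encrypt one 512-byte sector in place. */
static int essiv_encrypt_sector(const u8 *key, unsigned int keylen,
                                u64 sector, void *buf)
{
        struct crypto_skcipher *tfm;
        struct skcipher_request *req;
        struct scatterlist sg;
        DECLARE_CRYPTO_WAIT(wait);
        __le64 iv = cpu_to_le64(sector);  /* 8-byte IV, per this series */
        int err;

        tfm = crypto_alloc_skcipher("essiv(cbc(aes),aes,sha256)", 0, 0);
        if (IS_ERR(tfm))
                return PTR_ERR(tfm);

        err = crypto_skcipher_setkey(tfm, key, keylen);
        if (err)
                goto out_free_tfm;

        req = skcipher_request_alloc(tfm, GFP_KERNEL);
        if (!req) {
                err = -ENOMEM;
                goto out_free_tfm;
        }

        sg_init_one(&sg, buf, 512);
        skcipher_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG |
                                      CRYPTO_TFM_REQ_MAY_SLEEP,
                                      crypto_req_done, &wait);
        skcipher_request_set_crypt(req, &sg, &sg, 512, &iv);

        /* handles both synchronous and asynchronous implementations */
        err = crypto_wait_req(crypto_skcipher_encrypt(req), &wait);

        skcipher_request_free(req);
out_free_tfm:
        crypto_free_skcipher(tfm);
        return err;
}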

Signed-off-by: Ard Biesheuvel <[email protected]>
---
arch/arm64/crypto/aes-glue.c | 129 ++++++++++++++++++++
arch/arm64/crypto/aes-modes.S | 99 +++++++++++++++
2 files changed, 228 insertions(+)

diff --git a/arch/arm64/crypto/aes-glue.c b/arch/arm64/crypto/aes-glue.c
index f0ceb545bd1e..6dab2f062cea 100644
--- a/arch/arm64/crypto/aes-glue.c
+++ b/arch/arm64/crypto/aes-glue.c
@@ -12,6 +12,7 @@
#include <asm/hwcap.h>
#include <asm/simd.h>
#include <crypto/aes.h>
+#include <crypto/sha.h>
#include <crypto/internal/hash.h>
#include <crypto/internal/simd.h>
#include <crypto/internal/skcipher.h>
@@ -34,6 +35,8 @@
#define aes_cbc_decrypt ce_aes_cbc_decrypt
#define aes_cbc_cts_encrypt ce_aes_cbc_cts_encrypt
#define aes_cbc_cts_decrypt ce_aes_cbc_cts_decrypt
+#define aes_essiv_cbc_encrypt ce_aes_essiv_cbc_encrypt
+#define aes_essiv_cbc_decrypt ce_aes_essiv_cbc_decrypt
#define aes_ctr_encrypt ce_aes_ctr_encrypt
#define aes_xts_encrypt ce_aes_xts_encrypt
#define aes_xts_decrypt ce_aes_xts_decrypt
@@ -50,6 +53,8 @@ MODULE_DESCRIPTION("AES-ECB/CBC/CTR/XTS using ARMv8 Crypto Extensions");
#define aes_cbc_decrypt neon_aes_cbc_decrypt
#define aes_cbc_cts_encrypt neon_aes_cbc_cts_encrypt
#define aes_cbc_cts_decrypt neon_aes_cbc_cts_decrypt
+#define aes_essiv_cbc_encrypt neon_aes_essiv_cbc_encrypt
+#define aes_essiv_cbc_decrypt neon_aes_essiv_cbc_decrypt
#define aes_ctr_encrypt neon_aes_ctr_encrypt
#define aes_xts_encrypt neon_aes_xts_encrypt
#define aes_xts_decrypt neon_aes_xts_decrypt
@@ -93,6 +98,13 @@ asmlinkage void aes_xts_decrypt(u8 out[], u8 const in[], u32 const rk1[],
int rounds, int blocks, u32 const rk2[], u8 iv[],
int first);

+asmlinkage void aes_essiv_cbc_encrypt(u8 out[], u8 const in[], u32 const rk1[],
+ int rounds, int blocks, u32 const rk2[],
+ u8 iv[], int first);
+asmlinkage void aes_essiv_cbc_decrypt(u8 out[], u8 const in[], u32 const rk1[],
+ int rounds, int blocks, u32 const rk2[],
+ u8 iv[], int first);
+
asmlinkage void aes_mac_update(u8 const in[], u32 const rk[], int rounds,
int blocks, u8 dg[], int enc_before,
int enc_after);
@@ -108,6 +120,12 @@ struct crypto_aes_xts_ctx {
struct crypto_aes_ctx __aligned(8) key2;
};

+struct crypto_aes_essiv_cbc_ctx {
+ struct crypto_aes_ctx key1;
+ struct crypto_aes_ctx __aligned(8) key2;
+ struct crypto_shash *hash;
+};
+
struct mac_tfm_ctx {
struct crypto_aes_ctx key;
u8 __aligned(8) consts[];
@@ -145,6 +163,31 @@ static int xts_set_key(struct crypto_skcipher *tfm, const u8 *in_key,
return -EINVAL;
}

+static int essiv_cbc_set_key(struct crypto_skcipher *tfm, const u8 *in_key,
+ unsigned int key_len)
+{
+ struct crypto_aes_essiv_cbc_ctx *ctx = crypto_skcipher_ctx(tfm);
+ SHASH_DESC_ON_STACK(desc, ctx->hash);
+ u8 digest[SHA256_DIGEST_SIZE];
+ int ret;
+
+ ret = aes_expandkey(&ctx->key1, in_key, key_len);
+ if (ret)
+ goto out;
+
+ desc->tfm = ctx->hash;
+ crypto_shash_digest(desc, in_key, key_len, digest); /* ESSIV: second key is sha256 of the first */
+
+ ret = aes_expandkey(&ctx->key2, digest, sizeof(digest));
+ if (ret)
+ goto out;
+
+ return 0;
+out:
+ crypto_skcipher_set_flags(tfm, CRYPTO_TFM_RES_BAD_KEY_LEN);
+ return -EINVAL;
+}
+
static int ecb_encrypt(struct skcipher_request *req)
{
struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
@@ -361,6 +404,74 @@ static int cts_cbc_decrypt(struct skcipher_request *req)
return skcipher_walk_done(&walk, 0);
}

+static int essiv_cbc_init_tfm(struct crypto_skcipher *tfm)
+{
+ struct crypto_aes_essiv_cbc_ctx *ctx = crypto_skcipher_ctx(tfm);
+
+ ctx->hash = crypto_alloc_shash("sha256", 0, 0);
+ if (IS_ERR(ctx->hash))
+ return PTR_ERR(ctx->hash);
+
+ return 0;
+}
+
+static void essiv_cbc_exit_tfm(struct crypto_skcipher *tfm)
+{
+ struct crypto_aes_essiv_cbc_ctx *ctx = crypto_skcipher_ctx(tfm);
+
+ crypto_free_shash(ctx->hash);
+}
+
+static int essiv_cbc_encrypt(struct skcipher_request *req)
+{
+ struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
+ struct crypto_aes_essiv_cbc_ctx *ctx = crypto_skcipher_ctx(tfm);
+ int err, first, rounds = 6 + ctx->key1.key_length / 4;
+ struct skcipher_walk walk;
+ u8 iv[AES_BLOCK_SIZE];
+ unsigned int blocks;
+
+ memcpy(iv, req->iv, crypto_skcipher_ivsize(tfm)); /* 8 bytes; asm zero-pads to a full block */
+
+ err = skcipher_walk_virt(&walk, req, false);
+
+ for (first = 1; (blocks = (walk.nbytes / AES_BLOCK_SIZE)); first = 0) {
+ kernel_neon_begin();
+ aes_essiv_cbc_encrypt(walk.dst.virt.addr, walk.src.virt.addr,
+ ctx->key1.key_enc, rounds, blocks,
+ ctx->key2.key_enc, iv, first);
+ kernel_neon_end();
+ err = skcipher_walk_done(&walk, walk.nbytes % AES_BLOCK_SIZE);
+ }
+
+ return err;
+}
+
+static int essiv_cbc_decrypt(struct skcipher_request *req)
+{
+ struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
+ struct crypto_aes_essiv_cbc_ctx *ctx = crypto_skcipher_ctx(tfm);
+ int err, first, rounds = 6 + ctx->key1.key_length / 4;
+ struct skcipher_walk walk;
+ u8 iv[AES_BLOCK_SIZE];
+ unsigned int blocks;
+
+ memcpy(iv, req->iv, crypto_skcipher_ivsize(tfm)); /* 8 bytes; asm zero-pads to a full block */
+
+ err = skcipher_walk_virt(&walk, req, false);
+
+ for (first = 1; (blocks = (walk.nbytes / AES_BLOCK_SIZE)); first = 0) {
+ kernel_neon_begin();
+ aes_essiv_cbc_decrypt(walk.dst.virt.addr, walk.src.virt.addr,
+ ctx->key1.key_dec, rounds, blocks,
+ ctx->key2.key_enc, iv, first);
+ kernel_neon_end();
+ err = skcipher_walk_done(&walk, walk.nbytes % AES_BLOCK_SIZE);
+ }
+
+ return err;
+}
+
static int ctr_encrypt(struct skcipher_request *req)
{
struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
@@ -504,6 +615,24 @@ static struct skcipher_alg aes_algs[] = { {
.encrypt = cts_cbc_encrypt,
.decrypt = cts_cbc_decrypt,
.init = cts_cbc_init_tfm,
+}, {
+ .base = {
+ .cra_name = "__essiv(cbc(aes),aes,sha256)",
+ .cra_driver_name = "__essiv-cbc-aes-sha256-" MODE,
+ .cra_priority = PRIO + 1,
+ .cra_flags = CRYPTO_ALG_INTERNAL,
+ .cra_blocksize = AES_BLOCK_SIZE,
+ .cra_ctxsize = sizeof(struct crypto_aes_essiv_cbc_ctx),
+ .cra_module = THIS_MODULE,
+ },
+ .min_keysize = AES_MIN_KEY_SIZE,
+ .max_keysize = AES_MAX_KEY_SIZE,
+ .ivsize = sizeof(u64), /* __le64 sector number */
+ .setkey = essiv_cbc_set_key,
+ .encrypt = essiv_cbc_encrypt,
+ .decrypt = essiv_cbc_decrypt,
+ .init = essiv_cbc_init_tfm,
+ .exit = essiv_cbc_exit_tfm,
}, {
.base = {
.cra_name = "__ctr(aes)",
diff --git a/arch/arm64/crypto/aes-modes.S b/arch/arm64/crypto/aes-modes.S
index 4c7ce231963c..4ebc61375aa6 100644
--- a/arch/arm64/crypto/aes-modes.S
+++ b/arch/arm64/crypto/aes-modes.S
@@ -247,6 +247,105 @@ AES_ENDPROC(aes_cbc_cts_decrypt)
.byte 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff
.previous

+ /*
+ * aes_essiv_cbc_encrypt(u8 out[], u8 const in[], u32 const rk1[],
+ * int rounds, int blocks, u32 const rk2[],
+ * u8 iv[], int first);
+ * aes_essiv_cbc_decrypt(u8 out[], u8 const in[], u32 const rk1[],
+ * int rounds, int blocks, u32 const rk2[],
+ * u8 iv[], int first);
+ */
+
+AES_ENTRY(aes_essiv_cbc_encrypt)
+ ld1 {v4.16b}, [x6] /* get iv */
+ cbz x7, .Lessivcbcencnotfirst
+
+ mov w8, #14 /* AES-256: 14 rounds */
+ enc_prepare w8, x5, x7
+ mov v4.8b, v4.8b /* zero-pad iv: 64-bit write clears top half */
+ encrypt_block v4, w8, x5, x7, w9
+
+.Lessivcbcencnotfirst:
+ enc_prepare w3, x2, x7
+.Lessivcbcencloop4x:
+ subs w4, w4, #4
+ bmi .Lessivcbcenc1x
+ ld1 {v0.16b-v3.16b}, [x1], #64 /* get 4 pt blocks */
+ eor v0.16b, v0.16b, v4.16b /* ..and xor with iv */
+ encrypt_block v0, w3, x2, x7, w8
+ eor v1.16b, v1.16b, v0.16b
+ encrypt_block v1, w3, x2, x7, w8
+ eor v2.16b, v2.16b, v1.16b
+ encrypt_block v2, w3, x2, x7, w8
+ eor v3.16b, v3.16b, v2.16b
+ encrypt_block v3, w3, x2, x7, w8
+ st1 {v0.16b-v3.16b}, [x0], #64
+ mov v4.16b, v3.16b
+ b .Lessivcbcencloop4x
+.Lessivcbcenc1x:
+ adds w4, w4, #4
+ beq .Lessivcbcencout
+.Lessivcbcencloop:
+ ld1 {v0.16b}, [x1], #16 /* get next pt block */
+ eor v4.16b, v4.16b, v0.16b /* ..and xor with iv */
+ encrypt_block v4, w3, x2, x7, w8 /* x6 must keep the iv pointer */
+ st1 {v4.16b}, [x0], #16
+ subs w4, w4, #1
+ bne .Lessivcbcencloop
+.Lessivcbcencout:
+ st1 {v4.16b}, [x6] /* return iv */
+ ret
+AES_ENDPROC(aes_essiv_cbc_encrypt)
+
+
+AES_ENTRY(aes_essiv_cbc_decrypt)
+ stp x29, x30, [sp, #-16]!
+ mov x29, sp
+
+ ld1 {v7.16b}, [x6] /* get iv */
+ cbz x7, .Lessivcbcdecnotfirst
+
+ mov w8, #14 /* AES-256: 14 rounds */
+ enc_prepare w8, x5, x7
+ mov v7.8b, v7.8b /* zero-pad iv: 64-bit write clears top half */
+ encrypt_block v7, w8, x5, x7, w9
+
+.Lessivcbcdecnotfirst:
+ dec_prepare w3, x2, x7
+.LessivcbcdecloopNx:
+ subs w4, w4, #4
+ bmi .Lessivcbcdec1x
+ ld1 {v0.16b-v3.16b}, [x1], #64 /* get 4 ct blocks */
+ mov v4.16b, v0.16b
+ mov v5.16b, v1.16b
+ mov v6.16b, v2.16b
+ bl aes_decrypt_block4x
+ sub x1, x1, #16
+ eor v0.16b, v0.16b, v7.16b
+ eor v1.16b, v1.16b, v4.16b
+ ld1 {v7.16b}, [x1], #16 /* reload 1 ct block */
+ eor v2.16b, v2.16b, v5.16b
+ eor v3.16b, v3.16b, v6.16b
+ st1 {v0.16b-v3.16b}, [x0], #64
+ b .LessivcbcdecloopNx
+.Lessivcbcdec1x:
+ adds w4, w4, #4
+ beq .Lessivcbcdecout
+.Lessivcbcdecloop:
+ ld1 {v1.16b}, [x1], #16 /* get next ct block */
+ mov v0.16b, v1.16b /* ...and copy to v0 */
+ decrypt_block v0, w3, x2, x7, w8
+ eor v0.16b, v0.16b, v7.16b /* xor with iv => pt */
+ mov v7.16b, v1.16b /* ct is next iv */
+ st1 {v0.16b}, [x0], #16
+ subs w4, w4, #1
+ bne .Lessivcbcdecloop
+.Lessivcbcdecout:
+ st1 {v7.16b}, [x6] /* return iv */
+ ldp x29, x30, [sp], #16
+ ret
+AES_ENDPROC(aes_essiv_cbc_decrypt)
+

/*
* aes_ctr_encrypt(u8 out[], u8 const in[], u8 const rk[], int rounds,
--
2.17.1

2019-06-23 09:49:31

by Ard Biesheuvel

[permalink] [raw]
Subject: Re: [PATCH v4 0/6] crypto: switch to crypto API for ESSIV generation

On Fri, 21 Jun 2019 at 10:09, Ard Biesheuvel <[email protected]> wrote:
>
> From: Ard Biesheuvel <[email protected]>
>
...
>
> - given that hardware already exists that can perform en/decryption including
> ESSIV generation of a range of blocks, it would be useful to encapsulate
> this in the ESSIV template, and teach at least dm-crypt how to use it
> (given that it often processes 8 512-byte sectors at a time)

I thought about this a bit more, and it occurred to me that the
capability of issuing several sectors at a time and letting the lower
layers increment the IV between sectors is orthogonal to whether ESSIV
is being used or not, and so it probably belongs in another wrapper.

I.e., if we define a skcipher template like dmplain64le(), which is
defined as taking a sector size as part of the key, and which
increments a 64-bit LE counter between sectors if multiple are passed,
it can be used not only for ESSIV but also for XTS, which I assume can
be h/w accelerated in the same way (rough sketch below).

So with that in mind, I think we should decouple the multi-sector
discussion and leave it for a followup series, preferably proposed by
someone who also has access to some hardware to prototype it on.
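
For concreteness, the IV stepping such a wrapper would do between
sectors could be as simple as this (names hypothetical; assumes the
sector size was supplied along with the key):

/* Hypothetical helper for a dmplain64le() template: after the inner
 * skcipher has consumed 'bytes_done' bytes, advance the 64-bit
 * little-endian sector counter held in the first 8 bytes of the IV. */
static void dmplain64le_advance_iv(u8 *iv, unsigned int sector_size,
                                   unsigned int bytes_done)
{
        __le64 *ctr = (__le64 *)iv;

        *ctr = cpu_to_le64(le64_to_cpu(*ctr) + bytes_done / sector_size);
}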

2019-06-23 10:13:42

by Herbert Xu

[permalink] [raw]
Subject: Re: [PATCH v4 0/6] crypto: switch to crypto API for ESSIV generation

On Sun, Jun 23, 2019 at 11:30:41AM +0200, Ard Biesheuvel wrote:
>
> So with that in mind, I think we should decouple the multi-sector
> discussion and leave it for a followup series, preferably proposed by
> someone who also has access to some hardware to prototype it on.

Yes that makes sense.

Thanks,
--
Email: Herbert Xu <[email protected]>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt

2019-06-24 07:14:05

by Milan Broz

[permalink] [raw]
Subject: Re: [PATCH v4 0/6] crypto: switch to crypto API for ESSIV generation

On 23/06/2019 12:12, Herbert Xu wrote:
> On Sun, Jun 23, 2019 at 11:30:41AM +0200, Ard Biesheuvel wrote:
>>
>> So with that in mind, I think we should decouple the multi-sector
>> discussion and leave it for a followup series, preferably proposed by
>> someone who also has access to some hardware to prototype it on.
>
> Yes that makes sense.

Yes.

And TBH, the most important optimization for dm-crypt in this case
is processing eight 512-byte sectors as one 4k encryption block (because
the page cache will generate page-sized bios) and with XTS mode and a
linear IV (plain64), not ESSIV.

Dm-crypt can use 4k sectors directly; there are only two
blockers - you need LUKS2 to support it, and many devices
just do not advertise physical 4k sectors (many SSDs),
so switching to 4k could cause some problems with partial
4k writes (after a crash or power-fail).

The plan on the dm-crypt side is to focus more on using native 4k
sectors than on this micro-optimization in HW.

Milan

2019-06-24 07:20:36

by Milan Broz

[permalink] [raw]
Subject: Re: [PATCH v4 4/6] md: dm-crypt: switch to ESSIV crypto API template

On 21/06/2019 10:09, Ard Biesheuvel wrote:
> From: Ard Biesheuvel <[email protected]>
>
> Replace the explicit ESSIV handling in the dm-crypt driver with calls
> into the crypto API, which now possesses the capability to perform
> this processing within the crypto subsystem.

I tried a few crazy dm-crypt configurations and was not able to crash it this time :-)

So it definitely needs some more testing, but for now, I think it works.

A few comments below for this part:

> --- a/drivers/md/dm-crypt.c
> +++ b/drivers/md/dm-crypt.c

> static const struct crypt_iv_operations crypt_iv_benbi_ops = {
> .ctr = crypt_iv_benbi_ctr,
> .dtr = crypt_iv_benbi_dtr,
> @@ -2283,7 +2112,7 @@ static int crypt_ctr_ivmode(struct dm_target *ti, const char *ivmode)
> else if (strcmp(ivmode, "plain64be") == 0)
> cc->iv_gen_ops = &crypt_iv_plain64be_ops;
> else if (strcmp(ivmode, "essiv") == 0)
> - cc->iv_gen_ops = &crypt_iv_essiv_ops;
> + cc->iv_gen_ops = &crypt_iv_plain64_ops;

This is quite misleading - it looks like you are switching to plain64 here.
The reality is that it uses plain64 to feed the ESSIV wrapper.

So either it needs some comment to explain that here, or just keep a simple
essiv_iv_ops and duplicate that plain64 generator (it is 2 lines of code).

For clarity, I would prefer the second variant (duplicate ops) here.
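
Something like this (just a sketch; it mirrors the existing plain64
generator in dm-crypt):

static int crypt_iv_essiv_gen(struct crypt_config *cc, u8 *iv,
                              struct dm_crypt_request *dmreq)
{
        /* same as plain64: pass the little-endian sector number through;
         * the essiv() template does the actual IV transformation */
        memset(iv, 0, cc->iv_size);
        *(__le64 *)iv = cpu_to_le64(dmreq->iv_sector);

        return 0;
}

static const struct crypt_iv_operations crypt_iv_essiv_ops = {
        .generator = crypt_iv_essiv_gen
};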

> @@ -2515,8 +2357,18 @@ static int crypt_ctr_cipher_old(struct dm_target *ti, char *cipher_in, char *key
> if (!cipher_api)
> goto bad_mem;
>
> - ret = snprintf(cipher_api, CRYPTO_MAX_ALG_NAME,
> - "%s(%s)", chainmode, cipher);
> + if (*ivmode && !strcmp(*ivmode, "essiv")) {
> + if (!*ivopts) {
> + ti->error = "Digest algorithm missing for ESSIV mode";
> + return -EINVAL;
> + }
> + ret = snprintf(cipher_api, CRYPTO_MAX_ALG_NAME,
> + "essiv(%s(%s),%s,%s)", chainmode, cipher,
> + cipher, *ivopts);

This already becomes quite a long string (the limit is now 128 bytes), so we
should probably also check that the name does not get truncated. It will
perhaps fail later anyway, but I would rather add
if (ret < 0 || ret >= CRYPTO_MAX_ALG_NAME) {
...

> + } else {
> + ret = snprintf(cipher_api, CRYPTO_MAX_ALG_NAME,
> + "%s(%s)", chainmode, cipher);
> + }
> if (ret < 0) {
> kfree(cipher_api);
> goto bad_mem;
>
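
Spelled out, the check could reuse the existing bad_mem path (just a sketch):

	ret = snprintf(cipher_api, CRYPTO_MAX_ALG_NAME,
		       "essiv(%s(%s),%s,%s)", chainmode, cipher,
		       cipher, *ivopts);
	if (ret < 0 || ret >= CRYPTO_MAX_ALG_NAME) {
		/* snprintf() returns the would-be length, so a result of
		 * CRYPTO_MAX_ALG_NAME or more means the name was truncated */
		kfree(cipher_api);
		goto bad_mem;
	}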

Thanks,
Milan

2019-06-26 04:50:06

by Eric Biggers

[permalink] [raw]
Subject: Re: [dm-devel] [PATCH v4 0/6] crypto: switch to crypto API for ESSIV generation

On Sun, Jun 23, 2019 at 11:30:41AM +0200, Ard Biesheuvel wrote:
> On Fri, 21 Jun 2019 at 10:09, Ard Biesheuvel <[email protected]> wrote:
> >
> > From: Ard Biesheuvel <[email protected]>
> >
> ...
> >
> > - given that hardware already exists that can perform en/decryption including
> > ESSIV generation of a range of blocks, it would be useful to encapsulate
> > this in the ESSIV template, and teach at least dm-crypt how to use it
> > (given that it often processes 8 512-byte sectors at a time)
>
> I thought about this a bit more, and it occurred to me that the
> capability of issuing several sectors at a time and letting the lower
> layers increment the IV between sectors is orthogonal to whether ESSIV
> is being used or not, and so it probably belongs in another wrapper.
>
> I.e., if we define a skcipher template like dmplain64le(), which is
> defined as taking a sector size as part of the key, and which
> increments a 64 LE counter between sectors if multiple are passed, it
> can be used not only for ESSIV but also for XTS, which I assume can be
> h/w accelerated in the same way.
>
> So with that in mind, I think we should decouple the multi-sector
> discussion and leave it for a followup series, preferably proposed by
> someone who also has access to some hardware to prototype it on.
>

This makes sense, but if we're going to leave that functionality out of the
essiv template, I think we should revisit whether the essiv template takes a
__le64 sector number vs. just an IV matching the cipher block size. To me,
defining the IV to be a __le64 seems like a layering violation. Also, dm-crypt
and fscrypt already know how to zero-pad the sector number to form the full 16
byte IV, and your patch just duplicates that logic in the essiv template too,
which makes it more complicated than necessary.

E.g., the following incremental patch for the skcipher case would simplify it:

(You'd have to do it for the AEAD case too.)

diff --git a/crypto/essiv.c b/crypto/essiv.c
index 8e80814ec7d6..737e92ebcbd8 100644
--- a/crypto/essiv.c
+++ b/crypto/essiv.c
@@ -57,11 +57,6 @@ struct essiv_tfm_ctx {
struct crypto_shash *hash;
};

-struct essiv_skcipher_request_ctx {
- u8 iv[MAX_INNER_IV_SIZE];
- struct skcipher_request skcipher_req;
-};
-
struct essiv_aead_request_ctx {
u8 iv[2][MAX_INNER_IV_SIZE];
struct scatterlist src[4], dst[4];
@@ -161,39 +156,32 @@ static void essiv_skcipher_done(struct crypto_async_request *areq, int err)
skcipher_request_complete(req, err);
}

-static void essiv_skcipher_prepare_subreq(struct skcipher_request *req)
+static int essiv_skcipher_crypt(struct skcipher_request *req, bool enc)
{
struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
const struct essiv_tfm_ctx *tctx = crypto_skcipher_ctx(tfm);
- struct essiv_skcipher_request_ctx *rctx = skcipher_request_ctx(req);
- struct skcipher_request *subreq = &rctx->skcipher_req;
-
- memset(rctx->iv, 0, crypto_cipher_blocksize(tctx->essiv_cipher));
- memcpy(rctx->iv, req->iv, crypto_skcipher_ivsize(tfm));
+ struct skcipher_request *subreq = skcipher_request_ctx(req);

- crypto_cipher_encrypt_one(tctx->essiv_cipher, rctx->iv, rctx->iv);
+ crypto_cipher_encrypt_one(tctx->essiv_cipher, req->iv, req->iv);

skcipher_request_set_tfm(subreq, tctx->u.skcipher);
skcipher_request_set_crypt(subreq, req->src, req->dst, req->cryptlen,
- rctx->iv);
+ req->iv);
skcipher_request_set_callback(subreq, skcipher_request_flags(req),
essiv_skcipher_done, req);
+
+ return enc ? crypto_skcipher_encrypt(subreq) :
+ crypto_skcipher_decrypt(subreq);
}

static int essiv_skcipher_encrypt(struct skcipher_request *req)
{
- struct essiv_skcipher_request_ctx *rctx = skcipher_request_ctx(req);
-
- essiv_skcipher_prepare_subreq(req);
- return crypto_skcipher_encrypt(&rctx->skcipher_req);
+ return essiv_skcipher_crypt(req, true);
}

static int essiv_skcipher_decrypt(struct skcipher_request *req)
{
- struct essiv_skcipher_request_ctx *rctx = skcipher_request_ctx(req);
-
- essiv_skcipher_prepare_subreq(req);
- return crypto_skcipher_decrypt(&rctx->skcipher_req);
+ return essiv_skcipher_crypt(req, false);
}

static void essiv_aead_done(struct crypto_async_request *areq, int err)
@@ -300,24 +288,14 @@ static int essiv_skcipher_init_tfm(struct crypto_skcipher *tfm)
struct essiv_instance_ctx *ictx = skcipher_instance_ctx(inst);
struct essiv_tfm_ctx *tctx = crypto_skcipher_ctx(tfm);
struct crypto_skcipher *skcipher;
- unsigned int subreq_size;
int err;

- BUILD_BUG_ON(offsetofend(struct essiv_skcipher_request_ctx,
- skcipher_req) !=
- sizeof(struct essiv_skcipher_request_ctx));
-
skcipher = crypto_spawn_skcipher(&ictx->u.skcipher_spawn);
if (IS_ERR(skcipher))
return PTR_ERR(skcipher);

- subreq_size = FIELD_SIZEOF(struct essiv_skcipher_request_ctx,
- skcipher_req) +
- crypto_skcipher_reqsize(skcipher);
-
- crypto_skcipher_set_reqsize(tfm,
- offsetof(struct essiv_skcipher_request_ctx,
- skcipher_req) + subreq_size);
+ crypto_skcipher_set_reqsize(tfm, sizeof(struct skcipher_request) +
+ crypto_skcipher_reqsize(skcipher));

err = essiv_init_tfm(ictx, tctx);
if (err) {
@@ -567,9 +545,9 @@ static int essiv_create(struct crypto_template *tmpl, struct rtattr **tb)

skcipher_inst->alg.min_keysize = crypto_skcipher_alg_min_keysize(skcipher_alg);
skcipher_inst->alg.max_keysize = crypto_skcipher_alg_max_keysize(skcipher_alg);
- skcipher_inst->alg.ivsize = ESSIV_IV_SIZE;
- skcipher_inst->alg.chunksize = skcipher_alg->chunksize;
- skcipher_inst->alg.walksize = skcipher_alg->walksize;
+ skcipher_inst->alg.ivsize = crypto_skcipher_alg_ivsize(skcipher_alg);
+ skcipher_inst->alg.chunksize = crypto_skcipher_alg_chunksize(skcipher_alg);
+ skcipher_inst->alg.walksize = crypto_skcipher_alg_walksize(skcipher_alg);

skcipher_inst->free = essiv_skcipher_free_instance;