2020-10-29 10:06:22

by Gilad Ben-Yossef

Subject: [PATCH v3 0/4] crypto: switch to crypto API for EBOIV generation


This series creates an EBOIV template that produces an skcipher
transform which passes all operations through to the underlying
skcipher, while using the same skcipher and key to encrypt the input
IV. The IV is assumed to be a sector byte offset, although this is not
enforced.
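
For reference, a minimal sketch of the IV derivation the template
performs (assuming the usual dm-crypt convention of a little-endian
64-bit sector byte offset zero-padded to the IV size; the helper name
below is purely illustrative):

	u8 iv[AES_BLOCK_SIZE] = { 0 };

	/* the input IV is the byte offset of the sector, in LE, zero-padded */
	*(__le64 *)iv = cpu_to_le64(sector * sector_size);

	/* encrypt it with the very same cipher and key used for the data */
	encrypt_with_data_key(iv, sizeof(iv));	/* hypothetical helper */

	/* 'iv' now serves as the CBC IV for that sector's payload */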

This matches dm-crypt's use of EBOIV to provide BitLocker support,
and the template is therefore proposed as a replacement for the
dm-crypt implementation in patch #3.

Replacing the dm-crypt specific EBOIV implementation with a Crypto
API based one allows us to save a memory allocation per request, and
opens the way for the use of compatible alternative transform
providers, one of which, based on the Arm TrustZone CryptoCell
hardware, is proposed in patch #4.

Potential future work allowing the Crypto API to encapsulate the
handling of multiple subsequent blocks may also benefit from this
change.

The code has been tested both on an x86_64 virtual machine with the
dm-crypt test suite and on a 32-bit Arm board with the CryptoCell
hardware.

Since no official source of EBOIV test vectors is known, the test
vectors supplied in patch #2 are derived from sectors that are part
of the dm-crypt test suite images.

Signed-off-by: Gilad Ben-Yossef <[email protected]>
Cc: Eric Biggers <[email protected]>
Cc: Milan Broz <[email protected]>

Changes from v2:
- Remove needless internal include left over by mistake.

Changes from v1:
- Incorporated feedback from Eric Biggers regarding the EBOIV template code.
- Incorporated fixes for issues found by the kernel test robot.
- Replaced the Kconfig dependency between DM_CRYPT and EBOIV with an
  EBOIV default of DM_CRYPT, as suggested by Herbert Xu.



Gilad Ben-Yossef (4):
crypto: add eboiv as a crypto API template
crypto: add eboiv(cbc(aes)) test vectors
dm crypt: switch to EBOIV crypto API template
crypto: ccree: re-introduce ccree eboiv support

crypto/Kconfig | 23 +++
crypto/Makefile | 1 +
crypto/eboiv.c | 269 ++++++++++++++++++++++++++
crypto/tcrypt.c | 9 +
crypto/testmgr.c | 6 +
crypto/testmgr.h | 279 +++++++++++++++++++++++++++
drivers/crypto/ccree/cc_cipher.c | 132 +++++++++----
drivers/crypto/ccree/cc_crypto_ctx.h | 1 +
drivers/md/dm-crypt.c | 61 ++----
9 files changed, 702 insertions(+), 79 deletions(-)
create mode 100644 crypto/eboiv.c

--
2.28.0


2020-10-29 10:06:22

by Gilad Ben-Yossef

Subject: [PATCH v3 1/4] crypto: add eboiv as a crypto API template

Encrypted byte-offset initialization vector (EBOIV) is an IV
generation method that is used in some cases by dm-crypt for
supporting the BitLocker volume encryption used by Windows 8
and onwards, as a backwards-compatible alternative in lieu of XTS
support. Support for EBOIV was added to dm-crypt in 5.6.

This patch re-implements EBOIV as a generic crypto API template,
thus allowing the use of alternative architecture-specific optimized
implementations (as well as saving a memory allocation along the
way).
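
For illustration, a user of the template drives it through the regular
skcipher interface; a rough sketch follows (not taken from this series,
error handling trimmed, 'key', 'keylen', 'data' and 'sector' are
placeholders):

	struct crypto_skcipher *tfm;
	struct skcipher_request *req;
	struct scatterlist src, dst;
	DECLARE_CRYPTO_WAIT(wait);
	u8 iv[AES_BLOCK_SIZE] = { 0 };
	int err;

	tfm = crypto_alloc_skcipher("eboiv(cbc(aes))", 0, 0);
	err = crypto_skcipher_setkey(tfm, key, keylen);

	/* the IV carries the sector byte offset; the template encrypts it */
	*(__le64 *)iv = cpu_to_le64(sector * 512);

	req = skcipher_request_alloc(tfm, GFP_NOIO);
	sg_init_one(&src, data, 512);
	sg_init_one(&dst, data, 512);
	skcipher_request_set_callback(req, 0, crypto_req_done, &wait);
	skcipher_request_set_crypt(req, &src, &dst, 512, iv);
	err = crypto_wait_req(crypto_skcipher_encrypt(req), &wait);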

Signed-off-by: Gilad Ben-Yossef <[email protected]>
Cc: Eric Biggers <[email protected]>
---
crypto/Kconfig | 23 +++++
crypto/Makefile | 1 +
crypto/eboiv.c | 269 ++++++++++++++++++++++++++++++++++++++++++++++++
3 files changed, 293 insertions(+)
create mode 100644 crypto/eboiv.c

diff --git a/crypto/Kconfig b/crypto/Kconfig
index 094ef56ab7b4..57676a2de01e 100644
--- a/crypto/Kconfig
+++ b/crypto/Kconfig
@@ -538,6 +538,29 @@ config CRYPTO_ESSIV
combined with ESSIV the only feasible mode for h/w accelerated
block encryption)

+config CRYPTO_EBOIV
+ tristate "EBOIV support for block encryption"
+ default DM_CRYPT
+ select CRYPTO_CBC
+ help
+ Encrypted byte-offset initialization vector (EBOIV) is an IV
+ generation method that is used in some cases by dm-crypt for
+ supporting the BitLocker volume encryption used by Windows 8
+ and onwards as a backwards compatible version in lieu of XTS
+ support.
+
+ It uses the block encryption key as the symmetric key for a
+ block encryption pass applied to the sector offset of the block.
+ Additional details can be found at
+ https://www.jedec.org/sites/default/files/docs/JESD223C.pdf
+
+ Note that the use of EBOIV is not recommended for new deployments,
+ and so this only needs to be enabled when interoperability with
+ existing encrypted volumes of filesystems is required, or when
+ building for a particular system that requires it (e.g., when
+ interop with BitLocker encrypted volumes of Windows systems
+ prior to Windows 10 is required)
+
comment "Hash modes"

config CRYPTO_CMAC
diff --git a/crypto/Makefile b/crypto/Makefile
index b279483fba50..9f00b2f1ad9c 100644
--- a/crypto/Makefile
+++ b/crypto/Makefile
@@ -174,6 +174,7 @@ obj-$(CONFIG_CRYPTO_ZSTD) += zstd.o
obj-$(CONFIG_CRYPTO_OFB) += ofb.o
obj-$(CONFIG_CRYPTO_ECC) += ecc.o
obj-$(CONFIG_CRYPTO_ESSIV) += essiv.o
+obj-$(CONFIG_CRYPTO_EBOIV) += eboiv.o
obj-$(CONFIG_CRYPTO_CURVE25519) += curve25519-generic.o

ecdh_generic-y += ecdh.o
diff --git a/crypto/eboiv.c b/crypto/eboiv.c
new file mode 100644
index 000000000000..4c364aad2ff2
--- /dev/null
+++ b/crypto/eboiv.c
@@ -0,0 +1,269 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * EBOIV skcipher template for block encryption
+ *
+ * This template encapsulates the Encrypted byte-offset IV generation algorithm
+ * used by dm-crypt for BitLocker compatibility in CBC mode. It derives the IV
+ * for the skcipher used for block encryption by encrypting the input IV with
+ * the same skcipher and key. Usually, the input IV is a 64-bit sector byte
+ * offset (the byte offset of the start of the sector) in LE representation,
+ * zero-padded to the size of the IV, but this is not assumed by this driver.
+ *
+ * The typical use of this template is to instantiate the skcipher
+ * 'eboiv(cbc(aes))', which is the only instantiation used by
+ * dm-crypt for supporting BitLocker AES-CBC mode as specified in
+ * https://www.jedec.org/sites/default/files/docs/JESD223C.pdf
+ *
+ * Copyright (C) 2020 ARM Limited or its affiliates.
+ * Written by Gilad Ben-Yossef <[email protected]>
+ *
+ * Heavily based on:
+ *
+ * ESSIV skcipher and aead template for block encryption
+ * Copyright (c) 2019 Linaro, Ltd. <[email protected]>
+ *
+ * and
+ *
+ * dm-crypt eboiv code
+ * by Milan Broz <[email protected]> and Ard Biesheuvel <[email protected]>
+ *
+ */
+
+#include <crypto/internal/skcipher.h>
+#include <crypto/scatterwalk.h>
+#include <linux/module.h>
+
+struct eboiv_instance_ctx {
+ struct crypto_skcipher_spawn skcipher_spawn;
+};
+
+struct eboiv_tfm_ctx {
+ struct crypto_skcipher *skcipher;
+};
+
+struct eboiv_req_ctx {
+ struct skcipher_request *req;
+ bool enc;
+ struct scatterlist src;
+ struct scatterlist dst;
+ u8 iv[MAX_CIPHER_BLOCKSIZE] __aligned(__alignof__(__le64));
+ struct skcipher_request subreq;
+};
+
+static int eboiv_skcipher_setkey(struct crypto_skcipher *tfm,
+ const u8 *key, unsigned int keylen)
+{
+ struct eboiv_tfm_ctx *tctx = crypto_skcipher_ctx(tfm);
+
+ crypto_skcipher_clear_flags(tctx->skcipher, CRYPTO_TFM_REQ_MASK);
+ crypto_skcipher_set_flags(tctx->skcipher,
+ crypto_skcipher_get_flags(tfm) &
+ CRYPTO_TFM_REQ_MASK);
+ return crypto_skcipher_setkey(tctx->skcipher, key, keylen);
+}
+
+static void eboiv_skcipher_done(struct crypto_async_request *areq, int err)
+{
+ struct skcipher_request *req = areq->data;
+
+ if (err == -EINPROGRESS)
+ return;
+
+ skcipher_request_complete(req, err);
+}
+
+static int eboiv_skcipher_do_crypt(struct skcipher_request *req, bool enc)
+{
+ struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
+ const struct eboiv_tfm_ctx *tctx = crypto_skcipher_ctx(tfm);
+ struct eboiv_req_ctx *req_ctx = skcipher_request_ctx(req);
+ struct skcipher_request *subreq = &req_ctx->subreq;
+
+ skcipher_request_set_tfm(subreq, tctx->skcipher);
+ skcipher_request_set_crypt(subreq, req->src, req->dst, req->cryptlen, req_ctx->iv);
+ skcipher_request_set_callback(subreq, skcipher_request_flags(req), eboiv_skcipher_done,
+ req);
+
+ return enc ? crypto_skcipher_encrypt(subreq) :
+ crypto_skcipher_decrypt(subreq);
+}
+
+static void eboiv_skcipher_iv_done(struct crypto_async_request *areq, int err)
+{
+ struct eboiv_req_ctx *req_ctx = areq->data;
+ struct skcipher_request *req = req_ctx->req;
+
+ if (!err) {
+
+ err = eboiv_skcipher_do_crypt(req, req_ctx->enc);
+
+ if ((err == -EINPROGRESS) || (err == -EBUSY))
+ return;
+ }
+
+ skcipher_request_complete(req, err);
+}
+
+static int eboiv_skcipher_crypt(struct skcipher_request *req, bool enc)
+{
+ struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
+ const struct eboiv_tfm_ctx *tctx = crypto_skcipher_ctx(tfm);
+ struct eboiv_req_ctx *req_ctx = skcipher_request_ctx(req);
+ struct skcipher_request *subreq = &req_ctx->subreq;
+ size_t iv_size = crypto_skcipher_ivsize(tfm);
+ int ret;
+
+ /* The supplied IV is assumed to be in sector offset format (though
+ * we don't enforce it) - encrypt it.
+ */
+ sg_init_one(&req_ctx->src, page_address(ZERO_PAGE(0)), iv_size);
+ sg_init_one(&req_ctx->dst, req_ctx->iv, iv_size);
+ skcipher_request_set_tfm(subreq, tctx->skcipher);
+ skcipher_request_set_crypt(subreq, &req_ctx->src, &req_ctx->dst, iv_size, req->iv);
+
+ req_ctx->req = req;
+ req_ctx->enc = enc;
+
+ skcipher_request_set_callback(subreq, skcipher_request_flags(req), eboiv_skcipher_iv_done,
+ req_ctx);
+
+ ret = crypto_skcipher_encrypt(subreq);
+
+ if (ret)
+ return ret;
+
+ return eboiv_skcipher_do_crypt(req, enc);
+
+}
+
+static int eboiv_skcipher_encrypt(struct skcipher_request *req)
+{
+ return eboiv_skcipher_crypt(req, true);
+}
+
+static int eboiv_skcipher_decrypt(struct skcipher_request *req)
+{
+ return eboiv_skcipher_crypt(req, false);
+}
+
+static int eboiv_skcipher_init_tfm(struct crypto_skcipher *tfm)
+{
+ struct skcipher_instance *inst = skcipher_alg_instance(tfm);
+ struct eboiv_instance_ctx *ictx = skcipher_instance_ctx(inst);
+ struct eboiv_tfm_ctx *tctx = crypto_skcipher_ctx(tfm);
+ struct crypto_skcipher *skcipher;
+
+ skcipher = crypto_spawn_skcipher(&ictx->skcipher_spawn);
+ if (IS_ERR(skcipher))
+ return PTR_ERR(skcipher);
+
+ crypto_skcipher_set_reqsize(tfm, sizeof(struct eboiv_req_ctx) +
+ crypto_skcipher_reqsize(skcipher));
+
+ tctx->skcipher = skcipher;
+ return 0;
+}
+
+static void eboiv_skcipher_exit_tfm(struct crypto_skcipher *tfm)
+{
+ struct eboiv_tfm_ctx *tctx = crypto_skcipher_ctx(tfm);
+
+ crypto_free_skcipher(tctx->skcipher);
+}
+
+static void eboiv_skcipher_free_instance(struct skcipher_instance *inst)
+{
+ struct eboiv_instance_ctx *ictx = skcipher_instance_ctx(inst);
+
+ crypto_drop_skcipher(&ictx->skcipher_spawn);
+ kfree(inst);
+}
+
+static int eboiv_create(struct crypto_template *tmpl, struct rtattr **tb)
+{
+ struct skcipher_instance *inst;
+ struct eboiv_instance_ctx *ictx;
+ struct skcipher_alg *alg;
+ u32 mask;
+ int err;
+
+ err = crypto_check_attr_type(tb, CRYPTO_ALG_TYPE_SKCIPHER, &mask);
+ if (err)
+ return err;
+
+ inst = kzalloc(sizeof(*inst) + sizeof(*ictx), GFP_KERNEL);
+ if (!inst)
+ return -ENOMEM;
+
+ ictx = skcipher_instance_ctx(inst);
+
+ err = crypto_grab_skcipher(&ictx->skcipher_spawn, skcipher_crypto_instance(inst),
+ crypto_attr_alg_name(tb[1]), 0, mask);
+ if (err)
+ goto err_free_inst;
+ alg = crypto_spawn_skcipher_alg(&ictx->skcipher_spawn);
+
+ err = -EINVAL;
+ if (strncmp(alg->base.cra_name, "cbc(", 4) ||
+ crypto_skcipher_alg_ivsize(alg) != alg->base.cra_blocksize)
+ goto err_free_inst;
+
+ err = -ENAMETOOLONG;
+ if (snprintf(inst->alg.base.cra_name, CRYPTO_MAX_ALG_NAME, "eboiv(%s)",
+ alg->base.cra_name) >= CRYPTO_MAX_ALG_NAME)
+ goto err_free_inst;
+
+ if (snprintf(inst->alg.base.cra_driver_name, CRYPTO_MAX_ALG_NAME,
+ "eboiv(%s)", alg->base.cra_driver_name) >= CRYPTO_MAX_ALG_NAME)
+ goto err_free_inst;
+
+ inst->alg.base.cra_blocksize = alg->base.cra_blocksize;
+ inst->alg.base.cra_ctxsize = sizeof(struct eboiv_tfm_ctx);
+ inst->alg.base.cra_alignmask = alg->base.cra_alignmask;
+ inst->alg.base.cra_priority = alg->base.cra_priority;
+
+ inst->alg.setkey = eboiv_skcipher_setkey;
+ inst->alg.encrypt = eboiv_skcipher_encrypt;
+ inst->alg.decrypt = eboiv_skcipher_decrypt;
+ inst->alg.init = eboiv_skcipher_init_tfm;
+ inst->alg.exit = eboiv_skcipher_exit_tfm;
+
+ inst->alg.min_keysize = crypto_skcipher_alg_min_keysize(alg);
+ inst->alg.max_keysize = crypto_skcipher_alg_max_keysize(alg);
+ inst->alg.ivsize = crypto_skcipher_alg_ivsize(alg);
+
+ inst->free = eboiv_skcipher_free_instance;
+
+ err = skcipher_register_instance(tmpl, inst);
+
+ if (err) {
+err_free_inst:
+ eboiv_skcipher_free_instance(inst);
+ }
+
+ return err;
+}
+
+/* eboiv(cipher_name) */
+static struct crypto_template eboiv_tmpl = {
+ .name = "eboiv",
+ .create = eboiv_create,
+ .module = THIS_MODULE,
+};
+
+static int __init eboiv_module_init(void)
+{
+ return crypto_register_template(&eboiv_tmpl);
+}
+
+static void __exit eboiv_module_exit(void)
+{
+ crypto_unregister_template(&eboiv_tmpl);
+}
+
+subsys_initcall(eboiv_module_init);
+module_exit(eboiv_module_exit);
+
+MODULE_DESCRIPTION("EBOIV (BitLocker) skcipher wrapper for block encryption");
+MODULE_LICENSE("GPL v2");
+MODULE_ALIAS_CRYPTO("eboiv");
--
2.28.0

2020-10-29 10:06:24

by Gilad Ben-Yossef

Subject: [PATCH v3 2/4] crypto: add eboiv(cbc(aes)) test vectors

Add test vectors for the use of the EBOIV template with the cbc(aes)
mode, as it is used by dm-crypt for BitLocker support.

Vectors taken from dm-crypt test suite images.
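
For context, the .iv fields in the vectors below are the plain input
IVs handed to eboiv(cbc(aes)), i.e. the little-endian byte offset of
the sector zero-padded to the AES block size, matching the way
dm-crypt constructs them (a sketch, not part of this patch; 'sector'
and 'sector_size' are placeholders):

	u8 iv[16] = { 0 };

	/* byte offset of the start of the sector, LE, zero-padded */
	*(__le64 *)iv = cpu_to_le64(sector * sector_size);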

Signed-off-by: Gilad Ben-Yossef <[email protected]>
---
crypto/tcrypt.c | 9 ++
crypto/testmgr.c | 6 +
crypto/testmgr.h | 279 +++++++++++++++++++++++++++++++++++++++++++++++
3 files changed, 294 insertions(+)

diff --git a/crypto/tcrypt.c b/crypto/tcrypt.c
index 234b1adcfbcb..583c027fe8f5 100644
--- a/crypto/tcrypt.c
+++ b/crypto/tcrypt.c
@@ -2344,6 +2344,15 @@ static int do_test(const char *alg, u32 type, u32 mask, int m, u32 num_mb)
NULL, 0, 16, 8, speed_template_16);
break;

+ case 222:
+ test_acipher_speed("eboiv(cbc(aes))",
+ ENCRYPT, sec, NULL, 0,
+ speed_template_16_32);
+ test_acipher_speed("eboiv(cbc(aes))",
+ DECRYPT, sec, NULL, 0,
+ speed_template_16_32);
+ break;
+
case 300:
if (alg) {
test_hash_speed(alg, sec, generic_hash_speed_template);
diff --git a/crypto/testmgr.c b/crypto/testmgr.c
index a64a639eddfa..7d1f409fa807 100644
--- a/crypto/testmgr.c
+++ b/crypto/testmgr.c
@@ -4779,6 +4779,12 @@ static const struct alg_test_desc alg_test_descs[] = {
.alg = "drbg_pr_sha512",
.fips_allowed = 1,
.test = alg_test_null,
+ }, {
+ .alg = "eboiv(cbc(aes))",
+ .test = alg_test_skcipher,
+ .suite = {
+ .cipher = __VECS(eboiv_aes_cbc_tv_template)
+ }
}, {
.alg = "ecb(aes)",
.test = alg_test_skcipher,
diff --git a/crypto/testmgr.h b/crypto/testmgr.h
index 8c83811c0e35..6429f115289d 100644
--- a/crypto/testmgr.h
+++ b/crypto/testmgr.h
@@ -32736,6 +32736,285 @@ static const struct comp_testvec zstd_decomp_tv_template[] = {
},
};

+/* vectors taken from cryptsetup test suite images */
+static const struct cipher_testvec eboiv_aes_cbc_tv_template[] = {
+ {
+ /* cbc-aes-128 with 512b sector size, 0th sector */
+ .key = "\x6c\x96\xf8\x2a\x94\x2e\x87\x5f"
+ "\x02\x9c\x3d\xd9\xe4\x35\x17\x73",
+ .klen = 16,
+ .iv = "\x00\x50\x1a\x02\x00\x00\x00\x00"
+ "\x00\x00\x00\x00\x00\x00\x00\x00",
+ .ptext = "\xeb\x52\x90\x4e\x54\x46\x53\x20"
+ "\x20\x20\x20\x00\x02\x08\x00\x00"
+ "\x00\x00\x00\x00\x00\xf8\x00\x00"
+ "\x3f\x00\xff\x00\x00\x08\x00\x00"
+ "\x00\x00\x00\x00\x80\x00\x00\x00"
+ "\xff\x1f\x03\x00\x00\x00\x00\x00"
+ "\x55\x21\x00\x00\x00\x00\x00\x00"
+ "\x02\x00\x00\x00\x00\x00\x00\x00"
+ "\xf6\x00\x00\x00\x01\x00\x00\x00"
+ "\x13\x1e\xf1\xd4\x56\xf1\xd4\xf2"
+ "\x00\x00\x00\x00\xfa\x33\xc0\x8e"
+ "\xd0\xbc\x00\x7c\xfb\x68\xc0\x07"
+ "\x1f\x1e\x68\x66\x00\xcb\x88\x16"
+ "\x0e\x00\x66\x81\x3e\x03\x00\x4e"
+ "\x54\x46\x53\x75\x15\xb4\x41\xbb"
+ "\xaa\x55\xcd\x13\x72\x0c\x81\xfb"
+ "\x55\xaa\x75\x06\xf7\xc1\x01\x00"
+ "\x75\x03\xe9\xdd\x00\x1e\x83\xec"
+ "\x18\x68\x1a\x00\xb4\x48\x8a\x16"
+ "\x0e\x00\x8b\xf4\x16\x1f\xcd\x13"
+ "\x9f\x83\xc4\x18\x9e\x58\x1f\x72"
+ "\xe1\x3b\x06\x0b\x00\x75\xdb\xa3"
+ "\x0f\x00\xc1\x2e\x0f\x00\x04\x1e"
+ "\x5a\x33\xdb\xb9\x00\x20\x2b\xc8"
+ "\x66\xff\x06\x11\x00\x03\x16\x0f"
+ "\x00\x8e\xc2\xff\x06\x16\x00\xe8"
+ "\x4b\x00\x2b\xc8\x77\xef\xb8\x00"
+ "\xbb\xcd\x1a\x66\x23\xc0\x75\x2d"
+ "\x66\x81\xfb\x54\x43\x50\x41\x75"
+ "\x24\x81\xf9\x02\x01\x72\x1e\x16"
+ "\x68\x07\xbb\x16\x68\x52\x11\x16"
+ "\x68\x09\x00\x66\x53\x66\x53\x66"
+ "\x55\x16\x16\x16\x68\xb8\x01\x66"
+ "\x61\x0e\x07\xcd\x1a\x33\xc0\xbf"
+ "\x0a\x13\xb9\xf6\x0c\xfc\xf3\xaa"
+ "\xe9\xfe\x01\x90\x90\x66\x60\x1e"
+ "\x06\x66\xa1\x11\x00\x66\x03\x06"
+ "\x1c\x00\x1e\x66\x68\x00\x00\x00"
+ "\x00\x66\x50\x06\x53\x68\x01\x00"
+ "\x68\x10\x00\xb4\x42\x8a\x16\x0e"
+ "\x00\x16\x1f\x8b\xf4\xcd\x13\x66"
+ "\x59\x5b\x5a\x66\x59\x66\x59\x1f"
+ "\x0f\x82\x16\x00\x66\xff\x06\x11"
+ "\x00\x03\x16\x0f\x00\x8e\xc2\xff"
+ "\x0e\x16\x00\x75\xbc\x07\x1f\x66"
+ "\x61\xc3\xa1\xf6\x01\xe8\x09\x00"
+ "\xa1\xfa\x01\xe8\x03\x00\xf4\xeb"
+ "\xfd\x8b\xf0\xac\x3c\x00\x74\x09"
+ "\xb4\x0e\xbb\x07\x00\xcd\x10\xeb"
+ "\xf2\xc3\x0d\x0a\x41\x20\x64\x69"
+ "\x73\x6b\x20\x72\x65\x61\x64\x20"
+ "\x65\x72\x72\x6f\x72\x20\x6f\x63"
+ "\x63\x75\x72\x72\x65\x64\x00\x0d"
+ "\x0a\x42\x4f\x4f\x54\x4d\x47\x52"
+ "\x20\x69\x73\x20\x63\x6f\x6d\x70"
+ "\x72\x65\x73\x73\x65\x64\x00\x0d"
+ "\x0a\x50\x72\x65\x73\x73\x20\x43"
+ "\x74\x72\x6c\x2b\x41\x6c\x74\x2b"
+ "\x44\x65\x6c\x20\x74\x6f\x20\x72"
+ "\x65\x73\x74\x61\x72\x74\x0d\x0a"
+ "\x00\x00\x00\x00\x00\x00\x00\x00"
+ "\x00\x00\x00\x00\x00\x00\x00\x00"
+ "\x00\x00\x00\x00\x00\x00\x8a\x01"
+ "\xa7\x01\xbf\x01\x00\x00\x55\xaa",
+ .ctext = "\xae\x81\x3f\xac\x8a\xee\x8d\x6e"
+ "\xc2\x6b\xff\x87\x65\xe8\xfd\x0d"
+ "\xdc\x99\x61\xd3\x01\xa4\x26\x05"
+ "\x7e\xf1\x1e\x61\xde\x84\x1f\x21"
+ "\xf9\x27\x10\x31\x64\xdd\xee\xd2"
+ "\xce\xaf\xb6\x1a\x11\xcf\xde\x2d"
+ "\x1e\xd1\x0d\x42\x51\x89\x4c\x01"
+ "\x6f\xae\x26\x65\xff\x12\xe3\xa2"
+ "\xa4\x8a\x0c\x74\xed\xfa\xab\x05"
+ "\x7d\x6e\x58\x24\x92\x2e\xc2\x33"
+ "\xa2\x71\x3b\xec\x4e\x3b\x17\xde"
+ "\xfb\xc0\xaa\x04\x72\xe7\x76\x3d"
+ "\xf2\x3d\xc4\x50\x3d\x5e\xac\x5d"
+ "\x95\x4d\x02\xbc\xe1\xed\x46\x25"
+ "\xe5\x8e\x51\x36\x31\x6d\xb4\x2f"
+ "\x04\x09\x18\xa8\x66\x5f\xbc\xb2"
+ "\x39\x20\x79\xd7\x3c\xc3\xf7\x3e"
+ "\x3e\xe6\x31\xc8\x9a\xd8\xdc\xcf"
+ "\xb7\x49\x0e\x5b\x9e\xc3\xd4\x5d"
+ "\x2c\xa2\xaf\x7c\x9e\xb0\x5d\x45"
+ "\x18\x32\xbf\x50\xc8\x47\xc1\x55"
+ "\x1c\xd8\xf9\xc8\xdd\x97\x46\xba"
+ "\x3c\x8c\xbd\x14\x07\xf7\x14\xa7"
+ "\x5e\x44\xc3\x41\x86\x37\x46\xbc"
+ "\x45\xcb\xaf\xff\xf4\xba\xdf\xe5"
+ "\xfa\x62\xa6\x04\x74\x90\x24\x38"
+ "\x13\x37\x9f\xe5\x73\x76\x4e\x4c"
+ "\x1b\x57\xe5\x85\x14\xa2\x24\x24"
+ "\xb1\xc7\xa4\x47\xfa\x5f\x4d\x32"
+ "\xfa\x43\x0c\x71\x49\x45\x2a\xcb"
+ "\x5d\xd5\xa4\x75\x1d\x53\x21\x5c"
+ "\xa3\x0b\xd4\x23\xba\x20\x10\x00"
+ "\x27\x2b\x9f\xbe\x49\x0b\x2c\xe9"
+ "\x2f\x61\x19\x9f\x88\x06\x94\xb4"
+ "\x35\xdf\x15\xff\xc9\x76\x1b\x18"
+ "\xf4\x0d\xb5\x41\x53\x51\x4e\x41"
+ "\xf3\x1a\x60\x8f\x2d\x48\x60\x05"
+ "\xff\xb7\x5c\x46\x77\xd6\x29\xad"
+ "\x51\x10\x51\x7f\x97\x44\x4c\xa3"
+ "\xf5\x71\x95\x9c\x9d\x82\xfb\x3e"
+ "\x1c\xfd\x88\x99\xe3\x0c\xdf\xad"
+ "\xf3\xba\x0e\x44\x4a\x24\x8e\xf7"
+ "\x5d\x44\x41\x78\xc0\x0c\x80\xe2"
+ "\xaa\x52\xad\x65\xea\x09\x15\x52"
+ "\x67\x0d\xa2\x39\x74\xde\x4f\x64"
+ "\x1c\x5b\x58\x55\x2a\x30\xef\x8f"
+ "\xc1\x53\x4d\xe4\x3d\xae\x4c\x04"
+ "\xf5\x1c\x04\x47\xad\x8e\xdf\xd8"
+ "\x8f\xe2\x64\xcc\x37\xdd\xf1\x4f"
+ "\xab\xc2\x53\x4f\x7d\xe3\xb1\x57"
+ "\xfe\x3d\x5c\xea\xfb\xe0\x1b\xad"
+ "\xc3\x0f\xc5\xed\x42\xb1\xac\x88"
+ "\x96\x96\x7c\xd3\xe4\x44\xd1\xe6"
+ "\x88\xd2\xae\xdd\x7a\xd4\x20\x53"
+ "\xb2\x4f\x0f\x46\xe3\xf0\x08\x3b"
+ "\x57\xc3\x93\xe8\x47\x96\xae\xe1"
+ "\xfe\x85\x37\xdc\xd1\x77\xce\x12"
+ "\x6c\xa8\x73\x42\x84\x12\x0e\x95"
+ "\x20\x02\xdc\xfd\x5e\x26\x70\x2c"
+ "\x96\xa1\x72\x5d\x88\xdb\xf3\xb2"
+ "\x8c\xc0\xf9\x02\xea\x93\x7a\x62"
+ "\x64\x42\xbb\xe3\x50\x7d\xdd\x07"
+ "\xd0\xe6\x1a\x11\x82\xa3\xc5\x34"
+ "\x3b\x1f\x3a\x8a\xbe\xd4\xea\x23",
+ .len = 512,
+ }, {
+ /* cbc-aes-256 with 512b sector size, nth sector */
+ .key = "\x9c\x3c\x73\xa4\xad\x15\xac\xcc"
+ "\xc5\x02\x0c\x41\x00\xf5\xc2\x70"
+ "\x83\x66\x49\x65\x07\x9c\xf6\xb9"
+ "\xde\x18\x54\xa1\x76\xf0\x66\xee",
+ .klen = 32,
+ .iv = "\x00\x50\x1a\x02\x00\x00\x00\x00"
+ "\x00\x00\x00\x00\x00\x00\x00\x00",
+ .ptext = "\xeb\x52\x90\x4e\x54\x46\x53\x20"
+ "\x20\x20\x20\x00\x02\x08\x00\x00"
+ "\x00\x00\x00\x00\x00\xf8\x00\x00"
+ "\x3f\x00\xff\x00\x00\x28\x03\x00"
+ "\x00\x00\x00\x00\x80\x00\x00\x00"
+ "\xff\x1f\x03\x00\x00\x00\x00\x00"
+ "\x55\x21\x00\x00\x00\x00\x00\x00"
+ "\x02\x00\x00\x00\x00\x00\x00\x00"
+ "\xf6\x00\x00\x00\x01\x00\x00\x00"
+ "\x75\xf2\x02\xc0\x10\x03\xc0\x9a"
+ "\x00\x00\x00\x00\xfa\x33\xc0\x8e"
+ "\xd0\xbc\x00\x7c\xfb\x68\xc0\x07"
+ "\x1f\x1e\x68\x66\x00\xcb\x88\x16"
+ "\x0e\x00\x66\x81\x3e\x03\x00\x4e"
+ "\x54\x46\x53\x75\x15\xb4\x41\xbb"
+ "\xaa\x55\xcd\x13\x72\x0c\x81\xfb"
+ "\x55\xaa\x75\x06\xf7\xc1\x01\x00"
+ "\x75\x03\xe9\xdd\x00\x1e\x83\xec"
+ "\x18\x68\x1a\x00\xb4\x48\x8a\x16"
+ "\x0e\x00\x8b\xf4\x16\x1f\xcd\x13"
+ "\x9f\x83\xc4\x18\x9e\x58\x1f\x72"
+ "\xe1\x3b\x06\x0b\x00\x75\xdb\xa3"
+ "\x0f\x00\xc1\x2e\x0f\x00\x04\x1e"
+ "\x5a\x33\xdb\xb9\x00\x20\x2b\xc8"
+ "\x66\xff\x06\x11\x00\x03\x16\x0f"
+ "\x00\x8e\xc2\xff\x06\x16\x00\xe8"
+ "\x4b\x00\x2b\xc8\x77\xef\xb8\x00"
+ "\xbb\xcd\x1a\x66\x23\xc0\x75\x2d"
+ "\x66\x81\xfb\x54\x43\x50\x41\x75"
+ "\x24\x81\xf9\x02\x01\x72\x1e\x16"
+ "\x68\x07\xbb\x16\x68\x52\x11\x16"
+ "\x68\x09\x00\x66\x53\x66\x53\x66"
+ "\x55\x16\x16\x16\x68\xb8\x01\x66"
+ "\x61\x0e\x07\xcd\x1a\x33\xc0\xbf"
+ "\x0a\x13\xb9\xf6\x0c\xfc\xf3\xaa"
+ "\xe9\xfe\x01\x90\x90\x66\x60\x1e"
+ "\x06\x66\xa1\x11\x00\x66\x03\x06"
+ "\x1c\x00\x1e\x66\x68\x00\x00\x00"
+ "\x00\x66\x50\x06\x53\x68\x01\x00"
+ "\x68\x10\x00\xb4\x42\x8a\x16\x0e"
+ "\x00\x16\x1f\x8b\xf4\xcd\x13\x66"
+ "\x59\x5b\x5a\x66\x59\x66\x59\x1f"
+ "\x0f\x82\x16\x00\x66\xff\x06\x11"
+ "\x00\x03\x16\x0f\x00\x8e\xc2\xff"
+ "\x0e\x16\x00\x75\xbc\x07\x1f\x66"
+ "\x61\xc3\xa1\xf6\x01\xe8\x09\x00"
+ "\xa1\xfa\x01\xe8\x03\x00\xf4\xeb"
+ "\xfd\x8b\xf0\xac\x3c\x00\x74\x09"
+ "\xb4\x0e\xbb\x07\x00\xcd\x10\xeb"
+ "\xf2\xc3\x0d\x0a\x41\x20\x64\x69"
+ "\x73\x6b\x20\x72\x65\x61\x64\x20"
+ "\x65\x72\x72\x6f\x72\x20\x6f\x63"
+ "\x63\x75\x72\x72\x65\x64\x00\x0d"
+ "\x0a\x42\x4f\x4f\x54\x4d\x47\x52"
+ "\x20\x69\x73\x20\x63\x6f\x6d\x70"
+ "\x72\x65\x73\x73\x65\x64\x00\x0d"
+ "\x0a\x50\x72\x65\x73\x73\x20\x43"
+ "\x74\x72\x6c\x2b\x41\x6c\x74\x2b"
+ "\x44\x65\x6c\x20\x74\x6f\x20\x72"
+ "\x65\x73\x74\x61\x72\x74\x0d\x0a"
+ "\x00\x00\x00\x00\x00\x00\x00\x00"
+ "\x00\x00\x00\x00\x00\x00\x00\x00"
+ "\x00\x00\x00\x00\x00\x00\x8a\x01"
+ "\xa7\x01\xbf\x01\x00\x00\x55\xaa",
+ .ctext = "\x95\xec\x37\xc9\x2c\xf8\x91\x1d"
+ "\xdd\x17\xa5\x17\x8a\x56\x34\xec"
+ "\xba\x14\x06\x20\xc1\xd2\xb3\xda"
+ "\xf3\xfc\x05\x32\x93\x83\x3d\xbb"
+ "\x41\x30\xce\xc9\x2e\x92\x0d\x5a"
+ "\xb4\xab\x59\x21\xda\xbd\x1e\xfb"
+ "\xbf\xf9\x6d\x8d\x91\x85\x02\xdc"
+ "\xe4\xae\xb6\xf0\x26\x9f\xa2\xb0"
+ "\xb2\x8b\x52\xfe\xd1\x19\x33\x8f"
+ "\x9f\x4e\x39\x83\x6f\x89\x6f\x13"
+ "\xb6\xbd\x14\xfb\xff\x46\xf1\xf0"
+ "\x87\x8b\x2b\x92\x3d\x16\x57\x84"
+ "\x07\x6e\x5a\xd0\x03\xab\x2d\x22"
+ "\x4b\xf4\xc1\xbd\xe4\xb8\x6a\x72"
+ "\x76\x9a\xc4\xa5\x11\xb0\x56\xa9"
+ "\xee\xe4\xdd\x19\xc3\x79\x57\x61"
+ "\x03\x07\x16\x15\xc8\x17\x23\xcd"
+ "\xb7\x24\xa8\x06\x24\x8f\x26\x68"
+ "\xf8\x54\xce\xfe\xc7\xc0\x11\x75"
+ "\xc8\xa6\xc7\xf4\x4c\x49\x3c\x65"
+ "\x3e\x18\xbf\x16\xac\xd4\xc1\x97"
+ "\x7b\x02\x04\x78\x04\xf4\x14\x15"
+ "\x4c\x60\xbc\x22\xd6\xb1\x8a\x51"
+ "\xd5\x40\xe7\x9c\xf7\xd0\x63\xe0"
+ "\xe9\x84\x0d\xb8\x1b\x45\x69\x11"
+ "\x70\xfb\xeb\x0d\x07\xeb\x25\xaf"
+ "\x6e\x5c\x90\xae\x9a\x75\x6d\xbf"
+ "\x4c\xac\x60\xe6\xdb\x71\x38\xdc"
+ "\xa5\xaa\x3e\x75\xf4\xe5\x43\x59"
+ "\xe9\x86\x46\x9f\xc1\x5c\x77\x3e"
+ "\xeb\x8d\x31\xd3\x4b\x85\x68\xb8"
+ "\x10\x1f\x58\x27\xe8\x60\xeb\x65"
+ "\xe2\xda\x03\xbb\x06\x4e\x11\x44"
+ "\x1d\xd1\xe9\xa7\xae\x37\x48\x73"
+ "\xd1\xae\xa7\xae\x24\x9a\xb2\x62"
+ "\xb7\x12\x7f\x89\x7f\x98\x9b\x07"
+ "\xfe\xb6\xc6\x1a\x25\xc4\x78\xac"
+ "\x5b\x31\x30\x22\xf2\x89\x3f\x4d"
+ "\x8c\x4c\xfb\xe7\xb0\x7c\x92\x28"
+ "\x65\xb6\x8d\x04\x47\x48\xc3\xb4"
+ "\x77\xac\xe2\xa4\xe0\x08\x11\x7e"
+ "\x53\x08\xde\x4c\xec\xdf\x9e\x5b"
+ "\xbe\x24\x7a\x08\x6b\x53\xec\x29"
+ "\x96\x61\x30\xd2\xcb\x72\x80\x5b"
+ "\xba\x1f\xcf\xaa\x46\x6f\x5b\xe3"
+ "\xd5\x32\xb9\x7b\xe0\x69\x2f\xa2"
+ "\x0b\xb2\x43\xf1\x3e\x30\xdd\x76"
+ "\x73\xe1\xe7\x28\xac\x91\xa8\x9e"
+ "\x6e\x77\xac\x6d\x0b\xbc\x52\x98"
+ "\x65\x36\x2b\x10\x9b\x40\xe0\x1e"
+ "\x7d\x5b\xfb\xe3\x9d\xa7\x93\xff"
+ "\xfa\xe3\x42\xc5\x8e\x2c\xa4\x2a"
+ "\x5d\x0b\x18\xec\xfb\xcf\x18\xaf"
+ "\x06\x30\xf1\xa3\x9d\x51\xe3\xc3"
+ "\x2f\x10\x06\x3e\x74\x23\xbb\x14"
+ "\xfe\x05\x08\xf9\xd1\xb2\x47\x6a"
+ "\x17\xd3\x1b\x9d\xac\xc9\x55\x1f"
+ "\xde\x1f\x51\x03\x96\xe3\x64\xe0"
+ "\xd2\xd6\x01\xc2\x6c\x79\x07\x4f"
+ "\x08\x03\x78\x23\x16\xd2\x23\xec"
+ "\x3f\x5d\xb6\xdc\xa5\xea\xbf\x8f"
+ "\x47\xc0\x34\x3f\x61\x55\x30\x03"
+ "\x36\xd9\xbf\xbb\x39\x7b\x90\xfb"
+ "\x8d\xf1\x47\xa2\x03\x25\x3f\x92",
+ .len = 512,
+ },
+};
+
/* based on aes_cbc_tv_template */
static const struct cipher_testvec essiv_aes_cbc_tv_template[] = {
{
--
2.28.0

2020-10-29 10:06:32

by Gilad Ben-Yossef

Subject: [PATCH v3 3/4] dm crypt: switch to EBOIV crypto API template

Replace the explicit EBOIV handling in the dm-crypt driver with calls
into the crypto API, which is now able to perform this processing
within the crypto subsystem.
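
With this change, a BitLocker-compatible mapping ends up being handled
entirely by the crypto API; illustratively (the exact spec strings
depend on how the dm table is written):

	/* dm-crypt cipher spec:   aes-cbc-eboiv
	 * crypto API alg before:  cbc(aes) + open-coded IV encryption
	 * crypto API alg after:   eboiv(cbc(aes))
	 */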

Signed-off-by: Gilad Ben-Yossef <[email protected]>

---
drivers/md/dm-crypt.c | 61 ++++++++++++++-----------------------------
1 file changed, 19 insertions(+), 42 deletions(-)

diff --git a/drivers/md/dm-crypt.c b/drivers/md/dm-crypt.c
index 148960721254..86b7c7ee3225 100644
--- a/drivers/md/dm-crypt.c
+++ b/drivers/md/dm-crypt.c
@@ -716,47 +716,18 @@ static int crypt_iv_random_gen(struct crypt_config *cc, u8 *iv,
return 0;
}

-static int crypt_iv_eboiv_ctr(struct crypt_config *cc, struct dm_target *ti,
- const char *opts)
-{
- if (crypt_integrity_aead(cc)) {
- ti->error = "AEAD transforms not supported for EBOIV";
- return -EINVAL;
- }
-
- if (crypto_skcipher_blocksize(any_tfm(cc)) != cc->iv_size) {
- ti->error = "Block size of EBOIV cipher does "
- "not match IV size of block cipher";
- return -EINVAL;
- }
-
- return 0;
-}
-
static int crypt_iv_eboiv_gen(struct crypt_config *cc, u8 *iv,
struct dm_crypt_request *dmreq)
{
- u8 buf[MAX_CIPHER_BLOCKSIZE] __aligned(__alignof__(__le64));
- struct skcipher_request *req;
- struct scatterlist src, dst;
- struct crypto_wait wait;
- int err;
-
- req = skcipher_request_alloc(any_tfm(cc), GFP_NOIO);
- if (!req)
- return -ENOMEM;
-
- memset(buf, 0, cc->iv_size);
- *(__le64 *)buf = cpu_to_le64(dmreq->iv_sector * cc->sector_size);

- sg_init_one(&src, page_address(ZERO_PAGE(0)), cc->iv_size);
- sg_init_one(&dst, iv, cc->iv_size);
- skcipher_request_set_crypt(req, &src, &dst, cc->iv_size, buf);
- skcipher_request_set_callback(req, 0, crypto_req_done, &wait);
- err = crypto_wait_req(crypto_skcipher_encrypt(req), &wait);
- skcipher_request_free(req);
+ /*
+ * EBOIV encryption of the IV is handled by the crypto API,
+ * so compute and pass the sector offset here.
+ */
+ memset(iv, 0, cc->iv_size);
+ *(__le64 *)iv = cpu_to_le64(dmreq->iv_sector * cc->sector_size);

- return err;
+ return 0;
}

static void crypt_iv_elephant_dtr(struct crypt_config *cc)
@@ -771,18 +742,14 @@ static int crypt_iv_elephant_ctr(struct crypt_config *cc, struct dm_target *ti,
const char *opts)
{
struct iv_elephant_private *elephant = &cc->iv_gen_private.elephant;
- int r;
+ int r = 0;

elephant->tfm = crypto_alloc_skcipher("ecb(aes)", 0, 0);
if (IS_ERR(elephant->tfm)) {
r = PTR_ERR(elephant->tfm);
elephant->tfm = NULL;
- return r;
}

- r = crypt_iv_eboiv_ctr(cc, ti, NULL);
- if (r)
- crypt_iv_elephant_dtr(cc);
return r;
}

@@ -1092,7 +1059,6 @@ static struct crypt_iv_operations crypt_iv_random_ops = {
};

static struct crypt_iv_operations crypt_iv_eboiv_ops = {
- .ctr = crypt_iv_eboiv_ctr,
.generator = crypt_iv_eboiv_gen
};

@@ -2739,6 +2705,15 @@ static int crypt_ctr_cipher_new(struct dm_target *ti, char *cipher_in, char *key
cipher_api = buf;
}

+ if (*ivmode && (!strcmp(*ivmode, "eboiv") || !strcmp(*ivmode, "elephant"))) {
+ ret = snprintf(buf, CRYPTO_MAX_ALG_NAME, "eboiv(%s)", cipher_api);
+ if (ret < 0 || ret >= CRYPTO_MAX_ALG_NAME) {
+ ti->error = "Cannot allocate cipher string";
+ return -ENOMEM;
+ }
+ cipher_api = buf;
+ }
+
cc->key_parts = cc->tfms_count;

/* Allocate cipher */
@@ -2817,6 +2792,8 @@ static int crypt_ctr_cipher_old(struct dm_target *ti, char *cipher_in, char *key
}
ret = snprintf(cipher_api, CRYPTO_MAX_ALG_NAME,
"essiv(%s(%s),%s)", chainmode, cipher, *ivopts);
+ } else if (*ivmode && (!strcmp(*ivmode, "eboiv") || !strcmp(*ivmode, "elephant"))) {
+ ret = snprintf(cipher_api, CRYPTO_MAX_ALG_NAME, "eboiv(%s(%s))", chainmode, cipher);
} else {
ret = snprintf(cipher_api, CRYPTO_MAX_ALG_NAME,
"%s(%s)", chainmode, cipher);
--
2.28.0

2020-10-29 10:06:36

by Gilad Ben-Yossef

Subject: [PATCH v3 4/4] crypto: ccree: re-introduce ccree eboiv support

BitLocker EBOIV support, which was removed in
commit 1d8b41ff6991 ("crypto: ccree - remove bitlocker cipher"),
is reintroduced based on the crypto API's new support for
EBOIV.
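
Since the driver registers "eboiv(cbc(aes))" directly, the crypto API
will prefer it over the generic template instantiation whenever its
priority is higher. One way to check which provider was selected at
runtime (illustrative only, not part of this patch):

	struct crypto_skcipher *tfm;

	tfm = crypto_alloc_skcipher("eboiv(cbc(aes))", 0, 0);
	if (!IS_ERR(tfm)) {
		pr_info("eboiv provider: %s\n",
			crypto_tfm_alg_driver_name(crypto_skcipher_tfm(tfm)));
		crypto_free_skcipher(tfm);
	}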

Signed-off-by: Gilad Ben-Yossef <[email protected]>
Fixes: 1d8b41ff6991 ("crypto: ccree - remove bitlocker cipher")
---
drivers/crypto/ccree/cc_cipher.c | 132 +++++++++++++++++++--------
drivers/crypto/ccree/cc_crypto_ctx.h | 1 +
2 files changed, 96 insertions(+), 37 deletions(-)

diff --git a/drivers/crypto/ccree/cc_cipher.c b/drivers/crypto/ccree/cc_cipher.c
index dafa6577a845..a13ae60189ed 100644
--- a/drivers/crypto/ccree/cc_cipher.c
+++ b/drivers/crypto/ccree/cc_cipher.c
@@ -74,10 +74,14 @@ static int validate_keys_sizes(struct cc_cipher_ctx *ctx_p, u32 size)
case S_DIN_to_AES:
switch (size) {
case CC_AES_128_BIT_KEY_SIZE:
- case CC_AES_192_BIT_KEY_SIZE:
if (ctx_p->cipher_mode != DRV_CIPHER_XTS)
return 0;
break;
+ case CC_AES_192_BIT_KEY_SIZE:
+ if (ctx_p->cipher_mode != DRV_CIPHER_XTS &&
+ ctx_p->cipher_mode != DRV_CIPHER_BITLOCKER)
+ return 0;
+ break;
case CC_AES_256_BIT_KEY_SIZE:
return 0;
case (CC_AES_192_BIT_KEY_SIZE * 2):
@@ -120,6 +124,7 @@ static int validate_data_size(struct cc_cipher_ctx *ctx_p,
case DRV_CIPHER_ECB:
case DRV_CIPHER_CBC:
case DRV_CIPHER_ESSIV:
+ case DRV_CIPHER_BITLOCKER:
if (IS_ALIGNED(size, AES_BLOCK_SIZE))
return 0;
break;
@@ -345,7 +350,8 @@ static int cc_cipher_sethkey(struct crypto_skcipher *sktfm, const u8 *key,
}

if (ctx_p->cipher_mode == DRV_CIPHER_XTS ||
- ctx_p->cipher_mode == DRV_CIPHER_ESSIV) {
+ ctx_p->cipher_mode == DRV_CIPHER_ESSIV ||
+ ctx_p->cipher_mode == DRV_CIPHER_BITLOCKER) {
if (hki.hw_key1 == hki.hw_key2) {
dev_err(dev, "Illegal hw key numbers (%d,%d)\n",
hki.hw_key1, hki.hw_key2);
@@ -543,6 +549,7 @@ static void cc_setup_readiv_desc(struct crypto_tfm *tfm,
break;
case DRV_CIPHER_XTS:
case DRV_CIPHER_ESSIV:
+ case DRV_CIPHER_BITLOCKER:
/* IV */
hw_desc_init(&desc[*seq_size]);
set_setup_mode(&desc[*seq_size], SETUP_WRITE_STATE1);
@@ -597,6 +604,7 @@ static void cc_setup_state_desc(struct crypto_tfm *tfm,
break;
case DRV_CIPHER_XTS:
case DRV_CIPHER_ESSIV:
+ case DRV_CIPHER_BITLOCKER:
break;
default:
dev_err(dev, "Unsupported cipher mode (%d)\n", cipher_mode);
@@ -616,56 +624,70 @@ static void cc_setup_xex_state_desc(struct crypto_tfm *tfm,
int flow_mode = ctx_p->flow_mode;
int direction = req_ctx->gen_ctx.op_type;
dma_addr_t key_dma_addr = ctx_p->user.key_dma_addr;
- unsigned int key_len = (ctx_p->keylen / 2);
dma_addr_t iv_dma_addr = req_ctx->gen_ctx.iv_dma_addr;
- unsigned int key_offset = key_len;
+ unsigned int key_len;
+ unsigned int key_offset;

switch (cipher_mode) {
case DRV_CIPHER_ECB:
- break;
case DRV_CIPHER_CBC:
case DRV_CIPHER_CBC_CTS:
case DRV_CIPHER_CTR:
case DRV_CIPHER_OFB:
- break;
- case DRV_CIPHER_XTS:
- case DRV_CIPHER_ESSIV:
+ /* No secondary key for these ciphers, so just return */
+ return;

- if (cipher_mode == DRV_CIPHER_ESSIV)
- key_len = SHA256_DIGEST_SIZE;
+ case DRV_CIPHER_XTS:
+ /* Secondary key is same size as primary key and stored after primary key */
+ key_len = ctx_p->keylen / 2;
+ key_offset = key_len;
+ break;

- /* load XEX key */
- hw_desc_init(&desc[*seq_size]);
- set_cipher_mode(&desc[*seq_size], cipher_mode);
- set_cipher_config0(&desc[*seq_size], direction);
- if (cc_key_type(tfm) == CC_HW_PROTECTED_KEY) {
- set_hw_crypto_key(&desc[*seq_size],
- ctx_p->hw.key2_slot);
- } else {
- set_din_type(&desc[*seq_size], DMA_DLLI,
- (key_dma_addr + key_offset),
- key_len, NS_BIT);
- }
- set_xex_data_unit_size(&desc[*seq_size], nbytes);
- set_flow_mode(&desc[*seq_size], S_DIN_to_AES2);
- set_key_size_aes(&desc[*seq_size], key_len);
- set_setup_mode(&desc[*seq_size], SETUP_LOAD_XEX_KEY);
- (*seq_size)++;
+ case DRV_CIPHER_ESSIV:
+ /* Secondary key is a digest of primary key and stored after primary key */
+ key_len = SHA256_DIGEST_SIZE;
+ key_offset = ctx_p->keylen / 2;
+ break;

- /* Load IV */
- hw_desc_init(&desc[*seq_size]);
- set_setup_mode(&desc[*seq_size], SETUP_LOAD_STATE1);
- set_cipher_mode(&desc[*seq_size], cipher_mode);
- set_cipher_config0(&desc[*seq_size], direction);
- set_key_size_aes(&desc[*seq_size], key_len);
- set_flow_mode(&desc[*seq_size], flow_mode);
- set_din_type(&desc[*seq_size], DMA_DLLI, iv_dma_addr,
- CC_AES_BLOCK_SIZE, NS_BIT);
- (*seq_size)++;
+ case DRV_CIPHER_BITLOCKER:
+ /* Secondary key is same as primary key */
+ key_len = ctx_p->keylen;
+ key_offset = 0;
break;
+
default:
+ /* This should never really happen */
dev_err(dev, "Unsupported cipher mode (%d)\n", cipher_mode);
+ return;
+ }
+
+ /* load XEX key */
+ hw_desc_init(&desc[*seq_size]);
+ set_cipher_mode(&desc[*seq_size], cipher_mode);
+ set_cipher_config0(&desc[*seq_size], direction);
+ if (cc_key_type(tfm) == CC_HW_PROTECTED_KEY) {
+ set_hw_crypto_key(&desc[*seq_size],
+ ctx_p->hw.key2_slot);
+ } else {
+ set_din_type(&desc[*seq_size], DMA_DLLI,
+ (key_dma_addr + key_offset),
+ key_len, NS_BIT);
}
+ set_xex_data_unit_size(&desc[*seq_size], nbytes);
+ set_flow_mode(&desc[*seq_size], S_DIN_to_AES2);
+ set_key_size_aes(&desc[*seq_size], key_len);
+ set_setup_mode(&desc[*seq_size], SETUP_LOAD_XEX_KEY);
+ (*seq_size)++;
+
+ /* Load IV */
+ hw_desc_init(&desc[*seq_size]);
+ set_setup_mode(&desc[*seq_size], SETUP_LOAD_STATE1);
+ set_cipher_mode(&desc[*seq_size], cipher_mode);
+ set_cipher_config0(&desc[*seq_size], direction);
+ set_key_size_aes(&desc[*seq_size], key_len);
+ set_flow_mode(&desc[*seq_size], flow_mode);
+ set_din_type(&desc[*seq_size], DMA_DLLI, iv_dma_addr, CC_AES_BLOCK_SIZE, NS_BIT);
+ (*seq_size)++;
}

static int cc_out_flow_mode(struct cc_cipher_ctx *ctx_p)
@@ -702,6 +724,7 @@ static void cc_setup_key_desc(struct crypto_tfm *tfm,
case DRV_CIPHER_CTR:
case DRV_CIPHER_OFB:
case DRV_CIPHER_ECB:
+ case DRV_CIPHER_BITLOCKER:
/* Load key */
hw_desc_init(&desc[*seq_size]);
set_cipher_mode(&desc[*seq_size], cipher_mode);
@@ -1040,6 +1063,24 @@ static const struct cc_alg_template skcipher_algs[] = {
.std_body = CC_STD_NIST,
.sec_func = true,
},
+ {
+ .name = "eboiv(cbc(paes))",
+ .driver_name = "eboiv-cbc-paes-ccree",
+ .blocksize = AES_BLOCK_SIZE,
+ .template_skcipher = {
+ .setkey = cc_cipher_sethkey,
+ .encrypt = cc_cipher_encrypt,
+ .decrypt = cc_cipher_decrypt,
+ .min_keysize = CC_HW_KEY_SIZE,
+ .max_keysize = CC_HW_KEY_SIZE,
+ .ivsize = AES_BLOCK_SIZE,
+ },
+ .cipher_mode = DRV_CIPHER_BITLOCKER,
+ .flow_mode = S_DIN_to_AES,
+ .min_hw_rev = CC_HW_REV_712,
+ .std_body = CC_STD_NIST,
+ .sec_func = true,
+ },
{
.name = "ecb(paes)",
.driver_name = "ecb-paes-ccree",
@@ -1168,6 +1209,23 @@ static const struct cc_alg_template skcipher_algs[] = {
.min_hw_rev = CC_HW_REV_712,
.std_body = CC_STD_NIST,
},
+ {
+ .name = "eboiv(cbc(aes))",
+ .driver_name = "eboiv-cbc-aes-ccree",
+ .blocksize = AES_BLOCK_SIZE,
+ .template_skcipher = {
+ .setkey = cc_cipher_setkey,
+ .encrypt = cc_cipher_encrypt,
+ .decrypt = cc_cipher_decrypt,
+ .min_keysize = AES_MIN_KEY_SIZE,
+ .max_keysize = AES_MAX_KEY_SIZE,
+ .ivsize = AES_BLOCK_SIZE,
+ },
+ .cipher_mode = DRV_CIPHER_BITLOCKER,
+ .flow_mode = S_DIN_to_AES,
+ .min_hw_rev = CC_HW_REV_712,
+ .std_body = CC_STD_NIST,
+ },
{
.name = "ecb(aes)",
.driver_name = "ecb-aes-ccree",
diff --git a/drivers/crypto/ccree/cc_crypto_ctx.h b/drivers/crypto/ccree/cc_crypto_ctx.h
index bd9a1c0896b3..ccf960a0d989 100644
--- a/drivers/crypto/ccree/cc_crypto_ctx.h
+++ b/drivers/crypto/ccree/cc_crypto_ctx.h
@@ -108,6 +108,7 @@ enum drv_cipher_mode {
DRV_CIPHER_CBC_CTS = 11,
DRV_CIPHER_GCTR = 12,
DRV_CIPHER_ESSIV = 13,
+ DRV_CIPHER_BITLOCKER = 14,
DRV_CIPHER_RESERVE32B = S32_MAX
};

--
2.28.0

2020-10-30 10:34:25

by Milan Broz

Subject: Re: [PATCH v3 1/4] crypto: add eboiv as a crypto API template

On 29/10/2020 11:05, Gilad Ben-Yossef wrote:
>
> +config CRYPTO_EBOIV
> + tristate "EBOIV support for block encryption"
> + default DM_CRYPT
> + select CRYPTO_CBC
> + help
> + Encrypted byte-offset initialization vector (EBOIV) is an IV
> + generation method that is used in some cases by dm-crypt for
> + supporting the BitLocker volume encryption used by Windows 8
> + and onwards as a backwards compatible version in lieu of XTS
> + support.
> +
> + It uses the block encryption key as the symmetric key for a
> + block encryption pass applied to the sector offset of the block.
> + Additional details can be found at
> + https://www.jedec.org/sites/default/files/docs/JESD223C.pdf

This page is not available. Are you sure this is the proper documentation?

I think the only description we used (for dm-crypt) was Ferguson's original BitLocker doc:
https://download.microsoft.com/download/0/2/3/0238acaf-d3bf-4a6d-b3d6-0a0be4bbb36e/bitlockercipher200608.pdf

IIRC EBOIV was a shortcut I added to dm-crypt because we found no official terminology for this IV.
And after lunchtime, nobody invented anything better, so it stayed as it is now :-)

Milan

2020-12-02 08:31:20

by Gilad Ben-Yossef

Subject: Re: [PATCH v3 0/4] crypto: switch to crypto API for EBOIV generation

Hi,

On Thu, Oct 29, 2020 at 12:05 PM Gilad Ben-Yossef <[email protected]> wrote:
>
>
> This series creates an EBOIV template that produces a skcipher
> transform which passes through all operations to the skcipher, while
> using the same skcipher and key to encrypt the input IV, which is
> assumed to be a sector offset, although this is not enforced.

I hope I didn't miss anything, but it seems I didn't get an ACK, a NACK
or a request for changes to the latest iteration.

Any feedback is very much welcome.

Thanks,
Gilad


--
Gilad Ben-Yossef
Chief Coffee Drinker

values of β will give rise to dom!