Date: 2019-10-12 20:27:01
From: Eric Biggers
Subject: [RFT PATCH 0/3] crypto: s390 - convert to skcipher API

This series converts the glue code for the S390 CPACF implementations of
AES, DES, and 3DES modes from the deprecated "blkcipher" API to the
"skcipher" API. This is needed in order for the blkcipher API to be
removed.

I've compile-tested this patchset, and the conversion is very similar to
the one already done for many other crypto drivers. However, I don't have
the hardware to test it, nor is S390 CPACF supported by QEMU, so I really
need someone with the hardware to test it. You can do so by setting:

CONFIG_CRYPTO_HW=y
CONFIG_ZCRYPT=y
CONFIG_PKEY=y
CONFIG_CRYPTO_AES_S390=y
CONFIG_CRYPTO_PAES_S390=y
CONFIG_CRYPTO_DES_S390=y
# CONFIG_CRYPTO_MANAGER_DISABLE_TESTS is not set
CONFIG_DEBUG_KERNEL=y
CONFIG_CRYPTO_MANAGER_EXTRA_TESTS=y
CONFIG_CRYPTO_AES=y
CONFIG_CRYPTO_DES=y
CONFIG_CRYPTO_CBC=y
CONFIG_CRYPTO_CTR=y
CONFIG_CRYPTO_ECB=y
CONFIG_CRYPTO_XTS=y

Then boot and check for crypto self-test failures by running
'dmesg | grep alg'.

If there are test failures, please also check whether they were already
failing prior to this patchset.

This won't cover the "paes" ("protected key AES") algorithms, however,
since those don't have self-tests. If anyone has any way to test those,
please do so.
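
The registered implementations and their self-test status can also be
inspected in /proc/crypto. For anyone who wants to exercise one of these
ciphers from userspace beyond the boot-time self-tests, below is a minimal
sketch using the AF_ALG socket interface to run a single cbc(aes)
encryption. It is only an illustration: the zero key and buffer contents
are arbitrary test values, it binds the generic "cbc(aes)" name (the
"cbc-aes-s390" driver name can be used instead to force this
implementation), and as far as I understand it will not work for "paes"
as-is, since the paes setkey expects a protected-key blob obtained through
the pkey interface rather than a raw AES key.

/*
 * Minimal AF_ALG sketch, illustration only: encrypt one block with
 * cbc(aes). Error checking is omitted for brevity.
 */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/uio.h>
#include <sys/socket.h>
#include <linux/if_alg.h>

int main(void)
{
        struct sockaddr_alg sa = {
                .salg_family = AF_ALG,
                .salg_type = "skcipher",
                /* or "cbc-aes-s390" to bind this driver specifically */
                .salg_name = "cbc(aes)",
        };
        unsigned char key[16] = { 0 };
        unsigned char iv[16] = { 0 };
        unsigned char buf[16] = "sixteen bytes!!";
        char cbuf[CMSG_SPACE(4) + CMSG_SPACE(4 + sizeof(iv))] = { 0 };
        struct iovec io = { .iov_base = buf, .iov_len = sizeof(buf) };
        struct msghdr msg = {
                .msg_control = cbuf,
                .msg_controllen = sizeof(cbuf),
                .msg_iov = &io,
                .msg_iovlen = 1,
        };
        struct cmsghdr *cmsg;
        int tfmfd, opfd, i;

        tfmfd = socket(AF_ALG, SOCK_SEQPACKET, 0);
        bind(tfmfd, (struct sockaddr *)&sa, sizeof(sa));
        setsockopt(tfmfd, SOL_ALG, ALG_SET_KEY, key, sizeof(key));
        opfd = accept(tfmfd, NULL, 0);

        /* One sendmsg() carries the operation type, the IV, and the data. */
        cmsg = CMSG_FIRSTHDR(&msg);
        cmsg->cmsg_level = SOL_ALG;
        cmsg->cmsg_type = ALG_SET_OP;
        cmsg->cmsg_len = CMSG_LEN(4);
        *(__u32 *)CMSG_DATA(cmsg) = ALG_OP_ENCRYPT;

        cmsg = CMSG_NXTHDR(&msg, cmsg);
        cmsg->cmsg_level = SOL_ALG;
        cmsg->cmsg_type = ALG_SET_IV;
        cmsg->cmsg_len = CMSG_LEN(4 + sizeof(iv));
        ((struct af_alg_iv *)CMSG_DATA(cmsg))->ivlen = sizeof(iv);
        memcpy(((struct af_alg_iv *)CMSG_DATA(cmsg))->iv, iv, sizeof(iv));

        sendmsg(opfd, &msg, 0);
        read(opfd, buf, sizeof(buf));   /* buf now holds the ciphertext */

        for (i = 0; i < 16; i++)
                printf("%02x", buf[i]);
        printf("\n");

        close(opfd);
        close(tfmfd);
        return 0;
}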

Eric Biggers (3):
crypto: s390/aes - convert to skcipher API
crypto: s390/paes - convert to skcipher API
crypto: s390/des - convert to skcipher API

arch/s390/crypto/aes_s390.c | 609 ++++++++++++++---------------------
arch/s390/crypto/des_s390.c | 419 ++++++++++--------------
arch/s390/crypto/paes_s390.c | 414 ++++++++++--------------
3 files changed, 580 insertions(+), 862 deletions(-)

--
2.23.0


Date: 2019-10-12 20:27:20
From: Eric Biggers
Subject: [RFT PATCH 1/3] crypto: s390/aes - convert to skcipher API

From: Eric Biggers <[email protected]>

Convert the glue code for the S390 CPACF implementations of AES-ECB,
AES-CBC, AES-XTS, and AES-CTR from the deprecated "blkcipher" API to the
"skcipher" API. This is needed in order for the blkcipher API to be
removed.

Note: I made CTR use the same function for encryption and decryption,
since CTR encryption and decryption are identical.
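
To spell out why a single handler suffices, here is a conceptual sketch
only (not the CPACF driver code): CTR mode XORs the data with E_k(counter),
and XOR is its own inverse, so exactly the same routine can serve as both
the .encrypt and the .decrypt callback.

#include <stddef.h>

/* Conceptual sketch: CTR encryption and decryption are the same operation. */
typedef void (*block_encrypt_fn)(const void *key, const unsigned char in[16],
                                 unsigned char out[16]);

static void ctr_crypt(block_encrypt_fn encrypt_block, const void *key,
                      unsigned char counter[16], unsigned char *data,
                      size_t len)
{
        unsigned char keystream[16];
        size_t i, n;

        while (len) {
                n = len < 16 ? len : 16;
                encrypt_block(key, counter, keystream); /* E_k(counter) */
                for (i = 0; i < n; i++)
                        data[i] ^= keystream[i]; /* same XOR both directions */
                for (i = 16; i > 0; i--)         /* big-endian counter bump */
                        if (++counter[i - 1] != 0)
                                break;
                data += n;
                len -= n;
        }
}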

Signed-off-by: Eric Biggers <[email protected]>
---
arch/s390/crypto/aes_s390.c | 609 ++++++++++++++----------------------
1 file changed, 234 insertions(+), 375 deletions(-)

diff --git a/arch/s390/crypto/aes_s390.c b/arch/s390/crypto/aes_s390.c
index 9803e96d2924..ead0b2c9881d 100644
--- a/arch/s390/crypto/aes_s390.c
+++ b/arch/s390/crypto/aes_s390.c
@@ -44,7 +44,7 @@ struct s390_aes_ctx {
int key_len;
unsigned long fc;
union {
- struct crypto_sync_skcipher *blk;
+ struct crypto_skcipher *skcipher;
struct crypto_cipher *cip;
} fallback;
};
@@ -54,7 +54,7 @@ struct s390_xts_ctx {
u8 pcc_key[32];
int key_len;
unsigned long fc;
- struct crypto_sync_skcipher *fallback;
+ struct crypto_skcipher *fallback;
};

struct gcm_sg_walk {
@@ -178,66 +178,41 @@ static struct crypto_alg aes_alg = {
}
};

-static int setkey_fallback_blk(struct crypto_tfm *tfm, const u8 *key,
- unsigned int len)
+static int setkey_fallback_skcipher(struct crypto_skcipher *tfm, const u8 *key,
+ unsigned int len)
{
- struct s390_aes_ctx *sctx = crypto_tfm_ctx(tfm);
- unsigned int ret;
-
- crypto_sync_skcipher_clear_flags(sctx->fallback.blk,
- CRYPTO_TFM_REQ_MASK);
- crypto_sync_skcipher_set_flags(sctx->fallback.blk, tfm->crt_flags &
- CRYPTO_TFM_REQ_MASK);
-
- ret = crypto_sync_skcipher_setkey(sctx->fallback.blk, key, len);
-
- tfm->crt_flags &= ~CRYPTO_TFM_RES_MASK;
- tfm->crt_flags |= crypto_sync_skcipher_get_flags(sctx->fallback.blk) &
- CRYPTO_TFM_RES_MASK;
-
- return ret;
-}
-
-static int fallback_blk_dec(struct blkcipher_desc *desc,
- struct scatterlist *dst, struct scatterlist *src,
- unsigned int nbytes)
-{
- unsigned int ret;
- struct crypto_blkcipher *tfm = desc->tfm;
- struct s390_aes_ctx *sctx = crypto_blkcipher_ctx(tfm);
- SYNC_SKCIPHER_REQUEST_ON_STACK(req, sctx->fallback.blk);
-
- skcipher_request_set_sync_tfm(req, sctx->fallback.blk);
- skcipher_request_set_callback(req, desc->flags, NULL, NULL);
- skcipher_request_set_crypt(req, src, dst, nbytes, desc->info);
-
- ret = crypto_skcipher_decrypt(req);
+ struct s390_aes_ctx *sctx = crypto_skcipher_ctx(tfm);
+ int ret;

- skcipher_request_zero(req);
+ crypto_skcipher_clear_flags(sctx->fallback.skcipher,
+ CRYPTO_TFM_REQ_MASK);
+ crypto_skcipher_set_flags(sctx->fallback.skcipher,
+ crypto_skcipher_get_flags(tfm) &
+ CRYPTO_TFM_REQ_MASK);
+ ret = crypto_skcipher_setkey(sctx->fallback.skcipher, key, len);
+ crypto_skcipher_set_flags(tfm,
+ crypto_skcipher_get_flags(sctx->fallback.skcipher) &
+ CRYPTO_TFM_RES_MASK);
return ret;
}

-static int fallback_blk_enc(struct blkcipher_desc *desc,
- struct scatterlist *dst, struct scatterlist *src,
- unsigned int nbytes)
+static int fallback_skcipher_crypt(struct s390_aes_ctx *sctx,
+ struct skcipher_request *req,
+ unsigned long modifier)
{
- unsigned int ret;
- struct crypto_blkcipher *tfm = desc->tfm;
- struct s390_aes_ctx *sctx = crypto_blkcipher_ctx(tfm);
- SYNC_SKCIPHER_REQUEST_ON_STACK(req, sctx->fallback.blk);
-
- skcipher_request_set_sync_tfm(req, sctx->fallback.blk);
- skcipher_request_set_callback(req, desc->flags, NULL, NULL);
- skcipher_request_set_crypt(req, src, dst, nbytes, desc->info);
+ struct skcipher_request *subreq = skcipher_request_ctx(req);

- ret = crypto_skcipher_encrypt(req);
- return ret;
+ *subreq = *req;
+ skcipher_request_set_tfm(subreq, sctx->fallback.skcipher);
+ return (modifier & CPACF_DECRYPT) ?
+ crypto_skcipher_decrypt(subreq) :
+ crypto_skcipher_encrypt(subreq);
}

-static int ecb_aes_set_key(struct crypto_tfm *tfm, const u8 *in_key,
+static int ecb_aes_set_key(struct crypto_skcipher *tfm, const u8 *in_key,
unsigned int key_len)
{
- struct s390_aes_ctx *sctx = crypto_tfm_ctx(tfm);
+ struct s390_aes_ctx *sctx = crypto_skcipher_ctx(tfm);
unsigned long fc;

/* Pick the correct function code based on the key length */
@@ -248,111 +223,92 @@ static int ecb_aes_set_key(struct crypto_tfm *tfm, const u8 *in_key,
/* Check if the function code is available */
sctx->fc = (fc && cpacf_test_func(&km_functions, fc)) ? fc : 0;
if (!sctx->fc)
- return setkey_fallback_blk(tfm, in_key, key_len);
+ return setkey_fallback_skcipher(tfm, in_key, key_len);

sctx->key_len = key_len;
memcpy(sctx->key, in_key, key_len);
return 0;
}

-static int ecb_aes_crypt(struct blkcipher_desc *desc, unsigned long modifier,
- struct blkcipher_walk *walk)
+static int ecb_aes_crypt(struct skcipher_request *req, unsigned long modifier)
{
- struct s390_aes_ctx *sctx = crypto_blkcipher_ctx(desc->tfm);
+ struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
+ struct s390_aes_ctx *sctx = crypto_skcipher_ctx(tfm);
+ struct skcipher_walk walk;
unsigned int nbytes, n;
int ret;

- ret = blkcipher_walk_virt(desc, walk);
- while ((nbytes = walk->nbytes) >= AES_BLOCK_SIZE) {
+ if (unlikely(!sctx->fc))
+ return fallback_skcipher_crypt(sctx, req, modifier);
+
+ ret = skcipher_walk_virt(&walk, req, false);
+ while ((nbytes = walk.nbytes) != 0) {
/* only use complete blocks */
n = nbytes & ~(AES_BLOCK_SIZE - 1);
cpacf_km(sctx->fc | modifier, sctx->key,
- walk->dst.virt.addr, walk->src.virt.addr, n);
- ret = blkcipher_walk_done(desc, walk, nbytes - n);
+ walk.dst.virt.addr, walk.src.virt.addr, n);
+ ret = skcipher_walk_done(&walk, nbytes - n);
}
-
return ret;
}

-static int ecb_aes_encrypt(struct blkcipher_desc *desc,
- struct scatterlist *dst, struct scatterlist *src,
- unsigned int nbytes)
+static int ecb_aes_encrypt(struct skcipher_request *req)
{
- struct s390_aes_ctx *sctx = crypto_blkcipher_ctx(desc->tfm);
- struct blkcipher_walk walk;
-
- if (unlikely(!sctx->fc))
- return fallback_blk_enc(desc, dst, src, nbytes);
-
- blkcipher_walk_init(&walk, dst, src, nbytes);
- return ecb_aes_crypt(desc, 0, &walk);
+ return ecb_aes_crypt(req, 0);
}

-static int ecb_aes_decrypt(struct blkcipher_desc *desc,
- struct scatterlist *dst, struct scatterlist *src,
- unsigned int nbytes)
+static int ecb_aes_decrypt(struct skcipher_request *req)
{
- struct s390_aes_ctx *sctx = crypto_blkcipher_ctx(desc->tfm);
- struct blkcipher_walk walk;
-
- if (unlikely(!sctx->fc))
- return fallback_blk_dec(desc, dst, src, nbytes);
-
- blkcipher_walk_init(&walk, dst, src, nbytes);
- return ecb_aes_crypt(desc, CPACF_DECRYPT, &walk);
+ return ecb_aes_crypt(req, CPACF_DECRYPT);
}

-static int fallback_init_blk(struct crypto_tfm *tfm)
+static int fallback_init_skcipher(struct crypto_skcipher *tfm)
{
- const char *name = tfm->__crt_alg->cra_name;
- struct s390_aes_ctx *sctx = crypto_tfm_ctx(tfm);
+ const char *name = crypto_tfm_alg_name(&tfm->base);
+ struct s390_aes_ctx *sctx = crypto_skcipher_ctx(tfm);

- sctx->fallback.blk = crypto_alloc_sync_skcipher(name, 0,
- CRYPTO_ALG_NEED_FALLBACK);
+ sctx->fallback.skcipher = crypto_alloc_skcipher(name, 0,
+ CRYPTO_ALG_NEED_FALLBACK | CRYPTO_ALG_ASYNC);

- if (IS_ERR(sctx->fallback.blk)) {
+ if (IS_ERR(sctx->fallback.skcipher)) {
pr_err("Allocating AES fallback algorithm %s failed\n",
name);
- return PTR_ERR(sctx->fallback.blk);
+ return PTR_ERR(sctx->fallback.skcipher);
}

+ crypto_skcipher_set_reqsize(tfm, sizeof(struct skcipher_request) +
+ crypto_skcipher_reqsize(sctx->fallback.skcipher));
return 0;
}

-static void fallback_exit_blk(struct crypto_tfm *tfm)
+static void fallback_exit_skcipher(struct crypto_skcipher *tfm)
{
- struct s390_aes_ctx *sctx = crypto_tfm_ctx(tfm);
+ struct s390_aes_ctx *sctx = crypto_skcipher_ctx(tfm);

- crypto_free_sync_skcipher(sctx->fallback.blk);
+ crypto_free_skcipher(sctx->fallback.skcipher);
}

-static struct crypto_alg ecb_aes_alg = {
- .cra_name = "ecb(aes)",
- .cra_driver_name = "ecb-aes-s390",
- .cra_priority = 401, /* combo: aes + ecb + 1 */
- .cra_flags = CRYPTO_ALG_TYPE_BLKCIPHER |
- CRYPTO_ALG_NEED_FALLBACK,
- .cra_blocksize = AES_BLOCK_SIZE,
- .cra_ctxsize = sizeof(struct s390_aes_ctx),
- .cra_type = &crypto_blkcipher_type,
- .cra_module = THIS_MODULE,
- .cra_init = fallback_init_blk,
- .cra_exit = fallback_exit_blk,
- .cra_u = {
- .blkcipher = {
- .min_keysize = AES_MIN_KEY_SIZE,
- .max_keysize = AES_MAX_KEY_SIZE,
- .setkey = ecb_aes_set_key,
- .encrypt = ecb_aes_encrypt,
- .decrypt = ecb_aes_decrypt,
- }
- }
+static struct skcipher_alg ecb_aes_alg = {
+ .base.cra_name = "ecb(aes)",
+ .base.cra_driver_name = "ecb-aes-s390",
+ .base.cra_priority = 401, /* combo: aes + ecb + 1 */
+ .base.cra_flags = CRYPTO_ALG_NEED_FALLBACK,
+ .base.cra_blocksize = AES_BLOCK_SIZE,
+ .base.cra_ctxsize = sizeof(struct s390_aes_ctx),
+ .base.cra_module = THIS_MODULE,
+ .init = fallback_init_skcipher,
+ .exit = fallback_exit_skcipher,
+ .min_keysize = AES_MIN_KEY_SIZE,
+ .max_keysize = AES_MAX_KEY_SIZE,
+ .setkey = ecb_aes_set_key,
+ .encrypt = ecb_aes_encrypt,
+ .decrypt = ecb_aes_decrypt,
};

-static int cbc_aes_set_key(struct crypto_tfm *tfm, const u8 *in_key,
+static int cbc_aes_set_key(struct crypto_skcipher *tfm, const u8 *in_key,
unsigned int key_len)
{
- struct s390_aes_ctx *sctx = crypto_tfm_ctx(tfm);
+ struct s390_aes_ctx *sctx = crypto_skcipher_ctx(tfm);
unsigned long fc;

/* Pick the correct function code based on the key length */
@@ -363,17 +319,18 @@ static int cbc_aes_set_key(struct crypto_tfm *tfm, const u8 *in_key,
/* Check if the function code is available */
sctx->fc = (fc && cpacf_test_func(&kmc_functions, fc)) ? fc : 0;
if (!sctx->fc)
- return setkey_fallback_blk(tfm, in_key, key_len);
+ return setkey_fallback_skcipher(tfm, in_key, key_len);

sctx->key_len = key_len;
memcpy(sctx->key, in_key, key_len);
return 0;
}

-static int cbc_aes_crypt(struct blkcipher_desc *desc, unsigned long modifier,
- struct blkcipher_walk *walk)
+static int cbc_aes_crypt(struct skcipher_request *req, unsigned long modifier)
{
- struct s390_aes_ctx *sctx = crypto_blkcipher_ctx(desc->tfm);
+ struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
+ struct s390_aes_ctx *sctx = crypto_skcipher_ctx(tfm);
+ struct skcipher_walk walk;
unsigned int nbytes, n;
int ret;
struct {
@@ -381,134 +338,74 @@ static int cbc_aes_crypt(struct blkcipher_desc *desc, unsigned long modifier,
u8 key[AES_MAX_KEY_SIZE];
} param;

- ret = blkcipher_walk_virt(desc, walk);
- memcpy(param.iv, walk->iv, AES_BLOCK_SIZE);
+ if (unlikely(!sctx->fc))
+ return fallback_skcipher_crypt(sctx, req, modifier);
+
+ ret = skcipher_walk_virt(&walk, req, false);
+ if (ret)
+ return ret;
+ memcpy(param.iv, walk.iv, AES_BLOCK_SIZE);
memcpy(param.key, sctx->key, sctx->key_len);
- while ((nbytes = walk->nbytes) >= AES_BLOCK_SIZE) {
+ while ((nbytes = walk.nbytes) != 0) {
/* only use complete blocks */
n = nbytes & ~(AES_BLOCK_SIZE - 1);
cpacf_kmc(sctx->fc | modifier, &param,
- walk->dst.virt.addr, walk->src.virt.addr, n);
- ret = blkcipher_walk_done(desc, walk, nbytes - n);
+ walk.dst.virt.addr, walk.src.virt.addr, n);
+ memcpy(walk.iv, param.iv, AES_BLOCK_SIZE);
+ ret = skcipher_walk_done(&walk, nbytes - n);
}
- memcpy(walk->iv, param.iv, AES_BLOCK_SIZE);
return ret;
}

-static int cbc_aes_encrypt(struct blkcipher_desc *desc,
- struct scatterlist *dst, struct scatterlist *src,
- unsigned int nbytes)
+static int cbc_aes_encrypt(struct skcipher_request *req)
{
- struct s390_aes_ctx *sctx = crypto_blkcipher_ctx(desc->tfm);
- struct blkcipher_walk walk;
-
- if (unlikely(!sctx->fc))
- return fallback_blk_enc(desc, dst, src, nbytes);
-
- blkcipher_walk_init(&walk, dst, src, nbytes);
- return cbc_aes_crypt(desc, 0, &walk);
+ return cbc_aes_crypt(req, 0);
}

-static int cbc_aes_decrypt(struct blkcipher_desc *desc,
- struct scatterlist *dst, struct scatterlist *src,
- unsigned int nbytes)
+static int cbc_aes_decrypt(struct skcipher_request *req)
{
- struct s390_aes_ctx *sctx = crypto_blkcipher_ctx(desc->tfm);
- struct blkcipher_walk walk;
-
- if (unlikely(!sctx->fc))
- return fallback_blk_dec(desc, dst, src, nbytes);
-
- blkcipher_walk_init(&walk, dst, src, nbytes);
- return cbc_aes_crypt(desc, CPACF_DECRYPT, &walk);
+ return cbc_aes_crypt(req, CPACF_DECRYPT);
}

-static struct crypto_alg cbc_aes_alg = {
- .cra_name = "cbc(aes)",
- .cra_driver_name = "cbc-aes-s390",
- .cra_priority = 402, /* ecb-aes-s390 + 1 */
- .cra_flags = CRYPTO_ALG_TYPE_BLKCIPHER |
- CRYPTO_ALG_NEED_FALLBACK,
- .cra_blocksize = AES_BLOCK_SIZE,
- .cra_ctxsize = sizeof(struct s390_aes_ctx),
- .cra_type = &crypto_blkcipher_type,
- .cra_module = THIS_MODULE,
- .cra_init = fallback_init_blk,
- .cra_exit = fallback_exit_blk,
- .cra_u = {
- .blkcipher = {
- .min_keysize = AES_MIN_KEY_SIZE,
- .max_keysize = AES_MAX_KEY_SIZE,
- .ivsize = AES_BLOCK_SIZE,
- .setkey = cbc_aes_set_key,
- .encrypt = cbc_aes_encrypt,
- .decrypt = cbc_aes_decrypt,
- }
- }
+static struct skcipher_alg cbc_aes_alg = {
+ .base.cra_name = "cbc(aes)",
+ .base.cra_driver_name = "cbc-aes-s390",
+ .base.cra_priority = 402, /* ecb-aes-s390 + 1 */
+ .base.cra_flags = CRYPTO_ALG_NEED_FALLBACK,
+ .base.cra_blocksize = AES_BLOCK_SIZE,
+ .base.cra_ctxsize = sizeof(struct s390_aes_ctx),
+ .base.cra_module = THIS_MODULE,
+ .init = fallback_init_skcipher,
+ .exit = fallback_exit_skcipher,
+ .min_keysize = AES_MIN_KEY_SIZE,
+ .max_keysize = AES_MAX_KEY_SIZE,
+ .ivsize = AES_BLOCK_SIZE,
+ .setkey = cbc_aes_set_key,
+ .encrypt = cbc_aes_encrypt,
+ .decrypt = cbc_aes_decrypt,
};

-static int xts_fallback_setkey(struct crypto_tfm *tfm, const u8 *key,
- unsigned int len)
-{
- struct s390_xts_ctx *xts_ctx = crypto_tfm_ctx(tfm);
- unsigned int ret;
-
- crypto_sync_skcipher_clear_flags(xts_ctx->fallback,
- CRYPTO_TFM_REQ_MASK);
- crypto_sync_skcipher_set_flags(xts_ctx->fallback, tfm->crt_flags &
- CRYPTO_TFM_REQ_MASK);
-
- ret = crypto_sync_skcipher_setkey(xts_ctx->fallback, key, len);
-
- tfm->crt_flags &= ~CRYPTO_TFM_RES_MASK;
- tfm->crt_flags |= crypto_sync_skcipher_get_flags(xts_ctx->fallback) &
- CRYPTO_TFM_RES_MASK;
-
- return ret;
-}
-
-static int xts_fallback_decrypt(struct blkcipher_desc *desc,
- struct scatterlist *dst, struct scatterlist *src,
- unsigned int nbytes)
-{
- struct crypto_blkcipher *tfm = desc->tfm;
- struct s390_xts_ctx *xts_ctx = crypto_blkcipher_ctx(tfm);
- SYNC_SKCIPHER_REQUEST_ON_STACK(req, xts_ctx->fallback);
- unsigned int ret;
-
- skcipher_request_set_sync_tfm(req, xts_ctx->fallback);
- skcipher_request_set_callback(req, desc->flags, NULL, NULL);
- skcipher_request_set_crypt(req, src, dst, nbytes, desc->info);
-
- ret = crypto_skcipher_decrypt(req);
-
- skcipher_request_zero(req);
- return ret;
-}
-
-static int xts_fallback_encrypt(struct blkcipher_desc *desc,
- struct scatterlist *dst, struct scatterlist *src,
- unsigned int nbytes)
+static int xts_fallback_setkey(struct crypto_skcipher *tfm, const u8 *key,
+ unsigned int len)
{
- struct crypto_blkcipher *tfm = desc->tfm;
- struct s390_xts_ctx *xts_ctx = crypto_blkcipher_ctx(tfm);
- SYNC_SKCIPHER_REQUEST_ON_STACK(req, xts_ctx->fallback);
- unsigned int ret;
-
- skcipher_request_set_sync_tfm(req, xts_ctx->fallback);
- skcipher_request_set_callback(req, desc->flags, NULL, NULL);
- skcipher_request_set_crypt(req, src, dst, nbytes, desc->info);
-
- ret = crypto_skcipher_encrypt(req);
+ struct s390_xts_ctx *xts_ctx = crypto_skcipher_ctx(tfm);
+ int ret;

- skcipher_request_zero(req);
+ crypto_skcipher_clear_flags(xts_ctx->fallback, CRYPTO_TFM_REQ_MASK);
+ crypto_skcipher_set_flags(xts_ctx->fallback,
+ crypto_skcipher_get_flags(tfm) &
+ CRYPTO_TFM_REQ_MASK);
+ ret = crypto_skcipher_setkey(xts_ctx->fallback, key, len);
+ crypto_skcipher_set_flags(tfm,
+ crypto_skcipher_get_flags(xts_ctx->fallback) &
+ CRYPTO_TFM_RES_MASK);
return ret;
}

-static int xts_aes_set_key(struct crypto_tfm *tfm, const u8 *in_key,
+static int xts_aes_set_key(struct crypto_skcipher *tfm, const u8 *in_key,
unsigned int key_len)
{
- struct s390_xts_ctx *xts_ctx = crypto_tfm_ctx(tfm);
+ struct s390_xts_ctx *xts_ctx = crypto_skcipher_ctx(tfm);
unsigned long fc;
int err;

@@ -518,7 +415,7 @@ static int xts_aes_set_key(struct crypto_tfm *tfm, const u8 *in_key,

/* In fips mode only 128 bit or 256 bit keys are valid */
if (fips_enabled && key_len != 32 && key_len != 64) {
- tfm->crt_flags |= CRYPTO_TFM_RES_BAD_KEY_LEN;
+ crypto_skcipher_set_flags(tfm, CRYPTO_TFM_RES_BAD_KEY_LEN);
return -EINVAL;
}

@@ -539,10 +436,11 @@ static int xts_aes_set_key(struct crypto_tfm *tfm, const u8 *in_key,
return 0;
}

-static int xts_aes_crypt(struct blkcipher_desc *desc, unsigned long modifier,
- struct blkcipher_walk *walk)
+static int xts_aes_crypt(struct skcipher_request *req, unsigned long modifier)
{
- struct s390_xts_ctx *xts_ctx = crypto_blkcipher_ctx(desc->tfm);
+ struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
+ struct s390_xts_ctx *xts_ctx = crypto_skcipher_ctx(tfm);
+ struct skcipher_walk walk;
unsigned int offset, nbytes, n;
int ret;
struct {
@@ -557,113 +455,100 @@ static int xts_aes_crypt(struct blkcipher_desc *desc, unsigned long modifier,
u8 init[16];
} xts_param;

- ret = blkcipher_walk_virt(desc, walk);
+ if (req->cryptlen < AES_BLOCK_SIZE)
+ return -EINVAL;
+
+ if (unlikely(!xts_ctx->fc || (req->cryptlen % AES_BLOCK_SIZE) != 0)) {
+ struct skcipher_request *subreq = skcipher_request_ctx(req);
+
+ *subreq = *req;
+ skcipher_request_set_tfm(subreq, xts_ctx->fallback);
+ return (modifier & CPACF_DECRYPT) ?
+ crypto_skcipher_decrypt(subreq) :
+ crypto_skcipher_encrypt(subreq);
+ }
+
+ ret = skcipher_walk_virt(&walk, req, false);
+ if (ret)
+ return ret;
offset = xts_ctx->key_len & 0x10;
memset(pcc_param.block, 0, sizeof(pcc_param.block));
memset(pcc_param.bit, 0, sizeof(pcc_param.bit));
memset(pcc_param.xts, 0, sizeof(pcc_param.xts));
- memcpy(pcc_param.tweak, walk->iv, sizeof(pcc_param.tweak));
+ memcpy(pcc_param.tweak, walk.iv, sizeof(pcc_param.tweak));
memcpy(pcc_param.key + offset, xts_ctx->pcc_key, xts_ctx->key_len);
cpacf_pcc(xts_ctx->fc, pcc_param.key + offset);

memcpy(xts_param.key + offset, xts_ctx->key, xts_ctx->key_len);
memcpy(xts_param.init, pcc_param.xts, 16);

- while ((nbytes = walk->nbytes) >= AES_BLOCK_SIZE) {
+ while ((nbytes = walk.nbytes) != 0) {
/* only use complete blocks */
n = nbytes & ~(AES_BLOCK_SIZE - 1);
cpacf_km(xts_ctx->fc | modifier, xts_param.key + offset,
- walk->dst.virt.addr, walk->src.virt.addr, n);
- ret = blkcipher_walk_done(desc, walk, nbytes - n);
+ walk.dst.virt.addr, walk.src.virt.addr, n);
+ ret = skcipher_walk_done(&walk, nbytes - n);
}
return ret;
}

-static int xts_aes_encrypt(struct blkcipher_desc *desc,
- struct scatterlist *dst, struct scatterlist *src,
- unsigned int nbytes)
+static int xts_aes_encrypt(struct skcipher_request *req)
{
- struct s390_xts_ctx *xts_ctx = crypto_blkcipher_ctx(desc->tfm);
- struct blkcipher_walk walk;
-
- if (!nbytes)
- return -EINVAL;
-
- if (unlikely(!xts_ctx->fc || (nbytes % XTS_BLOCK_SIZE) != 0))
- return xts_fallback_encrypt(desc, dst, src, nbytes);
-
- blkcipher_walk_init(&walk, dst, src, nbytes);
- return xts_aes_crypt(desc, 0, &walk);
+ return xts_aes_crypt(req, 0);
}

-static int xts_aes_decrypt(struct blkcipher_desc *desc,
- struct scatterlist *dst, struct scatterlist *src,
- unsigned int nbytes)
+static int xts_aes_decrypt(struct skcipher_request *req)
{
- struct s390_xts_ctx *xts_ctx = crypto_blkcipher_ctx(desc->tfm);
- struct blkcipher_walk walk;
-
- if (!nbytes)
- return -EINVAL;
-
- if (unlikely(!xts_ctx->fc || (nbytes % XTS_BLOCK_SIZE) != 0))
- return xts_fallback_decrypt(desc, dst, src, nbytes);
-
- blkcipher_walk_init(&walk, dst, src, nbytes);
- return xts_aes_crypt(desc, CPACF_DECRYPT, &walk);
+ return xts_aes_crypt(req, CPACF_DECRYPT);
}

-static int xts_fallback_init(struct crypto_tfm *tfm)
+static int xts_fallback_init(struct crypto_skcipher *tfm)
{
- const char *name = tfm->__crt_alg->cra_name;
- struct s390_xts_ctx *xts_ctx = crypto_tfm_ctx(tfm);
+ const char *name = crypto_tfm_alg_name(&tfm->base);
+ struct s390_xts_ctx *xts_ctx = crypto_skcipher_ctx(tfm);

- xts_ctx->fallback = crypto_alloc_sync_skcipher(name, 0,
- CRYPTO_ALG_NEED_FALLBACK);
+ xts_ctx->fallback = crypto_alloc_skcipher(name, 0,
+ CRYPTO_ALG_NEED_FALLBACK | CRYPTO_ALG_ASYNC);

if (IS_ERR(xts_ctx->fallback)) {
pr_err("Allocating XTS fallback algorithm %s failed\n",
name);
return PTR_ERR(xts_ctx->fallback);
}
+ crypto_skcipher_set_reqsize(tfm, sizeof(struct skcipher_request) +
+ crypto_skcipher_reqsize(xts_ctx->fallback));
return 0;
}

-static void xts_fallback_exit(struct crypto_tfm *tfm)
+static void xts_fallback_exit(struct crypto_skcipher *tfm)
{
- struct s390_xts_ctx *xts_ctx = crypto_tfm_ctx(tfm);
+ struct s390_xts_ctx *xts_ctx = crypto_skcipher_ctx(tfm);

- crypto_free_sync_skcipher(xts_ctx->fallback);
+ crypto_free_skcipher(xts_ctx->fallback);
}

-static struct crypto_alg xts_aes_alg = {
- .cra_name = "xts(aes)",
- .cra_driver_name = "xts-aes-s390",
- .cra_priority = 402, /* ecb-aes-s390 + 1 */
- .cra_flags = CRYPTO_ALG_TYPE_BLKCIPHER |
- CRYPTO_ALG_NEED_FALLBACK,
- .cra_blocksize = AES_BLOCK_SIZE,
- .cra_ctxsize = sizeof(struct s390_xts_ctx),
- .cra_type = &crypto_blkcipher_type,
- .cra_module = THIS_MODULE,
- .cra_init = xts_fallback_init,
- .cra_exit = xts_fallback_exit,
- .cra_u = {
- .blkcipher = {
- .min_keysize = 2 * AES_MIN_KEY_SIZE,
- .max_keysize = 2 * AES_MAX_KEY_SIZE,
- .ivsize = AES_BLOCK_SIZE,
- .setkey = xts_aes_set_key,
- .encrypt = xts_aes_encrypt,
- .decrypt = xts_aes_decrypt,
- }
- }
+static struct skcipher_alg xts_aes_alg = {
+ .base.cra_name = "xts(aes)",
+ .base.cra_driver_name = "xts-aes-s390",
+ .base.cra_priority = 402, /* ecb-aes-s390 + 1 */
+ .base.cra_flags = CRYPTO_ALG_NEED_FALLBACK,
+ .base.cra_blocksize = AES_BLOCK_SIZE,
+ .base.cra_ctxsize = sizeof(struct s390_xts_ctx),
+ .base.cra_module = THIS_MODULE,
+ .init = xts_fallback_init,
+ .exit = xts_fallback_exit,
+ .min_keysize = 2 * AES_MIN_KEY_SIZE,
+ .max_keysize = 2 * AES_MAX_KEY_SIZE,
+ .ivsize = AES_BLOCK_SIZE,
+ .setkey = xts_aes_set_key,
+ .encrypt = xts_aes_encrypt,
+ .decrypt = xts_aes_decrypt,
};

-static int ctr_aes_set_key(struct crypto_tfm *tfm, const u8 *in_key,
+static int ctr_aes_set_key(struct crypto_skcipher *tfm, const u8 *in_key,
unsigned int key_len)
{
- struct s390_aes_ctx *sctx = crypto_tfm_ctx(tfm);
+ struct s390_aes_ctx *sctx = crypto_skcipher_ctx(tfm);
unsigned long fc;

/* Pick the correct function code based on the key length */
@@ -674,7 +559,7 @@ static int ctr_aes_set_key(struct crypto_tfm *tfm, const u8 *in_key,
/* Check if the function code is available */
sctx->fc = (fc && cpacf_test_func(&kmctr_functions, fc)) ? fc : 0;
if (!sctx->fc)
- return setkey_fallback_blk(tfm, in_key, key_len);
+ return setkey_fallback_skcipher(tfm, in_key, key_len);

sctx->key_len = key_len;
memcpy(sctx->key, in_key, key_len);
@@ -696,30 +581,34 @@ static unsigned int __ctrblk_init(u8 *ctrptr, u8 *iv, unsigned int nbytes)
return n;
}

-static int ctr_aes_crypt(struct blkcipher_desc *desc, unsigned long modifier,
- struct blkcipher_walk *walk)
+static int ctr_aes_crypt(struct skcipher_request *req)
{
- struct s390_aes_ctx *sctx = crypto_blkcipher_ctx(desc->tfm);
+ struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
+ struct s390_aes_ctx *sctx = crypto_skcipher_ctx(tfm);
u8 buf[AES_BLOCK_SIZE], *ctrptr;
+ struct skcipher_walk walk;
unsigned int n, nbytes;
int ret, locked;

+ if (unlikely(!sctx->fc))
+ return fallback_skcipher_crypt(sctx, req, 0);
+
locked = mutex_trylock(&ctrblk_lock);

- ret = blkcipher_walk_virt_block(desc, walk, AES_BLOCK_SIZE);
- while ((nbytes = walk->nbytes) >= AES_BLOCK_SIZE) {
+ ret = skcipher_walk_virt(&walk, req, false);
+ while ((nbytes = walk.nbytes) >= AES_BLOCK_SIZE) {
n = AES_BLOCK_SIZE;
+
if (nbytes >= 2*AES_BLOCK_SIZE && locked)
- n = __ctrblk_init(ctrblk, walk->iv, nbytes);
- ctrptr = (n > AES_BLOCK_SIZE) ? ctrblk : walk->iv;
- cpacf_kmctr(sctx->fc | modifier, sctx->key,
- walk->dst.virt.addr, walk->src.virt.addr,
- n, ctrptr);
+ n = __ctrblk_init(ctrblk, walk.iv, nbytes);
+ ctrptr = (n > AES_BLOCK_SIZE) ? ctrblk : walk.iv;
+ cpacf_kmctr(sctx->fc, sctx->key, walk.dst.virt.addr,
+ walk.src.virt.addr, n, ctrptr);
if (ctrptr == ctrblk)
- memcpy(walk->iv, ctrptr + n - AES_BLOCK_SIZE,
+ memcpy(walk.iv, ctrptr + n - AES_BLOCK_SIZE,
AES_BLOCK_SIZE);
- crypto_inc(walk->iv, AES_BLOCK_SIZE);
- ret = blkcipher_walk_done(desc, walk, nbytes - n);
+ crypto_inc(walk.iv, AES_BLOCK_SIZE);
+ ret = skcipher_walk_done(&walk, nbytes - n);
}
if (locked)
mutex_unlock(&ctrblk_lock);
@@ -727,67 +616,33 @@ static int ctr_aes_crypt(struct blkcipher_desc *desc, unsigned long modifier,
* final block may be < AES_BLOCK_SIZE, copy only nbytes
*/
if (nbytes) {
- cpacf_kmctr(sctx->fc | modifier, sctx->key,
- buf, walk->src.virt.addr,
- AES_BLOCK_SIZE, walk->iv);
- memcpy(walk->dst.virt.addr, buf, nbytes);
- crypto_inc(walk->iv, AES_BLOCK_SIZE);
- ret = blkcipher_walk_done(desc, walk, 0);
+ cpacf_kmctr(sctx->fc, sctx->key, buf, walk.src.virt.addr,
+ AES_BLOCK_SIZE, walk.iv);
+ memcpy(walk.dst.virt.addr, buf, nbytes);
+ crypto_inc(walk.iv, AES_BLOCK_SIZE);
+ ret = skcipher_walk_done(&walk, 0);
}

return ret;
}

-static int ctr_aes_encrypt(struct blkcipher_desc *desc,
- struct scatterlist *dst, struct scatterlist *src,
- unsigned int nbytes)
-{
- struct s390_aes_ctx *sctx = crypto_blkcipher_ctx(desc->tfm);
- struct blkcipher_walk walk;
-
- if (unlikely(!sctx->fc))
- return fallback_blk_enc(desc, dst, src, nbytes);
-
- blkcipher_walk_init(&walk, dst, src, nbytes);
- return ctr_aes_crypt(desc, 0, &walk);
-}
-
-static int ctr_aes_decrypt(struct blkcipher_desc *desc,
- struct scatterlist *dst, struct scatterlist *src,
- unsigned int nbytes)
-{
- struct s390_aes_ctx *sctx = crypto_blkcipher_ctx(desc->tfm);
- struct blkcipher_walk walk;
-
- if (unlikely(!sctx->fc))
- return fallback_blk_dec(desc, dst, src, nbytes);
-
- blkcipher_walk_init(&walk, dst, src, nbytes);
- return ctr_aes_crypt(desc, CPACF_DECRYPT, &walk);
-}
-
-static struct crypto_alg ctr_aes_alg = {
- .cra_name = "ctr(aes)",
- .cra_driver_name = "ctr-aes-s390",
- .cra_priority = 402, /* ecb-aes-s390 + 1 */
- .cra_flags = CRYPTO_ALG_TYPE_BLKCIPHER |
- CRYPTO_ALG_NEED_FALLBACK,
- .cra_blocksize = 1,
- .cra_ctxsize = sizeof(struct s390_aes_ctx),
- .cra_type = &crypto_blkcipher_type,
- .cra_module = THIS_MODULE,
- .cra_init = fallback_init_blk,
- .cra_exit = fallback_exit_blk,
- .cra_u = {
- .blkcipher = {
- .min_keysize = AES_MIN_KEY_SIZE,
- .max_keysize = AES_MAX_KEY_SIZE,
- .ivsize = AES_BLOCK_SIZE,
- .setkey = ctr_aes_set_key,
- .encrypt = ctr_aes_encrypt,
- .decrypt = ctr_aes_decrypt,
- }
- }
+static struct skcipher_alg ctr_aes_alg = {
+ .base.cra_name = "ctr(aes)",
+ .base.cra_driver_name = "ctr-aes-s390",
+ .base.cra_priority = 402, /* ecb-aes-s390 + 1 */
+ .base.cra_flags = CRYPTO_ALG_NEED_FALLBACK,
+ .base.cra_blocksize = 1,
+ .base.cra_ctxsize = sizeof(struct s390_aes_ctx),
+ .base.cra_module = THIS_MODULE,
+ .init = fallback_init_skcipher,
+ .exit = fallback_exit_skcipher,
+ .min_keysize = AES_MIN_KEY_SIZE,
+ .max_keysize = AES_MAX_KEY_SIZE,
+ .ivsize = AES_BLOCK_SIZE,
+ .setkey = ctr_aes_set_key,
+ .encrypt = ctr_aes_crypt,
+ .decrypt = ctr_aes_crypt,
+ .chunksize = AES_BLOCK_SIZE,
};

static int gcm_aes_setkey(struct crypto_aead *tfm, const u8 *key,
@@ -1116,24 +971,27 @@ static struct aead_alg gcm_aes_aead = {
},
};

-static struct crypto_alg *aes_s390_algs_ptr[5];
-static int aes_s390_algs_num;
+static struct crypto_alg *aes_s390_alg;
+static struct skcipher_alg *aes_s390_skcipher_algs[4];
+static int aes_s390_skciphers_num;
static struct aead_alg *aes_s390_aead_alg;

-static int aes_s390_register_alg(struct crypto_alg *alg)
+static int aes_s390_register_skcipher(struct skcipher_alg *alg)
{
int ret;

- ret = crypto_register_alg(alg);
+ ret = crypto_register_skcipher(alg);
if (!ret)
- aes_s390_algs_ptr[aes_s390_algs_num++] = alg;
+ aes_s390_skcipher_algs[aes_s390_skciphers_num++] = alg;
return ret;
}

static void aes_s390_fini(void)
{
- while (aes_s390_algs_num--)
- crypto_unregister_alg(aes_s390_algs_ptr[aes_s390_algs_num]);
+ if (aes_s390_alg)
+ crypto_unregister_alg(aes_s390_alg);
+ while (aes_s390_skciphers_num--)
+ crypto_unregister_skcipher(aes_s390_skcipher_algs[aes_s390_skciphers_num]);
if (ctrblk)
free_page((unsigned long) ctrblk);

@@ -1154,10 +1012,11 @@ static int __init aes_s390_init(void)
if (cpacf_test_func(&km_functions, CPACF_KM_AES_128) ||
cpacf_test_func(&km_functions, CPACF_KM_AES_192) ||
cpacf_test_func(&km_functions, CPACF_KM_AES_256)) {
- ret = aes_s390_register_alg(&aes_alg);
+ ret = crypto_register_alg(&aes_alg);
if (ret)
goto out_err;
- ret = aes_s390_register_alg(&ecb_aes_alg);
+ aes_s390_alg = &aes_alg;
+ ret = aes_s390_register_skcipher(&ecb_aes_alg);
if (ret)
goto out_err;
}
@@ -1165,14 +1024,14 @@ static int __init aes_s390_init(void)
if (cpacf_test_func(&kmc_functions, CPACF_KMC_AES_128) ||
cpacf_test_func(&kmc_functions, CPACF_KMC_AES_192) ||
cpacf_test_func(&kmc_functions, CPACF_KMC_AES_256)) {
- ret = aes_s390_register_alg(&cbc_aes_alg);
+ ret = aes_s390_register_skcipher(&cbc_aes_alg);
if (ret)
goto out_err;
}

if (cpacf_test_func(&km_functions, CPACF_KM_XTS_128) ||
cpacf_test_func(&km_functions, CPACF_KM_XTS_256)) {
- ret = aes_s390_register_alg(&xts_aes_alg);
+ ret = aes_s390_register_skcipher(&xts_aes_alg);
if (ret)
goto out_err;
}
@@ -1185,7 +1044,7 @@ static int __init aes_s390_init(void)
ret = -ENOMEM;
goto out_err;
}
- ret = aes_s390_register_alg(&ctr_aes_alg);
+ ret = aes_s390_register_skcipher(&ctr_aes_alg);
if (ret)
goto out_err;
}
--
2.23.0

Date: 2019-10-12 20:27:29
From: Eric Biggers
Subject: [RFT PATCH 2/3] crypto: s390/paes - convert to skcipher API

From: Eric Biggers <[email protected]>

Convert the glue code for the S390 CPACF protected key implementations
of AES-ECB, AES-CBC, AES-XTS, and AES-CTR from the deprecated
"blkcipher" API to the "skcipher" API. This is needed in order for the
blkcipher API to be removed.

Note: I made CTR use the same function for encryption and decryption,
since CTR encryption and decryption are identical.

Signed-off-by: Eric Biggers <[email protected]>
---
arch/s390/crypto/paes_s390.c | 414 +++++++++++++++--------------------
1 file changed, 174 insertions(+), 240 deletions(-)

diff --git a/arch/s390/crypto/paes_s390.c b/arch/s390/crypto/paes_s390.c
index 6184dceed340..c7119c617b6e 100644
--- a/arch/s390/crypto/paes_s390.c
+++ b/arch/s390/crypto/paes_s390.c
@@ -21,6 +21,7 @@
#include <linux/cpufeature.h>
#include <linux/init.h>
#include <linux/spinlock.h>
+#include <crypto/internal/skcipher.h>
#include <crypto/xts.h>
#include <asm/cpacf.h>
#include <asm/pkey.h>
@@ -123,27 +124,27 @@ static int __paes_set_key(struct s390_paes_ctx *ctx)
return ctx->fc ? 0 : -EINVAL;
}

-static int ecb_paes_init(struct crypto_tfm *tfm)
+static int ecb_paes_init(struct crypto_skcipher *tfm)
{
- struct s390_paes_ctx *ctx = crypto_tfm_ctx(tfm);
+ struct s390_paes_ctx *ctx = crypto_skcipher_ctx(tfm);

ctx->kb.key = NULL;

return 0;
}

-static void ecb_paes_exit(struct crypto_tfm *tfm)
+static void ecb_paes_exit(struct crypto_skcipher *tfm)
{
- struct s390_paes_ctx *ctx = crypto_tfm_ctx(tfm);
+ struct s390_paes_ctx *ctx = crypto_skcipher_ctx(tfm);

_free_kb_keybuf(&ctx->kb);
}

-static int ecb_paes_set_key(struct crypto_tfm *tfm, const u8 *in_key,
+static int ecb_paes_set_key(struct crypto_skcipher *tfm, const u8 *in_key,
unsigned int key_len)
{
int rc;
- struct s390_paes_ctx *ctx = crypto_tfm_ctx(tfm);
+ struct s390_paes_ctx *ctx = crypto_skcipher_ctx(tfm);

_free_kb_keybuf(&ctx->kb);
rc = _copy_key_to_kb(&ctx->kb, in_key, key_len);
@@ -151,91 +152,75 @@ static int ecb_paes_set_key(struct crypto_tfm *tfm, const u8 *in_key,
return rc;

if (__paes_set_key(ctx)) {
- tfm->crt_flags |= CRYPTO_TFM_RES_BAD_KEY_LEN;
+ crypto_skcipher_set_flags(tfm, CRYPTO_TFM_RES_BAD_KEY_LEN);
return -EINVAL;
}
return 0;
}

-static int ecb_paes_crypt(struct blkcipher_desc *desc,
- unsigned long modifier,
- struct blkcipher_walk *walk)
+static int ecb_paes_crypt(struct skcipher_request *req, unsigned long modifier)
{
- struct s390_paes_ctx *ctx = crypto_blkcipher_ctx(desc->tfm);
+ struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
+ struct s390_paes_ctx *ctx = crypto_skcipher_ctx(tfm);
+ struct skcipher_walk walk;
unsigned int nbytes, n, k;
int ret;

- ret = blkcipher_walk_virt(desc, walk);
- while ((nbytes = walk->nbytes) >= AES_BLOCK_SIZE) {
+ ret = skcipher_walk_virt(&walk, req, false);
+ while ((nbytes = walk.nbytes) != 0) {
/* only use complete blocks */
n = nbytes & ~(AES_BLOCK_SIZE - 1);
k = cpacf_km(ctx->fc | modifier, ctx->pk.protkey,
- walk->dst.virt.addr, walk->src.virt.addr, n);
+ walk.dst.virt.addr, walk.src.virt.addr, n);
if (k)
- ret = blkcipher_walk_done(desc, walk, nbytes - k);
+ ret = skcipher_walk_done(&walk, nbytes - k);
if (k < n) {
if (__paes_set_key(ctx) != 0)
- return blkcipher_walk_done(desc, walk, -EIO);
+ return skcipher_walk_done(&walk, -EIO);
}
}
return ret;
}

-static int ecb_paes_encrypt(struct blkcipher_desc *desc,
- struct scatterlist *dst, struct scatterlist *src,
- unsigned int nbytes)
+static int ecb_paes_encrypt(struct skcipher_request *req)
{
- struct blkcipher_walk walk;
-
- blkcipher_walk_init(&walk, dst, src, nbytes);
- return ecb_paes_crypt(desc, CPACF_ENCRYPT, &walk);
+ return ecb_paes_crypt(req, 0);
}

-static int ecb_paes_decrypt(struct blkcipher_desc *desc,
- struct scatterlist *dst, struct scatterlist *src,
- unsigned int nbytes)
+static int ecb_paes_decrypt(struct skcipher_request *req)
{
- struct blkcipher_walk walk;
-
- blkcipher_walk_init(&walk, dst, src, nbytes);
- return ecb_paes_crypt(desc, CPACF_DECRYPT, &walk);
+ return ecb_paes_crypt(req, CPACF_DECRYPT);
}

-static struct crypto_alg ecb_paes_alg = {
- .cra_name = "ecb(paes)",
- .cra_driver_name = "ecb-paes-s390",
- .cra_priority = 401, /* combo: aes + ecb + 1 */
- .cra_flags = CRYPTO_ALG_TYPE_BLKCIPHER,
- .cra_blocksize = AES_BLOCK_SIZE,
- .cra_ctxsize = sizeof(struct s390_paes_ctx),
- .cra_type = &crypto_blkcipher_type,
- .cra_module = THIS_MODULE,
- .cra_list = LIST_HEAD_INIT(ecb_paes_alg.cra_list),
- .cra_init = ecb_paes_init,
- .cra_exit = ecb_paes_exit,
- .cra_u = {
- .blkcipher = {
- .min_keysize = PAES_MIN_KEYSIZE,
- .max_keysize = PAES_MAX_KEYSIZE,
- .setkey = ecb_paes_set_key,
- .encrypt = ecb_paes_encrypt,
- .decrypt = ecb_paes_decrypt,
- }
- }
+static struct skcipher_alg ecb_paes_alg = {
+ .base.cra_name = "ecb(paes)",
+ .base.cra_driver_name = "ecb-paes-s390",
+ .base.cra_priority = 401, /* combo: aes + ecb + 1 */
+ .base.cra_blocksize = AES_BLOCK_SIZE,
+ .base.cra_ctxsize = sizeof(struct s390_paes_ctx),
+ .base.cra_module = THIS_MODULE,
+ .base.cra_list = LIST_HEAD_INIT(ecb_paes_alg.base.cra_list),
+ .init = ecb_paes_init,
+ .exit = ecb_paes_exit,
+ .min_keysize = PAES_MIN_KEYSIZE,
+ .max_keysize = PAES_MAX_KEYSIZE,
+ .setkey = ecb_paes_set_key,
+ .encrypt = ecb_paes_encrypt,
+ .decrypt = ecb_paes_decrypt,
};

-static int cbc_paes_init(struct crypto_tfm *tfm)
+static int cbc_paes_init(struct crypto_skcipher *tfm)
{
- struct s390_paes_ctx *ctx = crypto_tfm_ctx(tfm);
+ struct s390_paes_ctx *ctx = crypto_skcipher_ctx(tfm);

ctx->kb.key = NULL;

return 0;
}

-static void cbc_paes_exit(struct crypto_tfm *tfm)
+static void cbc_paes_exit(struct crypto_skcipher *tfm)
{
- struct s390_paes_ctx *ctx = crypto_tfm_ctx(tfm);
+ struct s390_paes_ctx *ctx = crypto_skcipher_ctx(tfm);

_free_kb_keybuf(&ctx->kb);
}
@@ -258,11 +243,11 @@ static int __cbc_paes_set_key(struct s390_paes_ctx *ctx)
return ctx->fc ? 0 : -EINVAL;
}

-static int cbc_paes_set_key(struct crypto_tfm *tfm, const u8 *in_key,
+static int cbc_paes_set_key(struct crypto_skcipher *tfm, const u8 *in_key,
unsigned int key_len)
{
int rc;
- struct s390_paes_ctx *ctx = crypto_tfm_ctx(tfm);
+ struct s390_paes_ctx *ctx = crypto_skcipher_ctx(tfm);

_free_kb_keybuf(&ctx->kb);
rc = _copy_key_to_kb(&ctx->kb, in_key, key_len);
@@ -270,16 +255,17 @@ static int cbc_paes_set_key(struct crypto_tfm *tfm, const u8 *in_key,
return rc;

if (__cbc_paes_set_key(ctx)) {
- tfm->crt_flags |= CRYPTO_TFM_RES_BAD_KEY_LEN;
+ crypto_skcipher_set_flags(tfm, CRYPTO_TFM_RES_BAD_KEY_LEN);
return -EINVAL;
}
return 0;
}

-static int cbc_paes_crypt(struct blkcipher_desc *desc, unsigned long modifier,
- struct blkcipher_walk *walk)
+static int cbc_paes_crypt(struct skcipher_request *req, unsigned long modifier)
{
- struct s390_paes_ctx *ctx = crypto_blkcipher_ctx(desc->tfm);
+ struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
+ struct s390_paes_ctx *ctx = crypto_skcipher_ctx(tfm);
+ struct skcipher_walk walk;
unsigned int nbytes, n, k;
int ret;
struct {
@@ -287,73 +273,60 @@ static int cbc_paes_crypt(struct blkcipher_desc *desc, unsigned long modifier,
u8 key[MAXPROTKEYSIZE];
} param;

- ret = blkcipher_walk_virt(desc, walk);
- memcpy(param.iv, walk->iv, AES_BLOCK_SIZE);
+ ret = skcipher_walk_virt(&walk, req, false);
+ if (ret)
+ return ret;
+ memcpy(param.iv, walk.iv, AES_BLOCK_SIZE);
memcpy(param.key, ctx->pk.protkey, MAXPROTKEYSIZE);
- while ((nbytes = walk->nbytes) >= AES_BLOCK_SIZE) {
+ while ((nbytes = walk.nbytes) != 0) {
/* only use complete blocks */
n = nbytes & ~(AES_BLOCK_SIZE - 1);
k = cpacf_kmc(ctx->fc | modifier, &param,
- walk->dst.virt.addr, walk->src.virt.addr, n);
- if (k)
- ret = blkcipher_walk_done(desc, walk, nbytes - k);
+ walk.dst.virt.addr, walk.src.virt.addr, n);
+ if (k) {
+ memcpy(walk.iv, param.iv, AES_BLOCK_SIZE);
+ ret = skcipher_walk_done(&walk, nbytes - k);
+ }
if (k < n) {
if (__cbc_paes_set_key(ctx) != 0)
- return blkcipher_walk_done(desc, walk, -EIO);
+ return skcipher_walk_done(&walk, -EIO);
memcpy(param.key, ctx->pk.protkey, MAXPROTKEYSIZE);
}
}
- memcpy(walk->iv, param.iv, AES_BLOCK_SIZE);
return ret;
}

-static int cbc_paes_encrypt(struct blkcipher_desc *desc,
- struct scatterlist *dst, struct scatterlist *src,
- unsigned int nbytes)
+static int cbc_paes_encrypt(struct skcipher_request *req)
{
- struct blkcipher_walk walk;
-
- blkcipher_walk_init(&walk, dst, src, nbytes);
- return cbc_paes_crypt(desc, 0, &walk);
+ return cbc_paes_crypt(req, 0);
}

-static int cbc_paes_decrypt(struct blkcipher_desc *desc,
- struct scatterlist *dst, struct scatterlist *src,
- unsigned int nbytes)
+static int cbc_paes_decrypt(struct skcipher_request *req)
{
- struct blkcipher_walk walk;
-
- blkcipher_walk_init(&walk, dst, src, nbytes);
- return cbc_paes_crypt(desc, CPACF_DECRYPT, &walk);
+ return cbc_paes_crypt(req, CPACF_DECRYPT);
}

-static struct crypto_alg cbc_paes_alg = {
- .cra_name = "cbc(paes)",
- .cra_driver_name = "cbc-paes-s390",
- .cra_priority = 402, /* ecb-paes-s390 + 1 */
- .cra_flags = CRYPTO_ALG_TYPE_BLKCIPHER,
- .cra_blocksize = AES_BLOCK_SIZE,
- .cra_ctxsize = sizeof(struct s390_paes_ctx),
- .cra_type = &crypto_blkcipher_type,
- .cra_module = THIS_MODULE,
- .cra_list = LIST_HEAD_INIT(cbc_paes_alg.cra_list),
- .cra_init = cbc_paes_init,
- .cra_exit = cbc_paes_exit,
- .cra_u = {
- .blkcipher = {
- .min_keysize = PAES_MIN_KEYSIZE,
- .max_keysize = PAES_MAX_KEYSIZE,
- .ivsize = AES_BLOCK_SIZE,
- .setkey = cbc_paes_set_key,
- .encrypt = cbc_paes_encrypt,
- .decrypt = cbc_paes_decrypt,
- }
- }
+static struct skcipher_alg cbc_paes_alg = {
+ .base.cra_name = "cbc(paes)",
+ .base.cra_driver_name = "cbc-paes-s390",
+ .base.cra_priority = 402, /* ecb-paes-s390 + 1 */
+ .base.cra_blocksize = AES_BLOCK_SIZE,
+ .base.cra_ctxsize = sizeof(struct s390_paes_ctx),
+ .base.cra_module = THIS_MODULE,
+ .base.cra_list = LIST_HEAD_INIT(cbc_paes_alg.base.cra_list),
+ .init = cbc_paes_init,
+ .exit = cbc_paes_exit,
+ .min_keysize = PAES_MIN_KEYSIZE,
+ .max_keysize = PAES_MAX_KEYSIZE,
+ .ivsize = AES_BLOCK_SIZE,
+ .setkey = cbc_paes_set_key,
+ .encrypt = cbc_paes_encrypt,
+ .decrypt = cbc_paes_decrypt,
};

-static int xts_paes_init(struct crypto_tfm *tfm)
+static int xts_paes_init(struct crypto_skcipher *tfm)
{
- struct s390_pxts_ctx *ctx = crypto_tfm_ctx(tfm);
+ struct s390_pxts_ctx *ctx = crypto_skcipher_ctx(tfm);

ctx->kb[0].key = NULL;
ctx->kb[1].key = NULL;
@@ -361,9 +334,9 @@ static int xts_paes_init(struct crypto_tfm *tfm)
return 0;
}

-static void xts_paes_exit(struct crypto_tfm *tfm)
+static void xts_paes_exit(struct crypto_skcipher *tfm)
{
- struct s390_pxts_ctx *ctx = crypto_tfm_ctx(tfm);
+ struct s390_pxts_ctx *ctx = crypto_skcipher_ctx(tfm);

_free_kb_keybuf(&ctx->kb[0]);
_free_kb_keybuf(&ctx->kb[1]);
@@ -391,11 +364,11 @@ static int __xts_paes_set_key(struct s390_pxts_ctx *ctx)
return ctx->fc ? 0 : -EINVAL;
}

-static int xts_paes_set_key(struct crypto_tfm *tfm, const u8 *in_key,
+static int xts_paes_set_key(struct crypto_skcipher *tfm, const u8 *in_key,
unsigned int xts_key_len)
{
int rc;
- struct s390_pxts_ctx *ctx = crypto_tfm_ctx(tfm);
+ struct s390_pxts_ctx *ctx = crypto_skcipher_ctx(tfm);
u8 ckey[2 * AES_MAX_KEY_SIZE];
unsigned int ckey_len, key_len;

@@ -414,7 +387,7 @@ static int xts_paes_set_key(struct crypto_tfm *tfm, const u8 *in_key,
return rc;

if (__xts_paes_set_key(ctx)) {
- tfm->crt_flags |= CRYPTO_TFM_RES_BAD_KEY_LEN;
+ crypto_skcipher_set_flags(tfm, CRYPTO_TFM_RES_BAD_KEY_LEN);
return -EINVAL;
}

@@ -427,13 +400,14 @@ static int xts_paes_set_key(struct crypto_tfm *tfm, const u8 *in_key,
AES_KEYSIZE_128 : AES_KEYSIZE_256;
memcpy(ckey, ctx->pk[0].protkey, ckey_len);
memcpy(ckey + ckey_len, ctx->pk[1].protkey, ckey_len);
- return xts_check_key(tfm, ckey, 2*ckey_len);
+ return xts_verify_key(tfm, ckey, 2*ckey_len);
}

-static int xts_paes_crypt(struct blkcipher_desc *desc, unsigned long modifier,
- struct blkcipher_walk *walk)
+static int xts_paes_crypt(struct skcipher_request *req, unsigned long modifier)
{
- struct s390_pxts_ctx *ctx = crypto_blkcipher_ctx(desc->tfm);
+ struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
+ struct s390_pxts_ctx *ctx = crypto_skcipher_ctx(tfm);
+ struct skcipher_walk walk;
unsigned int keylen, offset, nbytes, n, k;
int ret;
struct {
@@ -448,90 +422,76 @@ static int xts_paes_crypt(struct blkcipher_desc *desc, unsigned long modifier,
u8 init[16];
} xts_param;

- ret = blkcipher_walk_virt(desc, walk);
+ ret = skcipher_walk_virt(&walk, req, false);
+ if (ret)
+ return ret;
keylen = (ctx->pk[0].type == PKEY_KEYTYPE_AES_128) ? 48 : 64;
offset = (ctx->pk[0].type == PKEY_KEYTYPE_AES_128) ? 16 : 0;
retry:
memset(&pcc_param, 0, sizeof(pcc_param));
- memcpy(pcc_param.tweak, walk->iv, sizeof(pcc_param.tweak));
+ memcpy(pcc_param.tweak, walk.iv, sizeof(pcc_param.tweak));
memcpy(pcc_param.key + offset, ctx->pk[1].protkey, keylen);
cpacf_pcc(ctx->fc, pcc_param.key + offset);

memcpy(xts_param.key + offset, ctx->pk[0].protkey, keylen);
memcpy(xts_param.init, pcc_param.xts, 16);

- while ((nbytes = walk->nbytes) >= AES_BLOCK_SIZE) {
+ while ((nbytes = walk.nbytes) != 0) {
/* only use complete blocks */
n = nbytes & ~(AES_BLOCK_SIZE - 1);
k = cpacf_km(ctx->fc | modifier, xts_param.key + offset,
- walk->dst.virt.addr, walk->src.virt.addr, n);
+ walk.dst.virt.addr, walk.src.virt.addr, n);
if (k)
- ret = blkcipher_walk_done(desc, walk, nbytes - k);
+ ret = skcipher_walk_done(&walk, nbytes - k);
if (k < n) {
if (__xts_paes_set_key(ctx) != 0)
- return blkcipher_walk_done(desc, walk, -EIO);
+ return skcipher_walk_done(&walk, -EIO);
goto retry;
}
}
return ret;
}

-static int xts_paes_encrypt(struct blkcipher_desc *desc,
- struct scatterlist *dst, struct scatterlist *src,
- unsigned int nbytes)
+static int xts_paes_encrypt(struct skcipher_request *req)
{
- struct blkcipher_walk walk;
-
- blkcipher_walk_init(&walk, dst, src, nbytes);
- return xts_paes_crypt(desc, 0, &walk);
+ return xts_paes_crypt(req, 0);
}

-static int xts_paes_decrypt(struct blkcipher_desc *desc,
- struct scatterlist *dst, struct scatterlist *src,
- unsigned int nbytes)
+static int xts_paes_decrypt(struct skcipher_request *req)
{
- struct blkcipher_walk walk;
-
- blkcipher_walk_init(&walk, dst, src, nbytes);
- return xts_paes_crypt(desc, CPACF_DECRYPT, &walk);
+ return xts_paes_crypt(req, CPACF_DECRYPT);
}

-static struct crypto_alg xts_paes_alg = {
- .cra_name = "xts(paes)",
- .cra_driver_name = "xts-paes-s390",
- .cra_priority = 402, /* ecb-paes-s390 + 1 */
- .cra_flags = CRYPTO_ALG_TYPE_BLKCIPHER,
- .cra_blocksize = AES_BLOCK_SIZE,
- .cra_ctxsize = sizeof(struct s390_pxts_ctx),
- .cra_type = &crypto_blkcipher_type,
- .cra_module = THIS_MODULE,
- .cra_list = LIST_HEAD_INIT(xts_paes_alg.cra_list),
- .cra_init = xts_paes_init,
- .cra_exit = xts_paes_exit,
- .cra_u = {
- .blkcipher = {
- .min_keysize = 2 * PAES_MIN_KEYSIZE,
- .max_keysize = 2 * PAES_MAX_KEYSIZE,
- .ivsize = AES_BLOCK_SIZE,
- .setkey = xts_paes_set_key,
- .encrypt = xts_paes_encrypt,
- .decrypt = xts_paes_decrypt,
- }
- }
+static struct skcipher_alg xts_paes_alg = {
+ .base.cra_name = "xts(paes)",
+ .base.cra_driver_name = "xts-paes-s390",
+ .base.cra_priority = 402, /* ecb-paes-s390 + 1 */
+ .base.cra_blocksize = AES_BLOCK_SIZE,
+ .base.cra_ctxsize = sizeof(struct s390_pxts_ctx),
+ .base.cra_module = THIS_MODULE,
+ .base.cra_list = LIST_HEAD_INIT(xts_paes_alg.base.cra_list),
+ .init = xts_paes_init,
+ .exit = xts_paes_exit,
+ .min_keysize = 2 * PAES_MIN_KEYSIZE,
+ .max_keysize = 2 * PAES_MAX_KEYSIZE,
+ .ivsize = AES_BLOCK_SIZE,
+ .setkey = xts_paes_set_key,
+ .encrypt = xts_paes_encrypt,
+ .decrypt = xts_paes_decrypt,
};

-static int ctr_paes_init(struct crypto_tfm *tfm)
+static int ctr_paes_init(struct crypto_skcipher *tfm)
{
- struct s390_paes_ctx *ctx = crypto_tfm_ctx(tfm);
+ struct s390_paes_ctx *ctx = crypto_skcipher_ctx(tfm);

ctx->kb.key = NULL;

return 0;
}

-static void ctr_paes_exit(struct crypto_tfm *tfm)
+static void ctr_paes_exit(struct crypto_skcipher *tfm)
{
- struct s390_paes_ctx *ctx = crypto_tfm_ctx(tfm);
+ struct s390_paes_ctx *ctx = crypto_skcipher_ctx(tfm);

_free_kb_keybuf(&ctx->kb);
}
@@ -555,11 +515,11 @@ static int __ctr_paes_set_key(struct s390_paes_ctx *ctx)
return ctx->fc ? 0 : -EINVAL;
}

-static int ctr_paes_set_key(struct crypto_tfm *tfm, const u8 *in_key,
+static int ctr_paes_set_key(struct crypto_skcipher *tfm, const u8 *in_key,
unsigned int key_len)
{
int rc;
- struct s390_paes_ctx *ctx = crypto_tfm_ctx(tfm);
+ struct s390_paes_ctx *ctx = crypto_skcipher_ctx(tfm);

_free_kb_keybuf(&ctx->kb);
rc = _copy_key_to_kb(&ctx->kb, in_key, key_len);
@@ -567,7 +527,7 @@ static int ctr_paes_set_key(struct crypto_tfm *tfm, const u8 *in_key,
return rc;

if (__ctr_paes_set_key(ctx)) {
- tfm->crt_flags |= CRYPTO_TFM_RES_BAD_KEY_LEN;
+ crypto_skcipher_set_flags(tfm, CRYPTO_TFM_RES_BAD_KEY_LEN);
return -EINVAL;
}
return 0;
@@ -588,37 +548,37 @@ static unsigned int __ctrblk_init(u8 *ctrptr, u8 *iv, unsigned int nbytes)
return n;
}

-static int ctr_paes_crypt(struct blkcipher_desc *desc, unsigned long modifier,
- struct blkcipher_walk *walk)
+static int ctr_paes_crypt(struct skcipher_request *req)
{
- struct s390_paes_ctx *ctx = crypto_blkcipher_ctx(desc->tfm);
+ struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
+ struct s390_paes_ctx *ctx = crypto_skcipher_ctx(tfm);
u8 buf[AES_BLOCK_SIZE], *ctrptr;
+ struct skcipher_walk walk;
unsigned int nbytes, n, k;
int ret, locked;

locked = spin_trylock(&ctrblk_lock);

- ret = blkcipher_walk_virt_block(desc, walk, AES_BLOCK_SIZE);
- while ((nbytes = walk->nbytes) >= AES_BLOCK_SIZE) {
+ ret = skcipher_walk_virt(&walk, req, false);
+ while ((nbytes = walk.nbytes) >= AES_BLOCK_SIZE) {
n = AES_BLOCK_SIZE;
if (nbytes >= 2*AES_BLOCK_SIZE && locked)
- n = __ctrblk_init(ctrblk, walk->iv, nbytes);
- ctrptr = (n > AES_BLOCK_SIZE) ? ctrblk : walk->iv;
- k = cpacf_kmctr(ctx->fc | modifier, ctx->pk.protkey,
- walk->dst.virt.addr, walk->src.virt.addr,
- n, ctrptr);
+ n = __ctrblk_init(ctrblk, walk.iv, nbytes);
+ ctrptr = (n > AES_BLOCK_SIZE) ? ctrblk : walk.iv;
+ k = cpacf_kmctr(ctx->fc, ctx->pk.protkey, walk.dst.virt.addr,
+ walk.src.virt.addr, n, ctrptr);
if (k) {
if (ctrptr == ctrblk)
- memcpy(walk->iv, ctrptr + k - AES_BLOCK_SIZE,
+ memcpy(walk.iv, ctrptr + k - AES_BLOCK_SIZE,
AES_BLOCK_SIZE);
- crypto_inc(walk->iv, AES_BLOCK_SIZE);
- ret = blkcipher_walk_done(desc, walk, nbytes - n);
+ crypto_inc(walk.iv, AES_BLOCK_SIZE);
+ ret = skcipher_walk_done(&walk, nbytes - n);
}
if (k < n) {
if (__ctr_paes_set_key(ctx) != 0) {
if (locked)
spin_unlock(&ctrblk_lock);
- return blkcipher_walk_done(desc, walk, -EIO);
+ return skcipher_walk_done(&walk, -EIO);
}
}
}
@@ -629,80 +589,54 @@ static int ctr_paes_crypt(struct blkcipher_desc *desc, unsigned long modifier,
*/
if (nbytes) {
while (1) {
- if (cpacf_kmctr(ctx->fc | modifier,
- ctx->pk.protkey, buf,
- walk->src.virt.addr, AES_BLOCK_SIZE,
- walk->iv) == AES_BLOCK_SIZE)
+ if (cpacf_kmctr(ctx->fc, ctx->pk.protkey, buf,
+ walk.src.virt.addr, AES_BLOCK_SIZE,
+ walk.iv) == AES_BLOCK_SIZE)
break;
if (__ctr_paes_set_key(ctx) != 0)
- return blkcipher_walk_done(desc, walk, -EIO);
+ return skcipher_walk_done(&walk, -EIO);
}
- memcpy(walk->dst.virt.addr, buf, nbytes);
- crypto_inc(walk->iv, AES_BLOCK_SIZE);
- ret = blkcipher_walk_done(desc, walk, 0);
+ memcpy(walk.dst.virt.addr, buf, nbytes);
+ crypto_inc(walk.iv, AES_BLOCK_SIZE);
+ ret = skcipher_walk_done(&walk, 0);
}

return ret;
}

-static int ctr_paes_encrypt(struct blkcipher_desc *desc,
- struct scatterlist *dst, struct scatterlist *src,
- unsigned int nbytes)
-{
- struct blkcipher_walk walk;
-
- blkcipher_walk_init(&walk, dst, src, nbytes);
- return ctr_paes_crypt(desc, 0, &walk);
-}
-
-static int ctr_paes_decrypt(struct blkcipher_desc *desc,
- struct scatterlist *dst, struct scatterlist *src,
- unsigned int nbytes)
-{
- struct blkcipher_walk walk;
-
- blkcipher_walk_init(&walk, dst, src, nbytes);
- return ctr_paes_crypt(desc, CPACF_DECRYPT, &walk);
-}
-
-static struct crypto_alg ctr_paes_alg = {
- .cra_name = "ctr(paes)",
- .cra_driver_name = "ctr-paes-s390",
- .cra_priority = 402, /* ecb-paes-s390 + 1 */
- .cra_flags = CRYPTO_ALG_TYPE_BLKCIPHER,
- .cra_blocksize = 1,
- .cra_ctxsize = sizeof(struct s390_paes_ctx),
- .cra_type = &crypto_blkcipher_type,
- .cra_module = THIS_MODULE,
- .cra_list = LIST_HEAD_INIT(ctr_paes_alg.cra_list),
- .cra_init = ctr_paes_init,
- .cra_exit = ctr_paes_exit,
- .cra_u = {
- .blkcipher = {
- .min_keysize = PAES_MIN_KEYSIZE,
- .max_keysize = PAES_MAX_KEYSIZE,
- .ivsize = AES_BLOCK_SIZE,
- .setkey = ctr_paes_set_key,
- .encrypt = ctr_paes_encrypt,
- .decrypt = ctr_paes_decrypt,
- }
- }
+static struct skcipher_alg ctr_paes_alg = {
+ .base.cra_name = "ctr(paes)",
+ .base.cra_driver_name = "ctr-paes-s390",
+ .base.cra_priority = 402, /* ecb-paes-s390 + 1 */
+ .base.cra_blocksize = 1,
+ .base.cra_ctxsize = sizeof(struct s390_paes_ctx),
+ .base.cra_module = THIS_MODULE,
+ .base.cra_list = LIST_HEAD_INIT(ctr_paes_alg.base.cra_list),
+ .init = ctr_paes_init,
+ .exit = ctr_paes_exit,
+ .min_keysize = PAES_MIN_KEYSIZE,
+ .max_keysize = PAES_MAX_KEYSIZE,
+ .ivsize = AES_BLOCK_SIZE,
+ .setkey = ctr_paes_set_key,
+ .encrypt = ctr_paes_crypt,
+ .decrypt = ctr_paes_crypt,
+ .chunksize = AES_BLOCK_SIZE,
};

-static inline void __crypto_unregister_alg(struct crypto_alg *alg)
+static inline void __crypto_unregister_skcipher(struct skcipher_alg *alg)
{
- if (!list_empty(&alg->cra_list))
- crypto_unregister_alg(alg);
+ if (!list_empty(&alg->base.cra_list))
+ crypto_unregister_skcipher(alg);
}

static void paes_s390_fini(void)
{
if (ctrblk)
free_page((unsigned long) ctrblk);
- __crypto_unregister_alg(&ctr_paes_alg);
- __crypto_unregister_alg(&xts_paes_alg);
- __crypto_unregister_alg(&cbc_paes_alg);
- __crypto_unregister_alg(&ecb_paes_alg);
+ __crypto_unregister_skcipher(&ctr_paes_alg);
+ __crypto_unregister_skcipher(&xts_paes_alg);
+ __crypto_unregister_skcipher(&cbc_paes_alg);
+ __crypto_unregister_skcipher(&ecb_paes_alg);
}

static int __init paes_s390_init(void)
@@ -717,7 +651,7 @@ static int __init paes_s390_init(void)
if (cpacf_test_func(&km_functions, CPACF_KM_PAES_128) ||
cpacf_test_func(&km_functions, CPACF_KM_PAES_192) ||
cpacf_test_func(&km_functions, CPACF_KM_PAES_256)) {
- ret = crypto_register_alg(&ecb_paes_alg);
+ ret = crypto_register_skcipher(&ecb_paes_alg);
if (ret)
goto out_err;
}
@@ -725,14 +659,14 @@ static int __init paes_s390_init(void)
if (cpacf_test_func(&kmc_functions, CPACF_KMC_PAES_128) ||
cpacf_test_func(&kmc_functions, CPACF_KMC_PAES_192) ||
cpacf_test_func(&kmc_functions, CPACF_KMC_PAES_256)) {
- ret = crypto_register_alg(&cbc_paes_alg);
+ ret = crypto_register_skcipher(&cbc_paes_alg);
if (ret)
goto out_err;
}

if (cpacf_test_func(&km_functions, CPACF_KM_PXTS_128) ||
cpacf_test_func(&km_functions, CPACF_KM_PXTS_256)) {
- ret = crypto_register_alg(&xts_paes_alg);
+ ret = crypto_register_skcipher(&xts_paes_alg);
if (ret)
goto out_err;
}
@@ -740,7 +674,7 @@ static int __init paes_s390_init(void)
if (cpacf_test_func(&kmctr_functions, CPACF_KMCTR_PAES_128) ||
cpacf_test_func(&kmctr_functions, CPACF_KMCTR_PAES_192) ||
cpacf_test_func(&kmctr_functions, CPACF_KMCTR_PAES_256)) {
- ret = crypto_register_alg(&ctr_paes_alg);
+ ret = crypto_register_skcipher(&ctr_paes_alg);
if (ret)
goto out_err;
ctrblk = (u8 *) __get_free_page(GFP_KERNEL);
--
2.23.0

Date: 2019-10-12 20:27:58
From: Eric Biggers
Subject: [RFT PATCH 3/3] crypto: s390/des - convert to skcipher API

From: Eric Biggers <[email protected]>

Convert the glue code for the S390 CPACF implementations of DES-ECB,
DES-CBC, DES-CTR, 3DES-ECB, 3DES-CBC, and 3DES-CTR from the deprecated
"blkcipher" API to the "skcipher" API. This is needed in order for the
blkcipher API to be removed.

Note: I made CTR use the same function for encryption and decryption,
since CTR encryption and decryption are identical.

Signed-off-by: Eric Biggers <[email protected]>
---
arch/s390/crypto/des_s390.c | 419 +++++++++++++++---------------------
1 file changed, 172 insertions(+), 247 deletions(-)

diff --git a/arch/s390/crypto/des_s390.c b/arch/s390/crypto/des_s390.c
index 439b100c6f2e..bfbafd35bcbd 100644
--- a/arch/s390/crypto/des_s390.c
+++ b/arch/s390/crypto/des_s390.c
@@ -17,6 +17,7 @@
#include <linux/mutex.h>
#include <crypto/algapi.h>
#include <crypto/internal/des.h>
+#include <crypto/internal/skcipher.h>
#include <asm/cpacf.h>

#define DES3_KEY_SIZE (3 * DES_KEY_SIZE)
@@ -45,6 +46,12 @@ static int des_setkey(struct crypto_tfm *tfm, const u8 *key,
return 0;
}

+static int des_setkey_skcipher(struct crypto_skcipher *tfm, const u8 *key,
+ unsigned int key_len)
+{
+ return des_setkey(crypto_skcipher_tfm(tfm), key, key_len);
+}
+
static void s390_des_encrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in)
{
struct s390_des_ctx *ctx = crypto_tfm_ctx(tfm);
@@ -79,28 +86,30 @@ static struct crypto_alg des_alg = {
}
};

-static int ecb_desall_crypt(struct blkcipher_desc *desc, unsigned long fc,
- struct blkcipher_walk *walk)
+static int ecb_desall_crypt(struct skcipher_request *req, unsigned long fc)
{
- struct s390_des_ctx *ctx = crypto_blkcipher_ctx(desc->tfm);
+ struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
+ struct s390_des_ctx *ctx = crypto_skcipher_ctx(tfm);
+ struct skcipher_walk walk;
unsigned int nbytes, n;
int ret;

- ret = blkcipher_walk_virt(desc, walk);
- while ((nbytes = walk->nbytes) >= DES_BLOCK_SIZE) {
+ ret = skcipher_walk_virt(&walk, req, false);
+ while ((nbytes = walk.nbytes) != 0) {
/* only use complete blocks */
n = nbytes & ~(DES_BLOCK_SIZE - 1);
- cpacf_km(fc, ctx->key, walk->dst.virt.addr,
- walk->src.virt.addr, n);
- ret = blkcipher_walk_done(desc, walk, nbytes - n);
+ cpacf_km(fc, ctx->key, walk.dst.virt.addr,
+ walk.src.virt.addr, n);
+ ret = skcipher_walk_done(&walk, nbytes - n);
}
return ret;
}

-static int cbc_desall_crypt(struct blkcipher_desc *desc, unsigned long fc,
- struct blkcipher_walk *walk)
+static int cbc_desall_crypt(struct skcipher_request *req, unsigned long fc)
{
- struct s390_des_ctx *ctx = crypto_blkcipher_ctx(desc->tfm);
+ struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
+ struct s390_des_ctx *ctx = crypto_skcipher_ctx(tfm);
+ struct skcipher_walk walk;
unsigned int nbytes, n;
int ret;
struct {
@@ -108,99 +117,69 @@ static int cbc_desall_crypt(struct blkcipher_desc *desc, unsigned long fc,
u8 key[DES3_KEY_SIZE];
} param;

- ret = blkcipher_walk_virt(desc, walk);
- memcpy(param.iv, walk->iv, DES_BLOCK_SIZE);
+ ret = skcipher_walk_virt(&walk, req, false);
+ if (ret)
+ return ret;
+ memcpy(param.iv, walk.iv, DES_BLOCK_SIZE);
memcpy(param.key, ctx->key, DES3_KEY_SIZE);
- while ((nbytes = walk->nbytes) >= DES_BLOCK_SIZE) {
+ while ((nbytes = walk.nbytes) != 0) {
/* only use complete blocks */
n = nbytes & ~(DES_BLOCK_SIZE - 1);
- cpacf_kmc(fc, &param, walk->dst.virt.addr,
- walk->src.virt.addr, n);
- ret = blkcipher_walk_done(desc, walk, nbytes - n);
+ cpacf_kmc(fc, &param, walk.dst.virt.addr,
+ walk.src.virt.addr, n);
+ memcpy(walk.iv, param.iv, DES_BLOCK_SIZE);
+ ret = skcipher_walk_done(&walk, nbytes - n);
}
- memcpy(walk->iv, param.iv, DES_BLOCK_SIZE);
return ret;
}

-static int ecb_des_encrypt(struct blkcipher_desc *desc,
- struct scatterlist *dst, struct scatterlist *src,
- unsigned int nbytes)
+static int ecb_des_encrypt(struct skcipher_request *req)
{
- struct blkcipher_walk walk;
-
- blkcipher_walk_init(&walk, dst, src, nbytes);
- return ecb_desall_crypt(desc, CPACF_KM_DEA, &walk);
+ return ecb_desall_crypt(req, CPACF_KM_DEA);
}

-static int ecb_des_decrypt(struct blkcipher_desc *desc,
- struct scatterlist *dst, struct scatterlist *src,
- unsigned int nbytes)
+static int ecb_des_decrypt(struct skcipher_request *req)
{
- struct blkcipher_walk walk;
-
- blkcipher_walk_init(&walk, dst, src, nbytes);
- return ecb_desall_crypt(desc, CPACF_KM_DEA | CPACF_DECRYPT, &walk);
+ return ecb_desall_crypt(req, CPACF_KM_DEA | CPACF_DECRYPT);
}

-static struct crypto_alg ecb_des_alg = {
- .cra_name = "ecb(des)",
- .cra_driver_name = "ecb-des-s390",
- .cra_priority = 400, /* combo: des + ecb */
- .cra_flags = CRYPTO_ALG_TYPE_BLKCIPHER,
- .cra_blocksize = DES_BLOCK_SIZE,
- .cra_ctxsize = sizeof(struct s390_des_ctx),
- .cra_type = &crypto_blkcipher_type,
- .cra_module = THIS_MODULE,
- .cra_u = {
- .blkcipher = {
- .min_keysize = DES_KEY_SIZE,
- .max_keysize = DES_KEY_SIZE,
- .setkey = des_setkey,
- .encrypt = ecb_des_encrypt,
- .decrypt = ecb_des_decrypt,
- }
- }
+static struct skcipher_alg ecb_des_alg = {
+ .base.cra_name = "ecb(des)",
+ .base.cra_driver_name = "ecb-des-s390",
+ .base.cra_priority = 400, /* combo: des + ecb */
+ .base.cra_blocksize = DES_BLOCK_SIZE,
+ .base.cra_ctxsize = sizeof(struct s390_des_ctx),
+ .base.cra_module = THIS_MODULE,
+ .min_keysize = DES_KEY_SIZE,
+ .max_keysize = DES_KEY_SIZE,
+ .setkey = des_setkey_skcipher,
+ .encrypt = ecb_des_encrypt,
+ .decrypt = ecb_des_decrypt,
};

-static int cbc_des_encrypt(struct blkcipher_desc *desc,
- struct scatterlist *dst, struct scatterlist *src,
- unsigned int nbytes)
+static int cbc_des_encrypt(struct skcipher_request *req)
{
- struct blkcipher_walk walk;
-
- blkcipher_walk_init(&walk, dst, src, nbytes);
- return cbc_desall_crypt(desc, CPACF_KMC_DEA, &walk);
+ return cbc_desall_crypt(req, CPACF_KMC_DEA);
}

-static int cbc_des_decrypt(struct blkcipher_desc *desc,
- struct scatterlist *dst, struct scatterlist *src,
- unsigned int nbytes)
+static int cbc_des_decrypt(struct skcipher_request *req)
{
- struct blkcipher_walk walk;
-
- blkcipher_walk_init(&walk, dst, src, nbytes);
- return cbc_desall_crypt(desc, CPACF_KMC_DEA | CPACF_DECRYPT, &walk);
+ return cbc_desall_crypt(req, CPACF_KMC_DEA | CPACF_DECRYPT);
}

-static struct crypto_alg cbc_des_alg = {
- .cra_name = "cbc(des)",
- .cra_driver_name = "cbc-des-s390",
- .cra_priority = 400, /* combo: des + cbc */
- .cra_flags = CRYPTO_ALG_TYPE_BLKCIPHER,
- .cra_blocksize = DES_BLOCK_SIZE,
- .cra_ctxsize = sizeof(struct s390_des_ctx),
- .cra_type = &crypto_blkcipher_type,
- .cra_module = THIS_MODULE,
- .cra_u = {
- .blkcipher = {
- .min_keysize = DES_KEY_SIZE,
- .max_keysize = DES_KEY_SIZE,
- .ivsize = DES_BLOCK_SIZE,
- .setkey = des_setkey,
- .encrypt = cbc_des_encrypt,
- .decrypt = cbc_des_decrypt,
- }
- }
+static struct skcipher_alg cbc_des_alg = {
+ .base.cra_name = "cbc(des)",
+ .base.cra_driver_name = "cbc-des-s390",
+ .base.cra_priority = 400, /* combo: des + cbc */
+ .base.cra_blocksize = DES_BLOCK_SIZE,
+ .base.cra_ctxsize = sizeof(struct s390_des_ctx),
+ .base.cra_module = THIS_MODULE,
+ .min_keysize = DES_KEY_SIZE,
+ .max_keysize = DES_KEY_SIZE,
+ .ivsize = DES_BLOCK_SIZE,
+ .setkey = des_setkey_skcipher,
+ .encrypt = cbc_des_encrypt,
+ .decrypt = cbc_des_decrypt,
};

/*
@@ -232,6 +211,12 @@ static int des3_setkey(struct crypto_tfm *tfm, const u8 *key,
return 0;
}

+static int des3_setkey_skcipher(struct crypto_skcipher *tfm, const u8 *key,
+ unsigned int key_len)
+{
+ return des3_setkey(crypto_skcipher_tfm(tfm), key, key_len);
+}
+
static void des3_encrypt(struct crypto_tfm *tfm, u8 *dst, const u8 *src)
{
struct s390_des_ctx *ctx = crypto_tfm_ctx(tfm);
@@ -266,87 +251,53 @@ static struct crypto_alg des3_alg = {
}
};

-static int ecb_des3_encrypt(struct blkcipher_desc *desc,
- struct scatterlist *dst, struct scatterlist *src,
- unsigned int nbytes)
+static int ecb_des3_encrypt(struct skcipher_request *req)
{
- struct blkcipher_walk walk;
-
- blkcipher_walk_init(&walk, dst, src, nbytes);
- return ecb_desall_crypt(desc, CPACF_KM_TDEA_192, &walk);
+ return ecb_desall_crypt(req, CPACF_KM_TDEA_192);
}

-static int ecb_des3_decrypt(struct blkcipher_desc *desc,
- struct scatterlist *dst, struct scatterlist *src,
- unsigned int nbytes)
+static int ecb_des3_decrypt(struct skcipher_request *req)
{
- struct blkcipher_walk walk;
-
- blkcipher_walk_init(&walk, dst, src, nbytes);
- return ecb_desall_crypt(desc, CPACF_KM_TDEA_192 | CPACF_DECRYPT,
- &walk);
+ return ecb_desall_crypt(req, CPACF_KM_TDEA_192 | CPACF_DECRYPT);
}

-static struct crypto_alg ecb_des3_alg = {
- .cra_name = "ecb(des3_ede)",
- .cra_driver_name = "ecb-des3_ede-s390",
- .cra_priority = 400, /* combo: des3 + ecb */
- .cra_flags = CRYPTO_ALG_TYPE_BLKCIPHER,
- .cra_blocksize = DES_BLOCK_SIZE,
- .cra_ctxsize = sizeof(struct s390_des_ctx),
- .cra_type = &crypto_blkcipher_type,
- .cra_module = THIS_MODULE,
- .cra_u = {
- .blkcipher = {
- .min_keysize = DES3_KEY_SIZE,
- .max_keysize = DES3_KEY_SIZE,
- .setkey = des3_setkey,
- .encrypt = ecb_des3_encrypt,
- .decrypt = ecb_des3_decrypt,
- }
- }
+static struct skcipher_alg ecb_des3_alg = {
+ .base.cra_name = "ecb(des3_ede)",
+ .base.cra_driver_name = "ecb-des3_ede-s390",
+ .base.cra_priority = 400, /* combo: des3 + ecb */
+ .base.cra_blocksize = DES_BLOCK_SIZE,
+ .base.cra_ctxsize = sizeof(struct s390_des_ctx),
+ .base.cra_module = THIS_MODULE,
+ .min_keysize = DES3_KEY_SIZE,
+ .max_keysize = DES3_KEY_SIZE,
+ .setkey = des3_setkey_skcipher,
+ .encrypt = ecb_des3_encrypt,
+ .decrypt = ecb_des3_decrypt,
};

-static int cbc_des3_encrypt(struct blkcipher_desc *desc,
- struct scatterlist *dst, struct scatterlist *src,
- unsigned int nbytes)
+static int cbc_des3_encrypt(struct skcipher_request *req)
{
- struct blkcipher_walk walk;
-
- blkcipher_walk_init(&walk, dst, src, nbytes);
- return cbc_desall_crypt(desc, CPACF_KMC_TDEA_192, &walk);
+ return cbc_desall_crypt(req, CPACF_KMC_TDEA_192);
}

-static int cbc_des3_decrypt(struct blkcipher_desc *desc,
- struct scatterlist *dst, struct scatterlist *src,
- unsigned int nbytes)
+static int cbc_des3_decrypt(struct skcipher_request *req)
{
- struct blkcipher_walk walk;
-
- blkcipher_walk_init(&walk, dst, src, nbytes);
- return cbc_desall_crypt(desc, CPACF_KMC_TDEA_192 | CPACF_DECRYPT,
- &walk);
+ return cbc_desall_crypt(req, CPACF_KMC_TDEA_192 | CPACF_DECRYPT);
}

-static struct crypto_alg cbc_des3_alg = {
- .cra_name = "cbc(des3_ede)",
- .cra_driver_name = "cbc-des3_ede-s390",
- .cra_priority = 400, /* combo: des3 + cbc */
- .cra_flags = CRYPTO_ALG_TYPE_BLKCIPHER,
- .cra_blocksize = DES_BLOCK_SIZE,
- .cra_ctxsize = sizeof(struct s390_des_ctx),
- .cra_type = &crypto_blkcipher_type,
- .cra_module = THIS_MODULE,
- .cra_u = {
- .blkcipher = {
- .min_keysize = DES3_KEY_SIZE,
- .max_keysize = DES3_KEY_SIZE,
- .ivsize = DES_BLOCK_SIZE,
- .setkey = des3_setkey,
- .encrypt = cbc_des3_encrypt,
- .decrypt = cbc_des3_decrypt,
- }
- }
+static struct skcipher_alg cbc_des3_alg = {
+ .base.cra_name = "cbc(des3_ede)",
+ .base.cra_driver_name = "cbc-des3_ede-s390",
+ .base.cra_priority = 400, /* combo: des3 + cbc */
+ .base.cra_blocksize = DES_BLOCK_SIZE,
+ .base.cra_ctxsize = sizeof(struct s390_des_ctx),
+ .base.cra_module = THIS_MODULE,
+ .min_keysize = DES3_KEY_SIZE,
+ .max_keysize = DES3_KEY_SIZE,
+ .ivsize = DES_BLOCK_SIZE,
+ .setkey = des3_setkey_skcipher,
+ .encrypt = cbc_des3_encrypt,
+ .decrypt = cbc_des3_decrypt,
};

static unsigned int __ctrblk_init(u8 *ctrptr, u8 *iv, unsigned int nbytes)
@@ -364,128 +315,90 @@ static unsigned int __ctrblk_init(u8 *ctrptr, u8 *iv, unsigned int nbytes)
return n;
}

-static int ctr_desall_crypt(struct blkcipher_desc *desc, unsigned long fc,
- struct blkcipher_walk *walk)
+static int ctr_desall_crypt(struct skcipher_request *req, unsigned long fc)
{
- struct s390_des_ctx *ctx = crypto_blkcipher_ctx(desc->tfm);
+ struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
+ struct s390_des_ctx *ctx = crypto_skcipher_ctx(tfm);
u8 buf[DES_BLOCK_SIZE], *ctrptr;
+ struct skcipher_walk walk;
unsigned int n, nbytes;
int ret, locked;

locked = mutex_trylock(&ctrblk_lock);

- ret = blkcipher_walk_virt_block(desc, walk, DES_BLOCK_SIZE);
- while ((nbytes = walk->nbytes) >= DES_BLOCK_SIZE) {
+ ret = skcipher_walk_virt(&walk, req, false);
+ while ((nbytes = walk.nbytes) >= DES_BLOCK_SIZE) {
n = DES_BLOCK_SIZE;
if (nbytes >= 2*DES_BLOCK_SIZE && locked)
- n = __ctrblk_init(ctrblk, walk->iv, nbytes);
- ctrptr = (n > DES_BLOCK_SIZE) ? ctrblk : walk->iv;
- cpacf_kmctr(fc, ctx->key, walk->dst.virt.addr,
- walk->src.virt.addr, n, ctrptr);
+ n = __ctrblk_init(ctrblk, walk.iv, nbytes);
+ ctrptr = (n > DES_BLOCK_SIZE) ? ctrblk : walk.iv;
+ cpacf_kmctr(fc, ctx->key, walk.dst.virt.addr,
+ walk.src.virt.addr, n, ctrptr);
if (ctrptr == ctrblk)
- memcpy(walk->iv, ctrptr + n - DES_BLOCK_SIZE,
+ memcpy(walk.iv, ctrptr + n - DES_BLOCK_SIZE,
DES_BLOCK_SIZE);
- crypto_inc(walk->iv, DES_BLOCK_SIZE);
- ret = blkcipher_walk_done(desc, walk, nbytes - n);
+ crypto_inc(walk.iv, DES_BLOCK_SIZE);
+ ret = skcipher_walk_done(&walk, nbytes - n);
}
if (locked)
mutex_unlock(&ctrblk_lock);
/* final block may be < DES_BLOCK_SIZE, copy only nbytes */
if (nbytes) {
- cpacf_kmctr(fc, ctx->key, buf, walk->src.virt.addr,
- DES_BLOCK_SIZE, walk->iv);
- memcpy(walk->dst.virt.addr, buf, nbytes);
- crypto_inc(walk->iv, DES_BLOCK_SIZE);
- ret = blkcipher_walk_done(desc, walk, 0);
+ cpacf_kmctr(fc, ctx->key, buf, walk.src.virt.addr,
+ DES_BLOCK_SIZE, walk.iv);
+ memcpy(walk.dst.virt.addr, buf, nbytes);
+ crypto_inc(walk.iv, DES_BLOCK_SIZE);
+ ret = skcipher_walk_done(&walk, 0);
}
return ret;
}

-static int ctr_des_encrypt(struct blkcipher_desc *desc,
- struct scatterlist *dst, struct scatterlist *src,
- unsigned int nbytes)
-{
- struct blkcipher_walk walk;
-
- blkcipher_walk_init(&walk, dst, src, nbytes);
- return ctr_desall_crypt(desc, CPACF_KMCTR_DEA, &walk);
-}
-
-static int ctr_des_decrypt(struct blkcipher_desc *desc,
- struct scatterlist *dst, struct scatterlist *src,
- unsigned int nbytes)
+static int ctr_des_crypt(struct skcipher_request *req)
{
- struct blkcipher_walk walk;
-
- blkcipher_walk_init(&walk, dst, src, nbytes);
- return ctr_desall_crypt(desc, CPACF_KMCTR_DEA | CPACF_DECRYPT, &walk);
+ return ctr_desall_crypt(req, CPACF_KMCTR_DEA);
}

-static struct crypto_alg ctr_des_alg = {
- .cra_name = "ctr(des)",
- .cra_driver_name = "ctr-des-s390",
- .cra_priority = 400, /* combo: des + ctr */
- .cra_flags = CRYPTO_ALG_TYPE_BLKCIPHER,
- .cra_blocksize = 1,
- .cra_ctxsize = sizeof(struct s390_des_ctx),
- .cra_type = &crypto_blkcipher_type,
- .cra_module = THIS_MODULE,
- .cra_u = {
- .blkcipher = {
- .min_keysize = DES_KEY_SIZE,
- .max_keysize = DES_KEY_SIZE,
- .ivsize = DES_BLOCK_SIZE,
- .setkey = des_setkey,
- .encrypt = ctr_des_encrypt,
- .decrypt = ctr_des_decrypt,
- }
- }
+static struct skcipher_alg ctr_des_alg = {
+ .base.cra_name = "ctr(des)",
+ .base.cra_driver_name = "ctr-des-s390",
+ .base.cra_priority = 400, /* combo: des + ctr */
+ .base.cra_blocksize = 1,
+ .base.cra_ctxsize = sizeof(struct s390_des_ctx),
+ .base.cra_module = THIS_MODULE,
+ .min_keysize = DES_KEY_SIZE,
+ .max_keysize = DES_KEY_SIZE,
+ .ivsize = DES_BLOCK_SIZE,
+ .setkey = des_setkey_skcipher,
+ .encrypt = ctr_des_crypt,
+ .decrypt = ctr_des_crypt,
+ .chunksize = DES_BLOCK_SIZE,
};

-static int ctr_des3_encrypt(struct blkcipher_desc *desc,
- struct scatterlist *dst, struct scatterlist *src,
- unsigned int nbytes)
-{
- struct blkcipher_walk walk;
-
- blkcipher_walk_init(&walk, dst, src, nbytes);
- return ctr_desall_crypt(desc, CPACF_KMCTR_TDEA_192, &walk);
-}
-
-static int ctr_des3_decrypt(struct blkcipher_desc *desc,
- struct scatterlist *dst, struct scatterlist *src,
- unsigned int nbytes)
+static int ctr_des3_crypt(struct skcipher_request *req)
{
- struct blkcipher_walk walk;
-
- blkcipher_walk_init(&walk, dst, src, nbytes);
- return ctr_desall_crypt(desc, CPACF_KMCTR_TDEA_192 | CPACF_DECRYPT,
- &walk);
+ return ctr_desall_crypt(req, CPACF_KMCTR_TDEA_192);
}

-static struct crypto_alg ctr_des3_alg = {
- .cra_name = "ctr(des3_ede)",
- .cra_driver_name = "ctr-des3_ede-s390",
- .cra_priority = 400, /* combo: des3 + ede */
- .cra_flags = CRYPTO_ALG_TYPE_BLKCIPHER,
- .cra_blocksize = 1,
- .cra_ctxsize = sizeof(struct s390_des_ctx),
- .cra_type = &crypto_blkcipher_type,
- .cra_module = THIS_MODULE,
- .cra_u = {
- .blkcipher = {
- .min_keysize = DES3_KEY_SIZE,
- .max_keysize = DES3_KEY_SIZE,
- .ivsize = DES_BLOCK_SIZE,
- .setkey = des3_setkey,
- .encrypt = ctr_des3_encrypt,
- .decrypt = ctr_des3_decrypt,
- }
- }
+static struct skcipher_alg ctr_des3_alg = {
+ .base.cra_name = "ctr(des3_ede)",
+ .base.cra_driver_name = "ctr-des3_ede-s390",
+ .base.cra_priority = 400, /* combo: des3 + ede */
+ .base.cra_blocksize = 1,
+ .base.cra_ctxsize = sizeof(struct s390_des_ctx),
+ .base.cra_module = THIS_MODULE,
+ .min_keysize = DES3_KEY_SIZE,
+ .max_keysize = DES3_KEY_SIZE,
+ .ivsize = DES_BLOCK_SIZE,
+ .setkey = des3_setkey_skcipher,
+ .encrypt = ctr_des3_crypt,
+ .decrypt = ctr_des3_crypt,
+ .chunksize = DES_BLOCK_SIZE,
};

-static struct crypto_alg *des_s390_algs_ptr[8];
+static struct crypto_alg *des_s390_algs_ptr[2];
static int des_s390_algs_num;
+static struct skcipher_alg *des_s390_skciphers_ptr[6];
+static int des_s390_skciphers_num;

static int des_s390_register_alg(struct crypto_alg *alg)
{
@@ -497,10 +410,22 @@ static int des_s390_register_alg(struct crypto_alg *alg)
return ret;
}

+static int des_s390_register_skcipher(struct skcipher_alg *alg)
+{
+ int ret;
+
+ ret = crypto_register_skcipher(alg);
+ if (!ret)
+ des_s390_skciphers_ptr[des_s390_skciphers_num++] = alg;
+ return ret;
+}
+
static void des_s390_exit(void)
{
while (des_s390_algs_num--)
crypto_unregister_alg(des_s390_algs_ptr[des_s390_algs_num]);
+ while (des_s390_skciphers_num--)
+ crypto_unregister_skcipher(des_s390_skciphers_ptr[des_s390_skciphers_num]);
if (ctrblk)
free_page((unsigned long) ctrblk);
}
@@ -518,12 +443,12 @@ static int __init des_s390_init(void)
ret = des_s390_register_alg(&des_alg);
if (ret)
goto out_err;
- ret = des_s390_register_alg(&ecb_des_alg);
+ ret = des_s390_register_skcipher(&ecb_des_alg);
if (ret)
goto out_err;
}
if (cpacf_test_func(&kmc_functions, CPACF_KMC_DEA)) {
- ret = des_s390_register_alg(&cbc_des_alg);
+ ret = des_s390_register_skcipher(&cbc_des_alg);
if (ret)
goto out_err;
}
@@ -531,12 +456,12 @@ static int __init des_s390_init(void)
ret = des_s390_register_alg(&des3_alg);
if (ret)
goto out_err;
- ret = des_s390_register_alg(&ecb_des3_alg);
+ ret = des_s390_register_skcipher(&ecb_des3_alg);
if (ret)
goto out_err;
}
if (cpacf_test_func(&kmc_functions, CPACF_KMC_TDEA_192)) {
- ret = des_s390_register_alg(&cbc_des3_alg);
+ ret = des_s390_register_skcipher(&cbc_des3_alg);
if (ret)
goto out_err;
}
@@ -551,12 +476,12 @@ static int __init des_s390_init(void)
}

if (cpacf_test_func(&kmctr_functions, CPACF_KMCTR_DEA)) {
- ret = des_s390_register_alg(&ctr_des_alg);
+ ret = des_s390_register_skcipher(&ctr_des_alg);
if (ret)
goto out_err;
}
if (cpacf_test_func(&kmctr_functions, CPACF_KMCTR_TDEA_192)) {
- ret = des_s390_register_alg(&ctr_des3_alg);
+ ret = des_s390_register_skcipher(&ctr_des3_alg);
if (ret)
goto out_err;
}
--
2.23.0

2019-10-14 09:42:49

by Harald Freudenberger

[permalink] [raw]
Subject: Re: [RFT PATCH 0/3] crypto: s390 - convert to skcipher API

On 12.10.19 22:18, Eric Biggers wrote:
> This series converts the glue code for the S390 CPACF implementations of
> AES, DES, and 3DES modes from the deprecated "blkcipher" API to the
> "skcipher" API. This is needed in order for the blkcipher API to be
> removed.
>
> I've compiled this patchset, and the conversion is very similar to that
> which has been done for many other crypto drivers. But I don't have the
> hardware to test it, nor is S390 CPACF supported by QEMU. So I really
> need someone with the hardware to test it. You can do so by setting:
>
> CONFIG_CRYPTO_HW=y
> CONFIG_ZCRYPT=y
> CONFIG_PKEY=y
> CONFIG_CRYPTO_AES_S390=y
> CONFIG_CRYPTO_PAES_S390=y
> CONFIG_CRYPTO_DES_S390=y
> # CONFIG_CRYPTO_MANAGER_DISABLE_TESTS is not set
> CONFIG_DEBUG_KERNEL=y
> CONFIG_CRYPTO_MANAGER_EXTRA_TESTS=y
> CONFIG_CRYPTO_AES=y
> CONFIG_CRYPTO_DES=y
> CONFIG_CRYPTO_CBC=y
> CONFIG_CRYPTO_CTR=y
> CONFIG_CRYPTO_ECB=y
> CONFIG_CRYPTO_XTS=y
>
> Then boot and check for crypto self-test failures by running
> 'dmesg | grep alg'.
>
> If there are test failures, please also check whether they were already
> failing prior to this patchset.
>
> This won't cover the "paes" ("protected key AES") algorithms, however,
> since those don't have self-tests. If anyone has any way to test those,
> please do so.
>
> Eric Biggers (3):
> crypto: s390/aes - convert to skcipher API
> crypto: s390/paes - convert to skcipher API
> crypto: s390/des - convert to skcipher API
>
> arch/s390/crypto/aes_s390.c | 609 ++++++++++++++---------------------
> arch/s390/crypto/des_s390.c | 419 ++++++++++--------------
> arch/s390/crypto/paes_s390.c | 414 ++++++++++--------------
> 3 files changed, 580 insertions(+), 862 deletions(-)
>
Thanks Eric, I'll do these tests and give you feedback.

2019-10-14 13:04:59

by Ard Biesheuvel

[permalink] [raw]
Subject: Re: [RFT PATCH 0/3] crypto: s390 - convert to skcipher API

On Sat, 12 Oct 2019 at 22:20, Eric Biggers <[email protected]> wrote:
>
> This series converts the glue code for the S390 CPACF implementations of
> AES, DES, and 3DES modes from the deprecated "blkcipher" API to the
> "skcipher" API. This is needed in order for the blkcipher API to be
> removed.
>
> I've compiled this patchset, and the conversion is very similar to that
> which has been done for many other crypto drivers. But I don't have the
> hardware to test it, nor is S390 CPACF supported by QEMU. So I really
> need someone with the hardware to test it. You can do so by setting:
>
> CONFIG_CRYPTO_HW=y
> CONFIG_ZCRYPT=y
> CONFIG_PKEY=y
> CONFIG_CRYPTO_AES_S390=y
> CONFIG_CRYPTO_PAES_S390=y
> CONFIG_CRYPTO_DES_S390=y
> # CONFIG_CRYPTO_MANAGER_DISABLE_TESTS is not set
> CONFIG_DEBUG_KERNEL=y
> CONFIG_CRYPTO_MANAGER_EXTRA_TESTS=y
> CONFIG_CRYPTO_AES=y
> CONFIG_CRYPTO_DES=y
> CONFIG_CRYPTO_CBC=y
> CONFIG_CRYPTO_CTR=y
> CONFIG_CRYPTO_ECB=y
> CONFIG_CRYPTO_XTS=y
>
> Then boot and check for crypto self-test failures by running
> 'dmesg | grep alg'.
>
> If there are test failures, please also check whether they were already
> failing prior to this patchset.
>
> This won't cover the "paes" ("protected key AES") algorithms, however,
> since those don't have self-tests. If anyone has any way to test those,
> please do so.
>
> Eric Biggers (3):
> crypto: s390/aes - convert to skcipher API
> crypto: s390/paes - convert to skcipher API
> crypto: s390/des - convert to skcipher API
>

These look fine to me:

Reviewed-by: Ard Biesheuvel <[email protected]>

but I cannot test them either.

2019-10-15 11:29:50

by Harald Freudenberger

[permalink] [raw]
Subject: Re: [RFT PATCH 3/3] crypto: s390/des - convert to skcipher API

On 12.10.19 22:18, Eric Biggers wrote:
> From: Eric Biggers <[email protected]>
>
> Convert the glue code for the S390 CPACF implementations of DES-ECB,
> DES-CBC, DES-CTR, 3DES-ECB, 3DES-CBC, and 3DES-CTR from the deprecated
> "blkcipher" API to the "skcipher" API. This is needed in order for the
> blkcipher API to be removed.
>
> Note: I made CTR use the same function for encryption and decryption,
> since CTR encryption and decryption are identical.
>
> Signed-off-by: Eric Biggers <[email protected]>
> ---
> arch/s390/crypto/des_s390.c | 419 +++++++++++++++---------------------
> 1 file changed, 172 insertions(+), 247 deletions(-)
>
> diff --git a/arch/s390/crypto/des_s390.c b/arch/s390/crypto/des_s390.c
> index 439b100c6f2e..bfbafd35bcbd 100644
> --- a/arch/s390/crypto/des_s390.c
> +++ b/arch/s390/crypto/des_s390.c
> @@ -17,6 +17,7 @@
> #include <linux/mutex.h>
> #include <crypto/algapi.h>
> #include <crypto/internal/des.h>
> +#include <crypto/internal/skcipher.h>
> #include <asm/cpacf.h>
>
> #define DES3_KEY_SIZE (3 * DES_KEY_SIZE)
> @@ -45,6 +46,12 @@ static int des_setkey(struct crypto_tfm *tfm, const u8 *key,
> return 0;
> }
>
> +static int des_setkey_skcipher(struct crypto_skcipher *tfm, const u8 *key,
> + unsigned int key_len)
> +{
> + return des_setkey(crypto_skcipher_tfm(tfm), key, key_len);
> +}
> +
> static void s390_des_encrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in)
> {
> struct s390_des_ctx *ctx = crypto_tfm_ctx(tfm);
> @@ -79,28 +86,30 @@ static struct crypto_alg des_alg = {
> }
> };
>
> -static int ecb_desall_crypt(struct blkcipher_desc *desc, unsigned long fc,
> - struct blkcipher_walk *walk)
> +static int ecb_desall_crypt(struct skcipher_request *req, unsigned long fc)
> {
> - struct s390_des_ctx *ctx = crypto_blkcipher_ctx(desc->tfm);
> + struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
> + struct s390_des_ctx *ctx = crypto_skcipher_ctx(tfm);
> + struct skcipher_walk walk;
> unsigned int nbytes, n;
> int ret;
>
> - ret = blkcipher_walk_virt(desc, walk);
> - while ((nbytes = walk->nbytes) >= DES_BLOCK_SIZE) {
> + ret = skcipher_walk_virt(&walk, req, false);
> + while ((nbytes = walk.nbytes) != 0) {
> /* only use complete blocks */
> n = nbytes & ~(DES_BLOCK_SIZE - 1);
> - cpacf_km(fc, ctx->key, walk->dst.virt.addr,
> - walk->src.virt.addr, n);
> - ret = blkcipher_walk_done(desc, walk, nbytes - n);
> + cpacf_km(fc, ctx->key, walk.dst.virt.addr,
> + walk.src.virt.addr, n);
> + ret = skcipher_walk_done(&walk, nbytes - n);
> }
> return ret;
> }
>
> -static int cbc_desall_crypt(struct blkcipher_desc *desc, unsigned long fc,
> - struct blkcipher_walk *walk)
> +static int cbc_desall_crypt(struct skcipher_request *req, unsigned long fc)
> {
> - struct s390_des_ctx *ctx = crypto_blkcipher_ctx(desc->tfm);
> + struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
> + struct s390_des_ctx *ctx = crypto_skcipher_ctx(tfm);
> + struct skcipher_walk walk;
> unsigned int nbytes, n;
> int ret;
> struct {
> @@ -108,99 +117,69 @@ static int cbc_desall_crypt(struct blkcipher_desc *desc, unsigned long fc,
> u8 key[DES3_KEY_SIZE];
> } param;
>
> - ret = blkcipher_walk_virt(desc, walk);
> - memcpy(param.iv, walk->iv, DES_BLOCK_SIZE);
> + ret = skcipher_walk_virt(&walk, req, false);
> + if (ret)
> + return ret;
> + memcpy(param.iv, walk.iv, DES_BLOCK_SIZE);
> memcpy(param.key, ctx->key, DES3_KEY_SIZE);
> - while ((nbytes = walk->nbytes) >= DES_BLOCK_SIZE) {
> + while ((nbytes = walk.nbytes) != 0) {
> /* only use complete blocks */
> n = nbytes & ~(DES_BLOCK_SIZE - 1);
> - cpacf_kmc(fc, &param, walk->dst.virt.addr,
> - walk->src.virt.addr, n);
> - ret = blkcipher_walk_done(desc, walk, nbytes - n);
> + cpacf_kmc(fc, &param, walk.dst.virt.addr,
> + walk.src.virt.addr, n);
> + memcpy(walk.iv, param.iv, DES_BLOCK_SIZE);
> + ret = skcipher_walk_done(&walk, nbytes - n);
> }
> - memcpy(walk->iv, param.iv, DES_BLOCK_SIZE);
> return ret;
> }
>
> -static int ecb_des_encrypt(struct blkcipher_desc *desc,
> - struct scatterlist *dst, struct scatterlist *src,
> - unsigned int nbytes)
> +static int ecb_des_encrypt(struct skcipher_request *req)
> {
> - struct blkcipher_walk walk;
> -
> - blkcipher_walk_init(&walk, dst, src, nbytes);
> - return ecb_desall_crypt(desc, CPACF_KM_DEA, &walk);
> + return ecb_desall_crypt(req, CPACF_KM_DEA);
> }
>
> -static int ecb_des_decrypt(struct blkcipher_desc *desc,
> - struct scatterlist *dst, struct scatterlist *src,
> - unsigned int nbytes)
> +static int ecb_des_decrypt(struct skcipher_request *req)
> {
> - struct blkcipher_walk walk;
> -
> - blkcipher_walk_init(&walk, dst, src, nbytes);
> - return ecb_desall_crypt(desc, CPACF_KM_DEA | CPACF_DECRYPT, &walk);
> + return ecb_desall_crypt(req, CPACF_KM_DEA | CPACF_DECRYPT);
> }
>
> -static struct crypto_alg ecb_des_alg = {
> - .cra_name = "ecb(des)",
> - .cra_driver_name = "ecb-des-s390",
> - .cra_priority = 400, /* combo: des + ecb */
> - .cra_flags = CRYPTO_ALG_TYPE_BLKCIPHER,
> - .cra_blocksize = DES_BLOCK_SIZE,
> - .cra_ctxsize = sizeof(struct s390_des_ctx),
> - .cra_type = &crypto_blkcipher_type,
> - .cra_module = THIS_MODULE,
> - .cra_u = {
> - .blkcipher = {
> - .min_keysize = DES_KEY_SIZE,
> - .max_keysize = DES_KEY_SIZE,
> - .setkey = des_setkey,
> - .encrypt = ecb_des_encrypt,
> - .decrypt = ecb_des_decrypt,
> - }
> - }
> +static struct skcipher_alg ecb_des_alg = {
> + .base.cra_name = "ecb(des)",
> + .base.cra_driver_name = "ecb-des-s390",
> + .base.cra_priority = 400, /* combo: des + ecb */
> + .base.cra_blocksize = DES_BLOCK_SIZE,
> + .base.cra_ctxsize = sizeof(struct s390_des_ctx),
> + .base.cra_module = THIS_MODULE,
> + .min_keysize = DES_KEY_SIZE,
> + .max_keysize = DES_KEY_SIZE,
> + .setkey = des_setkey_skcipher,
> + .encrypt = ecb_des_encrypt,
> + .decrypt = ecb_des_decrypt,
> };
>
> -static int cbc_des_encrypt(struct blkcipher_desc *desc,
> - struct scatterlist *dst, struct scatterlist *src,
> - unsigned int nbytes)
> +static int cbc_des_encrypt(struct skcipher_request *req)
> {
> - struct blkcipher_walk walk;
> -
> - blkcipher_walk_init(&walk, dst, src, nbytes);
> - return cbc_desall_crypt(desc, CPACF_KMC_DEA, &walk);
> + return cbc_desall_crypt(req, CPACF_KMC_DEA);
> }
>
> -static int cbc_des_decrypt(struct blkcipher_desc *desc,
> - struct scatterlist *dst, struct scatterlist *src,
> - unsigned int nbytes)
> +static int cbc_des_decrypt(struct skcipher_request *req)
> {
> - struct blkcipher_walk walk;
> -
> - blkcipher_walk_init(&walk, dst, src, nbytes);
> - return cbc_desall_crypt(desc, CPACF_KMC_DEA | CPACF_DECRYPT, &walk);
> + return cbc_desall_crypt(req, CPACF_KMC_DEA | CPACF_DECRYPT);
> }
>
> -static struct crypto_alg cbc_des_alg = {
> - .cra_name = "cbc(des)",
> - .cra_driver_name = "cbc-des-s390",
> - .cra_priority = 400, /* combo: des + cbc */
> - .cra_flags = CRYPTO_ALG_TYPE_BLKCIPHER,
> - .cra_blocksize = DES_BLOCK_SIZE,
> - .cra_ctxsize = sizeof(struct s390_des_ctx),
> - .cra_type = &crypto_blkcipher_type,
> - .cra_module = THIS_MODULE,
> - .cra_u = {
> - .blkcipher = {
> - .min_keysize = DES_KEY_SIZE,
> - .max_keysize = DES_KEY_SIZE,
> - .ivsize = DES_BLOCK_SIZE,
> - .setkey = des_setkey,
> - .encrypt = cbc_des_encrypt,
> - .decrypt = cbc_des_decrypt,
> - }
> - }
> +static struct skcipher_alg cbc_des_alg = {
> + .base.cra_name = "cbc(des)",
> + .base.cra_driver_name = "cbc-des-s390",
> + .base.cra_priority = 400, /* combo: des + cbc */
> + .base.cra_blocksize = DES_BLOCK_SIZE,
> + .base.cra_ctxsize = sizeof(struct s390_des_ctx),
> + .base.cra_module = THIS_MODULE,
> + .min_keysize = DES_KEY_SIZE,
> + .max_keysize = DES_KEY_SIZE,
> + .ivsize = DES_BLOCK_SIZE,
> + .setkey = des_setkey_skcipher,
> + .encrypt = cbc_des_encrypt,
> + .decrypt = cbc_des_decrypt,
> };
>
> /*
> @@ -232,6 +211,12 @@ static int des3_setkey(struct crypto_tfm *tfm, const u8 *key,
> return 0;
> }
>
> +static int des3_setkey_skcipher(struct crypto_skcipher *tfm, const u8 *key,
> + unsigned int key_len)
> +{
> + return des3_setkey(crypto_skcipher_tfm(tfm), key, key_len);
> +}
> +
> static void des3_encrypt(struct crypto_tfm *tfm, u8 *dst, const u8 *src)
> {
> struct s390_des_ctx *ctx = crypto_tfm_ctx(tfm);
> @@ -266,87 +251,53 @@ static struct crypto_alg des3_alg = {
> }
> };
>
> -static int ecb_des3_encrypt(struct blkcipher_desc *desc,
> - struct scatterlist *dst, struct scatterlist *src,
> - unsigned int nbytes)
> +static int ecb_des3_encrypt(struct skcipher_request *req)
> {
> - struct blkcipher_walk walk;
> -
> - blkcipher_walk_init(&walk, dst, src, nbytes);
> - return ecb_desall_crypt(desc, CPACF_KM_TDEA_192, &walk);
> + return ecb_desall_crypt(req, CPACF_KM_TDEA_192);
> }
>
> -static int ecb_des3_decrypt(struct blkcipher_desc *desc,
> - struct scatterlist *dst, struct scatterlist *src,
> - unsigned int nbytes)
> +static int ecb_des3_decrypt(struct skcipher_request *req)
> {
> - struct blkcipher_walk walk;
> -
> - blkcipher_walk_init(&walk, dst, src, nbytes);
> - return ecb_desall_crypt(desc, CPACF_KM_TDEA_192 | CPACF_DECRYPT,
> - &walk);
> + return ecb_desall_crypt(req, CPACF_KM_TDEA_192 | CPACF_DECRYPT);
> }
>
> -static struct crypto_alg ecb_des3_alg = {
> - .cra_name = "ecb(des3_ede)",
> - .cra_driver_name = "ecb-des3_ede-s390",
> - .cra_priority = 400, /* combo: des3 + ecb */
> - .cra_flags = CRYPTO_ALG_TYPE_BLKCIPHER,
> - .cra_blocksize = DES_BLOCK_SIZE,
> - .cra_ctxsize = sizeof(struct s390_des_ctx),
> - .cra_type = &crypto_blkcipher_type,
> - .cra_module = THIS_MODULE,
> - .cra_u = {
> - .blkcipher = {
> - .min_keysize = DES3_KEY_SIZE,
> - .max_keysize = DES3_KEY_SIZE,
> - .setkey = des3_setkey,
> - .encrypt = ecb_des3_encrypt,
> - .decrypt = ecb_des3_decrypt,
> - }
> - }
> +static struct skcipher_alg ecb_des3_alg = {
> + .base.cra_name = "ecb(des3_ede)",
> + .base.cra_driver_name = "ecb-des3_ede-s390",
> + .base.cra_priority = 400, /* combo: des3 + ecb */
> + .base.cra_blocksize = DES_BLOCK_SIZE,
> + .base.cra_ctxsize = sizeof(struct s390_des_ctx),
> + .base.cra_module = THIS_MODULE,
> + .min_keysize = DES3_KEY_SIZE,
> + .max_keysize = DES3_KEY_SIZE,
> + .setkey = des3_setkey_skcipher,
> + .encrypt = ecb_des3_encrypt,
> + .decrypt = ecb_des3_decrypt,
> };
>
> -static int cbc_des3_encrypt(struct blkcipher_desc *desc,
> - struct scatterlist *dst, struct scatterlist *src,
> - unsigned int nbytes)
> +static int cbc_des3_encrypt(struct skcipher_request *req)
> {
> - struct blkcipher_walk walk;
> -
> - blkcipher_walk_init(&walk, dst, src, nbytes);
> - return cbc_desall_crypt(desc, CPACF_KMC_TDEA_192, &walk);
> + return cbc_desall_crypt(req, CPACF_KMC_TDEA_192);
> }
>
> -static int cbc_des3_decrypt(struct blkcipher_desc *desc,
> - struct scatterlist *dst, struct scatterlist *src,
> - unsigned int nbytes)
> +static int cbc_des3_decrypt(struct skcipher_request *req)
> {
> - struct blkcipher_walk walk;
> -
> - blkcipher_walk_init(&walk, dst, src, nbytes);
> - return cbc_desall_crypt(desc, CPACF_KMC_TDEA_192 | CPACF_DECRYPT,
> - &walk);
> + return cbc_desall_crypt(req, CPACF_KMC_TDEA_192 | CPACF_DECRYPT);
> }
>
> -static struct crypto_alg cbc_des3_alg = {
> - .cra_name = "cbc(des3_ede)",
> - .cra_driver_name = "cbc-des3_ede-s390",
> - .cra_priority = 400, /* combo: des3 + cbc */
> - .cra_flags = CRYPTO_ALG_TYPE_BLKCIPHER,
> - .cra_blocksize = DES_BLOCK_SIZE,
> - .cra_ctxsize = sizeof(struct s390_des_ctx),
> - .cra_type = &crypto_blkcipher_type,
> - .cra_module = THIS_MODULE,
> - .cra_u = {
> - .blkcipher = {
> - .min_keysize = DES3_KEY_SIZE,
> - .max_keysize = DES3_KEY_SIZE,
> - .ivsize = DES_BLOCK_SIZE,
> - .setkey = des3_setkey,
> - .encrypt = cbc_des3_encrypt,
> - .decrypt = cbc_des3_decrypt,
> - }
> - }
> +static struct skcipher_alg cbc_des3_alg = {
> + .base.cra_name = "cbc(des3_ede)",
> + .base.cra_driver_name = "cbc-des3_ede-s390",
> + .base.cra_priority = 400, /* combo: des3 + cbc */
> + .base.cra_blocksize = DES_BLOCK_SIZE,
> + .base.cra_ctxsize = sizeof(struct s390_des_ctx),
> + .base.cra_module = THIS_MODULE,
> + .min_keysize = DES3_KEY_SIZE,
> + .max_keysize = DES3_KEY_SIZE,
> + .ivsize = DES_BLOCK_SIZE,
> + .setkey = des3_setkey_skcipher,
> + .encrypt = cbc_des3_encrypt,
> + .decrypt = cbc_des3_decrypt,
> };
>
> static unsigned int __ctrblk_init(u8 *ctrptr, u8 *iv, unsigned int nbytes)
> @@ -364,128 +315,90 @@ static unsigned int __ctrblk_init(u8 *ctrptr, u8 *iv, unsigned int nbytes)
> return n;
> }
>
> -static int ctr_desall_crypt(struct blkcipher_desc *desc, unsigned long fc,
> - struct blkcipher_walk *walk)
> +static int ctr_desall_crypt(struct skcipher_request *req, unsigned long fc)
> {
> - struct s390_des_ctx *ctx = crypto_blkcipher_ctx(desc->tfm);
> + struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
> + struct s390_des_ctx *ctx = crypto_skcipher_ctx(tfm);
> u8 buf[DES_BLOCK_SIZE], *ctrptr;
> + struct skcipher_walk walk;
> unsigned int n, nbytes;
> int ret, locked;
>
> locked = mutex_trylock(&ctrblk_lock);
>
> - ret = blkcipher_walk_virt_block(desc, walk, DES_BLOCK_SIZE);
> - while ((nbytes = walk->nbytes) >= DES_BLOCK_SIZE) {
> + ret = skcipher_walk_virt(&walk, req, false);
> + while ((nbytes = walk.nbytes) >= DES_BLOCK_SIZE) {
> n = DES_BLOCK_SIZE;
> if (nbytes >= 2*DES_BLOCK_SIZE && locked)
> - n = __ctrblk_init(ctrblk, walk->iv, nbytes);
> - ctrptr = (n > DES_BLOCK_SIZE) ? ctrblk : walk->iv;
> - cpacf_kmctr(fc, ctx->key, walk->dst.virt.addr,
> - walk->src.virt.addr, n, ctrptr);
> + n = __ctrblk_init(ctrblk, walk.iv, nbytes);
> + ctrptr = (n > DES_BLOCK_SIZE) ? ctrblk : walk.iv;
> + cpacf_kmctr(fc, ctx->key, walk.dst.virt.addr,
> + walk.src.virt.addr, n, ctrptr);
> if (ctrptr == ctrblk)
> - memcpy(walk->iv, ctrptr + n - DES_BLOCK_SIZE,
> + memcpy(walk.iv, ctrptr + n - DES_BLOCK_SIZE,
> DES_BLOCK_SIZE);
> - crypto_inc(walk->iv, DES_BLOCK_SIZE);
> - ret = blkcipher_walk_done(desc, walk, nbytes - n);
> + crypto_inc(walk.iv, DES_BLOCK_SIZE);
> + ret = skcipher_walk_done(&walk, nbytes - n);
> }
> if (locked)
> mutex_unlock(&ctrblk_lock);
> /* final block may be < DES_BLOCK_SIZE, copy only nbytes */
> if (nbytes) {
> - cpacf_kmctr(fc, ctx->key, buf, walk->src.virt.addr,
> - DES_BLOCK_SIZE, walk->iv);
> - memcpy(walk->dst.virt.addr, buf, nbytes);
> - crypto_inc(walk->iv, DES_BLOCK_SIZE);
> - ret = blkcipher_walk_done(desc, walk, 0);
> + cpacf_kmctr(fc, ctx->key, buf, walk.src.virt.addr,
> + DES_BLOCK_SIZE, walk.iv);
> + memcpy(walk.dst.virt.addr, buf, nbytes);
> + crypto_inc(walk.iv, DES_BLOCK_SIZE);
> + ret = skcipher_walk_done(&walk, 0);
> }
> return ret;
> }
>
> -static int ctr_des_encrypt(struct blkcipher_desc *desc,
> - struct scatterlist *dst, struct scatterlist *src,
> - unsigned int nbytes)
> -{
> - struct blkcipher_walk walk;
> -
> - blkcipher_walk_init(&walk, dst, src, nbytes);
> - return ctr_desall_crypt(desc, CPACF_KMCTR_DEA, &walk);
> -}
> -
> -static int ctr_des_decrypt(struct blkcipher_desc *desc,
> - struct scatterlist *dst, struct scatterlist *src,
> - unsigned int nbytes)
> +static int ctr_des_crypt(struct skcipher_request *req)
> {
> - struct blkcipher_walk walk;
> -
> - blkcipher_walk_init(&walk, dst, src, nbytes);
> - return ctr_desall_crypt(desc, CPACF_KMCTR_DEA | CPACF_DECRYPT, &walk);
> + return ctr_desall_crypt(req, CPACF_KMCTR_DEA);
> }
>
> -static struct crypto_alg ctr_des_alg = {
> - .cra_name = "ctr(des)",
> - .cra_driver_name = "ctr-des-s390",
> - .cra_priority = 400, /* combo: des + ctr */
> - .cra_flags = CRYPTO_ALG_TYPE_BLKCIPHER,
> - .cra_blocksize = 1,
> - .cra_ctxsize = sizeof(struct s390_des_ctx),
> - .cra_type = &crypto_blkcipher_type,
> - .cra_module = THIS_MODULE,
> - .cra_u = {
> - .blkcipher = {
> - .min_keysize = DES_KEY_SIZE,
> - .max_keysize = DES_KEY_SIZE,
> - .ivsize = DES_BLOCK_SIZE,
> - .setkey = des_setkey,
> - .encrypt = ctr_des_encrypt,
> - .decrypt = ctr_des_decrypt,
> - }
> - }
> +static struct skcipher_alg ctr_des_alg = {
> + .base.cra_name = "ctr(des)",
> + .base.cra_driver_name = "ctr-des-s390",
> + .base.cra_priority = 400, /* combo: des + ctr */
> + .base.cra_blocksize = 1,
> + .base.cra_ctxsize = sizeof(struct s390_des_ctx),
> + .base.cra_module = THIS_MODULE,
> + .min_keysize = DES_KEY_SIZE,
> + .max_keysize = DES_KEY_SIZE,
> + .ivsize = DES_BLOCK_SIZE,
> + .setkey = des_setkey_skcipher,
> + .encrypt = ctr_des_crypt,
> + .decrypt = ctr_des_crypt,
> + .chunksize = DES_BLOCK_SIZE,
> };
>
> -static int ctr_des3_encrypt(struct blkcipher_desc *desc,
> - struct scatterlist *dst, struct scatterlist *src,
> - unsigned int nbytes)
> -{
> - struct blkcipher_walk walk;
> -
> - blkcipher_walk_init(&walk, dst, src, nbytes);
> - return ctr_desall_crypt(desc, CPACF_KMCTR_TDEA_192, &walk);
> -}
> -
> -static int ctr_des3_decrypt(struct blkcipher_desc *desc,
> - struct scatterlist *dst, struct scatterlist *src,
> - unsigned int nbytes)
> +static int ctr_des3_crypt(struct skcipher_request *req)
> {
> - struct blkcipher_walk walk;
> -
> - blkcipher_walk_init(&walk, dst, src, nbytes);
> - return ctr_desall_crypt(desc, CPACF_KMCTR_TDEA_192 | CPACF_DECRYPT,
> - &walk);
> + return ctr_desall_crypt(req, CPACF_KMCTR_TDEA_192);
> }
>
> -static struct crypto_alg ctr_des3_alg = {
> - .cra_name = "ctr(des3_ede)",
> - .cra_driver_name = "ctr-des3_ede-s390",
> - .cra_priority = 400, /* combo: des3 + ede */
> - .cra_flags = CRYPTO_ALG_TYPE_BLKCIPHER,
> - .cra_blocksize = 1,
> - .cra_ctxsize = sizeof(struct s390_des_ctx),
> - .cra_type = &crypto_blkcipher_type,
> - .cra_module = THIS_MODULE,
> - .cra_u = {
> - .blkcipher = {
> - .min_keysize = DES3_KEY_SIZE,
> - .max_keysize = DES3_KEY_SIZE,
> - .ivsize = DES_BLOCK_SIZE,
> - .setkey = des3_setkey,
> - .encrypt = ctr_des3_encrypt,
> - .decrypt = ctr_des3_decrypt,
> - }
> - }
> +static struct skcipher_alg ctr_des3_alg = {
> + .base.cra_name = "ctr(des3_ede)",
> + .base.cra_driver_name = "ctr-des3_ede-s390",
> + .base.cra_priority = 400, /* combo: des3 + ede */
> + .base.cra_blocksize = 1,
> + .base.cra_ctxsize = sizeof(struct s390_des_ctx),
> + .base.cra_module = THIS_MODULE,
> + .min_keysize = DES3_KEY_SIZE,
> + .max_keysize = DES3_KEY_SIZE,
> + .ivsize = DES_BLOCK_SIZE,
> + .setkey = des3_setkey_skcipher,
> + .encrypt = ctr_des3_crypt,
> + .decrypt = ctr_des3_crypt,
> + .chunksize = DES_BLOCK_SIZE,
> };
>
> -static struct crypto_alg *des_s390_algs_ptr[8];
> +static struct crypto_alg *des_s390_algs_ptr[2];
> static int des_s390_algs_num;
> +static struct skcipher_alg *des_s390_skciphers_ptr[6];
> +static int des_s390_skciphers_num;
>
> static int des_s390_register_alg(struct crypto_alg *alg)
> {
> @@ -497,10 +410,22 @@ static int des_s390_register_alg(struct crypto_alg *alg)
> return ret;
> }
>
> +static int des_s390_register_skcipher(struct skcipher_alg *alg)
> +{
> + int ret;
> +
> + ret = crypto_register_skcipher(alg);
> + if (!ret)
> + des_s390_skciphers_ptr[des_s390_skciphers_num++] = alg;
> + return ret;
> +}
> +
> static void des_s390_exit(void)
> {
> while (des_s390_algs_num--)
> crypto_unregister_alg(des_s390_algs_ptr[des_s390_algs_num]);
> + while (des_s390_skciphers_num--)
> + crypto_unregister_skcipher(des_s390_skciphers_ptr[des_s390_skciphers_num]);
> if (ctrblk)
> free_page((unsigned long) ctrblk);
> }
> @@ -518,12 +443,12 @@ static int __init des_s390_init(void)
> ret = des_s390_register_alg(&des_alg);
> if (ret)
> goto out_err;
> - ret = des_s390_register_alg(&ecb_des_alg);
> + ret = des_s390_register_skcipher(&ecb_des_alg);
> if (ret)
> goto out_err;
> }
> if (cpacf_test_func(&kmc_functions, CPACF_KMC_DEA)) {
> - ret = des_s390_register_alg(&cbc_des_alg);
> + ret = des_s390_register_skcipher(&cbc_des_alg);
> if (ret)
> goto out_err;
> }
> @@ -531,12 +456,12 @@ static int __init des_s390_init(void)
> ret = des_s390_register_alg(&des3_alg);
> if (ret)
> goto out_err;
> - ret = des_s390_register_alg(&ecb_des3_alg);
> + ret = des_s390_register_skcipher(&ecb_des3_alg);
> if (ret)
> goto out_err;
> }
> if (cpacf_test_func(&kmc_functions, CPACF_KMC_TDEA_192)) {
> - ret = des_s390_register_alg(&cbc_des3_alg);
> + ret = des_s390_register_skcipher(&cbc_des3_alg);
> if (ret)
> goto out_err;
> }
> @@ -551,12 +476,12 @@ static int __init des_s390_init(void)
> }
>
> if (cpacf_test_func(&kmctr_functions, CPACF_KMCTR_DEA)) {
> - ret = des_s390_register_alg(&ctr_des_alg);
> + ret = des_s390_register_skcipher(&ctr_des_alg);
> if (ret)
> goto out_err;
> }
> if (cpacf_test_func(&kmctr_functions, CPACF_KMCTR_TDEA_192)) {
> - ret = des_s390_register_alg(&ctr_des3_alg);
> + ret = des_s390_register_skcipher(&ctr_des3_alg);
> if (ret)
> goto out_err;
> }

Tested with the extended self-tests and my own tests via the AF_ALG interface; it works.
Thanks for this great work.
Reviewed-by: Harald Freudenberger <[email protected]>
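
For readers who want to reproduce that kind of check, here is a minimal,
hypothetical userspace sketch (not taken from this thread) that drives the
converted cbc(des3_ede) skcipher through the AF_ALG socket interface; the
key, IV, and message bytes are placeholders and error handling is omitted.

#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <linux/if_alg.h>

#ifndef SOL_ALG
#define SOL_ALG 279
#endif

int main(void)
{
	struct sockaddr_alg sa = {
		.salg_family = AF_ALG,
		.salg_type   = "skcipher",
		.salg_name   = "cbc(des3_ede)",
	};
	unsigned char key[24];			/* placeholder 3DES key */
	unsigned char iv[8] = { 0 };		/* placeholder IV */
	unsigned char buf[16] = "fifteen chars..";	/* two DES blocks incl. NUL */
	char cbuf[CMSG_SPACE(4) + CMSG_SPACE(sizeof(struct af_alg_iv) + 8)] = { 0 };
	struct msghdr msg = { 0 };
	struct cmsghdr *cmsg;
	struct af_alg_iv *alg_iv;
	struct iovec iov = { .iov_base = buf, .iov_len = sizeof(buf) };
	int tfmfd, opfd;
	unsigned int i;

	for (i = 0; i < sizeof(key); i++)	/* distinct bytes so K1 != K2 != K3 */
		key[i] = i;

	tfmfd = socket(AF_ALG, SOCK_SEQPACKET, 0);
	bind(tfmfd, (struct sockaddr *)&sa, sizeof(sa));
	setsockopt(tfmfd, SOL_ALG, ALG_SET_KEY, key, sizeof(key));
	opfd = accept(tfmfd, NULL, 0);

	msg.msg_control = cbuf;
	msg.msg_controllen = sizeof(cbuf);

	cmsg = CMSG_FIRSTHDR(&msg);
	cmsg->cmsg_level = SOL_ALG;
	cmsg->cmsg_type = ALG_SET_OP;
	cmsg->cmsg_len = CMSG_LEN(4);
	*(__u32 *)CMSG_DATA(cmsg) = ALG_OP_ENCRYPT;

	cmsg = CMSG_NXTHDR(&msg, cmsg);
	cmsg->cmsg_level = SOL_ALG;
	cmsg->cmsg_type = ALG_SET_IV;
	cmsg->cmsg_len = CMSG_LEN(sizeof(struct af_alg_iv) + sizeof(iv));
	alg_iv = (void *)CMSG_DATA(cmsg);
	alg_iv->ivlen = sizeof(iv);
	memcpy(alg_iv->iv, iv, sizeof(iv));

	msg.msg_iov = &iov;
	msg.msg_iovlen = 1;

	sendmsg(opfd, &msg, 0);
	read(opfd, buf, sizeof(buf));	/* buf now holds the ciphertext */

	close(opfd);
	close(tfmfd);
	return 0;
}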

2019-10-15 12:11:13

by Harald Freudenberger

[permalink] [raw]
Subject: Re: [RFT PATCH 1/3] crypto: s390/aes - convert to skcipher API

On 12.10.19 22:18, Eric Biggers wrote:
> From: Eric Biggers <[email protected]>
>
> Convert the glue code for the S390 CPACF implementations of AES-ECB,
> AES-CBC, AES-XTS, and AES-CTR from the deprecated "blkcipher" API to the
> "skcipher" API. This is needed in order for the blkcipher API to be
> removed.
>
> Note: I made CTR use the same function for encryption and decryption,
> since CTR encryption and decryption are identical.
>
> Signed-off-by: Eric Biggers <[email protected]>
> ---
> arch/s390/crypto/aes_s390.c | 609 ++++++++++++++----------------------
> 1 file changed, 234 insertions(+), 375 deletions(-)
>
> diff --git a/arch/s390/crypto/aes_s390.c b/arch/s390/crypto/aes_s390.c
> index 9803e96d2924..ead0b2c9881d 100644
> --- a/arch/s390/crypto/aes_s390.c
> +++ b/arch/s390/crypto/aes_s390.c
> @@ -44,7 +44,7 @@ struct s390_aes_ctx {
> int key_len;
> unsigned long fc;
> union {
> - struct crypto_sync_skcipher *blk;
> + struct crypto_skcipher *skcipher;
> struct crypto_cipher *cip;
> } fallback;
> };
> @@ -54,7 +54,7 @@ struct s390_xts_ctx {
> u8 pcc_key[32];
> int key_len;
> unsigned long fc;
> - struct crypto_sync_skcipher *fallback;
> + struct crypto_skcipher *fallback;
> };
>
> struct gcm_sg_walk {
> @@ -178,66 +178,41 @@ static struct crypto_alg aes_alg = {
> }
> };
>
> -static int setkey_fallback_blk(struct crypto_tfm *tfm, const u8 *key,
> - unsigned int len)
> +static int setkey_fallback_skcipher(struct crypto_skcipher *tfm, const u8 *key,
> + unsigned int len)
> {
> - struct s390_aes_ctx *sctx = crypto_tfm_ctx(tfm);
> - unsigned int ret;
> -
> - crypto_sync_skcipher_clear_flags(sctx->fallback.blk,
> - CRYPTO_TFM_REQ_MASK);
> - crypto_sync_skcipher_set_flags(sctx->fallback.blk, tfm->crt_flags &
> - CRYPTO_TFM_REQ_MASK);
> -
> - ret = crypto_sync_skcipher_setkey(sctx->fallback.blk, key, len);
> -
> - tfm->crt_flags &= ~CRYPTO_TFM_RES_MASK;
> - tfm->crt_flags |= crypto_sync_skcipher_get_flags(sctx->fallback.blk) &
> - CRYPTO_TFM_RES_MASK;
> -
> - return ret;
> -}
> -
> -static int fallback_blk_dec(struct blkcipher_desc *desc,
> - struct scatterlist *dst, struct scatterlist *src,
> - unsigned int nbytes)
> -{
> - unsigned int ret;
> - struct crypto_blkcipher *tfm = desc->tfm;
> - struct s390_aes_ctx *sctx = crypto_blkcipher_ctx(tfm);
> - SYNC_SKCIPHER_REQUEST_ON_STACK(req, sctx->fallback.blk);
> -
> - skcipher_request_set_sync_tfm(req, sctx->fallback.blk);
> - skcipher_request_set_callback(req, desc->flags, NULL, NULL);
> - skcipher_request_set_crypt(req, src, dst, nbytes, desc->info);
> -
> - ret = crypto_skcipher_decrypt(req);
> + struct s390_aes_ctx *sctx = crypto_skcipher_ctx(tfm);
> + int ret;
>
> - skcipher_request_zero(req);
> + crypto_skcipher_clear_flags(sctx->fallback.skcipher,
> + CRYPTO_TFM_REQ_MASK);
> + crypto_skcipher_set_flags(sctx->fallback.skcipher,
> + crypto_skcipher_get_flags(tfm) &
> + CRYPTO_TFM_REQ_MASK);
> + ret = crypto_skcipher_setkey(sctx->fallback.skcipher, key, len);
> + crypto_skcipher_set_flags(tfm,
> + crypto_skcipher_get_flags(sctx->fallback.skcipher) &
> + CRYPTO_TFM_RES_MASK);
> return ret;
> }
>
> -static int fallback_blk_enc(struct blkcipher_desc *desc,
> - struct scatterlist *dst, struct scatterlist *src,
> - unsigned int nbytes)
> +static int fallback_skcipher_crypt(struct s390_aes_ctx *sctx,
> + struct skcipher_request *req,
> + unsigned long modifier)
> {
> - unsigned int ret;
> - struct crypto_blkcipher *tfm = desc->tfm;
> - struct s390_aes_ctx *sctx = crypto_blkcipher_ctx(tfm);
> - SYNC_SKCIPHER_REQUEST_ON_STACK(req, sctx->fallback.blk);
> -
> - skcipher_request_set_sync_tfm(req, sctx->fallback.blk);
> - skcipher_request_set_callback(req, desc->flags, NULL, NULL);
> - skcipher_request_set_crypt(req, src, dst, nbytes, desc->info);
> + struct skcipher_request *subreq = skcipher_request_ctx(req);
>
> - ret = crypto_skcipher_encrypt(req);
> - return ret;
> + *subreq = *req;
> + skcipher_request_set_tfm(subreq, sctx->fallback.skcipher);
> + return (modifier & CPACF_DECRYPT) ?
> + crypto_skcipher_decrypt(subreq) :
> + crypto_skcipher_encrypt(subreq);
> }
>
> -static int ecb_aes_set_key(struct crypto_tfm *tfm, const u8 *in_key,
> +static int ecb_aes_set_key(struct crypto_skcipher *tfm, const u8 *in_key,
> unsigned int key_len)
> {
> - struct s390_aes_ctx *sctx = crypto_tfm_ctx(tfm);
> + struct s390_aes_ctx *sctx = crypto_skcipher_ctx(tfm);
> unsigned long fc;
>
> /* Pick the correct function code based on the key length */
> @@ -248,111 +223,92 @@ static int ecb_aes_set_key(struct crypto_tfm *tfm, const u8 *in_key,
> /* Check if the function code is available */
> sctx->fc = (fc && cpacf_test_func(&km_functions, fc)) ? fc : 0;
> if (!sctx->fc)
> - return setkey_fallback_blk(tfm, in_key, key_len);
> + return setkey_fallback_skcipher(tfm, in_key, key_len);
>
> sctx->key_len = key_len;
> memcpy(sctx->key, in_key, key_len);
> return 0;
> }
>
> -static int ecb_aes_crypt(struct blkcipher_desc *desc, unsigned long modifier,
> - struct blkcipher_walk *walk)
> +static int ecb_aes_crypt(struct skcipher_request *req, unsigned long modifier)
> {
> - struct s390_aes_ctx *sctx = crypto_blkcipher_ctx(desc->tfm);
> + struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
> + struct s390_aes_ctx *sctx = crypto_skcipher_ctx(tfm);
> + struct skcipher_walk walk;
> unsigned int nbytes, n;
> int ret;
>
> - ret = blkcipher_walk_virt(desc, walk);
> - while ((nbytes = walk->nbytes) >= AES_BLOCK_SIZE) {
> + if (unlikely(!sctx->fc))
> + return fallback_skcipher_crypt(sctx, req, modifier);
> +
> + ret = skcipher_walk_virt(&walk, req, false);
> + while ((nbytes = walk.nbytes) != 0) {
> /* only use complete blocks */
> n = nbytes & ~(AES_BLOCK_SIZE - 1);
> cpacf_km(sctx->fc | modifier, sctx->key,
> - walk->dst.virt.addr, walk->src.virt.addr, n);
> - ret = blkcipher_walk_done(desc, walk, nbytes - n);
> + walk.dst.virt.addr, walk.src.virt.addr, n);
> + ret = skcipher_walk_done(&walk, nbytes - n);
> }
> -
> return ret;
> }
>
> -static int ecb_aes_encrypt(struct blkcipher_desc *desc,
> - struct scatterlist *dst, struct scatterlist *src,
> - unsigned int nbytes)
> +static int ecb_aes_encrypt(struct skcipher_request *req)
> {
> - struct s390_aes_ctx *sctx = crypto_blkcipher_ctx(desc->tfm);
> - struct blkcipher_walk walk;
> -
> - if (unlikely(!sctx->fc))
> - return fallback_blk_enc(desc, dst, src, nbytes);
> -
> - blkcipher_walk_init(&walk, dst, src, nbytes);
> - return ecb_aes_crypt(desc, 0, &walk);
> + return ecb_aes_crypt(req, 0);
> }
>
> -static int ecb_aes_decrypt(struct blkcipher_desc *desc,
> - struct scatterlist *dst, struct scatterlist *src,
> - unsigned int nbytes)
> +static int ecb_aes_decrypt(struct skcipher_request *req)
> {
> - struct s390_aes_ctx *sctx = crypto_blkcipher_ctx(desc->tfm);
> - struct blkcipher_walk walk;
> -
> - if (unlikely(!sctx->fc))
> - return fallback_blk_dec(desc, dst, src, nbytes);
> -
> - blkcipher_walk_init(&walk, dst, src, nbytes);
> - return ecb_aes_crypt(desc, CPACF_DECRYPT, &walk);
> + return ecb_aes_crypt(req, CPACF_DECRYPT);
> }
>
> -static int fallback_init_blk(struct crypto_tfm *tfm)
> +static int fallback_init_skcipher(struct crypto_skcipher *tfm)
> {
> - const char *name = tfm->__crt_alg->cra_name;
> - struct s390_aes_ctx *sctx = crypto_tfm_ctx(tfm);
> + const char *name = crypto_tfm_alg_name(&tfm->base);
> + struct s390_aes_ctx *sctx = crypto_skcipher_ctx(tfm);
>
> - sctx->fallback.blk = crypto_alloc_sync_skcipher(name, 0,
> - CRYPTO_ALG_NEED_FALLBACK);
> + sctx->fallback.skcipher = crypto_alloc_skcipher(name, 0,
> + CRYPTO_ALG_NEED_FALLBACK | CRYPTO_ALG_ASYNC);
>
> - if (IS_ERR(sctx->fallback.blk)) {
> + if (IS_ERR(sctx->fallback.skcipher)) {
> pr_err("Allocating AES fallback algorithm %s failed\n",
> name);
> - return PTR_ERR(sctx->fallback.blk);
> + return PTR_ERR(sctx->fallback.skcipher);
> }
>
> + crypto_skcipher_set_reqsize(tfm, sizeof(struct skcipher_request) +
> + crypto_skcipher_reqsize(sctx->fallback.skcipher));
> return 0;
> }
>
> -static void fallback_exit_blk(struct crypto_tfm *tfm)
> +static void fallback_exit_skcipher(struct crypto_skcipher *tfm)
> {
> - struct s390_aes_ctx *sctx = crypto_tfm_ctx(tfm);
> + struct s390_aes_ctx *sctx = crypto_skcipher_ctx(tfm);
>
> - crypto_free_sync_skcipher(sctx->fallback.blk);
> + crypto_free_skcipher(sctx->fallback.skcipher);
> }
>
> -static struct crypto_alg ecb_aes_alg = {
> - .cra_name = "ecb(aes)",
> - .cra_driver_name = "ecb-aes-s390",
> - .cra_priority = 401, /* combo: aes + ecb + 1 */
> - .cra_flags = CRYPTO_ALG_TYPE_BLKCIPHER |
> - CRYPTO_ALG_NEED_FALLBACK,
> - .cra_blocksize = AES_BLOCK_SIZE,
> - .cra_ctxsize = sizeof(struct s390_aes_ctx),
> - .cra_type = &crypto_blkcipher_type,
> - .cra_module = THIS_MODULE,
> - .cra_init = fallback_init_blk,
> - .cra_exit = fallback_exit_blk,
> - .cra_u = {
> - .blkcipher = {
> - .min_keysize = AES_MIN_KEY_SIZE,
> - .max_keysize = AES_MAX_KEY_SIZE,
> - .setkey = ecb_aes_set_key,
> - .encrypt = ecb_aes_encrypt,
> - .decrypt = ecb_aes_decrypt,
> - }
> - }
> +static struct skcipher_alg ecb_aes_alg = {
> + .base.cra_name = "ecb(aes)",
> + .base.cra_driver_name = "ecb-aes-s390",
> + .base.cra_priority = 401, /* combo: aes + ecb + 1 */
> + .base.cra_flags = CRYPTO_ALG_NEED_FALLBACK,
> + .base.cra_blocksize = AES_BLOCK_SIZE,
> + .base.cra_ctxsize = sizeof(struct s390_aes_ctx),
> + .base.cra_module = THIS_MODULE,
> + .init = fallback_init_skcipher,
> + .exit = fallback_exit_skcipher,
> + .min_keysize = AES_MIN_KEY_SIZE,
> + .max_keysize = AES_MAX_KEY_SIZE,
> + .setkey = ecb_aes_set_key,
> + .encrypt = ecb_aes_encrypt,
> + .decrypt = ecb_aes_decrypt,
> };
>
> -static int cbc_aes_set_key(struct crypto_tfm *tfm, const u8 *in_key,
> +static int cbc_aes_set_key(struct crypto_skcipher *tfm, const u8 *in_key,
> unsigned int key_len)
> {
> - struct s390_aes_ctx *sctx = crypto_tfm_ctx(tfm);
> + struct s390_aes_ctx *sctx = crypto_skcipher_ctx(tfm);
> unsigned long fc;
>
> /* Pick the correct function code based on the key length */
> @@ -363,17 +319,18 @@ static int cbc_aes_set_key(struct crypto_tfm *tfm, const u8 *in_key,
> /* Check if the function code is available */
> sctx->fc = (fc && cpacf_test_func(&kmc_functions, fc)) ? fc : 0;
> if (!sctx->fc)
> - return setkey_fallback_blk(tfm, in_key, key_len);
> + return setkey_fallback_skcipher(tfm, in_key, key_len);
>
> sctx->key_len = key_len;
> memcpy(sctx->key, in_key, key_len);
> return 0;
> }
>
> -static int cbc_aes_crypt(struct blkcipher_desc *desc, unsigned long modifier,
> - struct blkcipher_walk *walk)
> +static int cbc_aes_crypt(struct skcipher_request *req, unsigned long modifier)
> {
> - struct s390_aes_ctx *sctx = crypto_blkcipher_ctx(desc->tfm);
> + struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
> + struct s390_aes_ctx *sctx = crypto_skcipher_ctx(tfm);
> + struct skcipher_walk walk;
> unsigned int nbytes, n;
> int ret;
> struct {
> @@ -381,134 +338,74 @@ static int cbc_aes_crypt(struct blkcipher_desc *desc, unsigned long modifier,
> u8 key[AES_MAX_KEY_SIZE];
> } param;
>
> - ret = blkcipher_walk_virt(desc, walk);
> - memcpy(param.iv, walk->iv, AES_BLOCK_SIZE);
> + if (unlikely(!sctx->fc))
> + return fallback_skcipher_crypt(sctx, req, modifier);
> +
> + ret = skcipher_walk_virt(&walk, req, false);
> + if (ret)
> + return ret;
> + memcpy(param.iv, walk.iv, AES_BLOCK_SIZE);
> memcpy(param.key, sctx->key, sctx->key_len);
> - while ((nbytes = walk->nbytes) >= AES_BLOCK_SIZE) {
> + while ((nbytes = walk.nbytes) != 0) {
> /* only use complete blocks */
> n = nbytes & ~(AES_BLOCK_SIZE - 1);
> cpacf_kmc(sctx->fc | modifier, &param,
> - walk->dst.virt.addr, walk->src.virt.addr, n);
> - ret = blkcipher_walk_done(desc, walk, nbytes - n);
> + walk.dst.virt.addr, walk.src.virt.addr, n);
> + memcpy(walk.iv, param.iv, AES_BLOCK_SIZE);
> + ret = skcipher_walk_done(&walk, nbytes - n);
> }
> - memcpy(walk->iv, param.iv, AES_BLOCK_SIZE);
> return ret;
> }
>
> -static int cbc_aes_encrypt(struct blkcipher_desc *desc,
> - struct scatterlist *dst, struct scatterlist *src,
> - unsigned int nbytes)
> +static int cbc_aes_encrypt(struct skcipher_request *req)
> {
> - struct s390_aes_ctx *sctx = crypto_blkcipher_ctx(desc->tfm);
> - struct blkcipher_walk walk;
> -
> - if (unlikely(!sctx->fc))
> - return fallback_blk_enc(desc, dst, src, nbytes);
> -
> - blkcipher_walk_init(&walk, dst, src, nbytes);
> - return cbc_aes_crypt(desc, 0, &walk);
> + return cbc_aes_crypt(req, 0);
> }
>
> -static int cbc_aes_decrypt(struct blkcipher_desc *desc,
> - struct scatterlist *dst, struct scatterlist *src,
> - unsigned int nbytes)
> +static int cbc_aes_decrypt(struct skcipher_request *req)
> {
> - struct s390_aes_ctx *sctx = crypto_blkcipher_ctx(desc->tfm);
> - struct blkcipher_walk walk;
> -
> - if (unlikely(!sctx->fc))
> - return fallback_blk_dec(desc, dst, src, nbytes);
> -
> - blkcipher_walk_init(&walk, dst, src, nbytes);
> - return cbc_aes_crypt(desc, CPACF_DECRYPT, &walk);
> + return cbc_aes_crypt(req, CPACF_DECRYPT);
> }
>
> -static struct crypto_alg cbc_aes_alg = {
> - .cra_name = "cbc(aes)",
> - .cra_driver_name = "cbc-aes-s390",
> - .cra_priority = 402, /* ecb-aes-s390 + 1 */
> - .cra_flags = CRYPTO_ALG_TYPE_BLKCIPHER |
> - CRYPTO_ALG_NEED_FALLBACK,
> - .cra_blocksize = AES_BLOCK_SIZE,
> - .cra_ctxsize = sizeof(struct s390_aes_ctx),
> - .cra_type = &crypto_blkcipher_type,
> - .cra_module = THIS_MODULE,
> - .cra_init = fallback_init_blk,
> - .cra_exit = fallback_exit_blk,
> - .cra_u = {
> - .blkcipher = {
> - .min_keysize = AES_MIN_KEY_SIZE,
> - .max_keysize = AES_MAX_KEY_SIZE,
> - .ivsize = AES_BLOCK_SIZE,
> - .setkey = cbc_aes_set_key,
> - .encrypt = cbc_aes_encrypt,
> - .decrypt = cbc_aes_decrypt,
> - }
> - }
> +static struct skcipher_alg cbc_aes_alg = {
> + .base.cra_name = "cbc(aes)",
> + .base.cra_driver_name = "cbc-aes-s390",
> + .base.cra_priority = 402, /* ecb-aes-s390 + 1 */
> + .base.cra_flags = CRYPTO_ALG_NEED_FALLBACK,
> + .base.cra_blocksize = AES_BLOCK_SIZE,
> + .base.cra_ctxsize = sizeof(struct s390_aes_ctx),
> + .base.cra_module = THIS_MODULE,
> + .init = fallback_init_skcipher,
> + .exit = fallback_exit_skcipher,
> + .min_keysize = AES_MIN_KEY_SIZE,
> + .max_keysize = AES_MAX_KEY_SIZE,
> + .ivsize = AES_BLOCK_SIZE,
> + .setkey = cbc_aes_set_key,
> + .encrypt = cbc_aes_encrypt,
> + .decrypt = cbc_aes_decrypt,
> };
>
> -static int xts_fallback_setkey(struct crypto_tfm *tfm, const u8 *key,
> - unsigned int len)
> -{
> - struct s390_xts_ctx *xts_ctx = crypto_tfm_ctx(tfm);
> - unsigned int ret;
> -
> - crypto_sync_skcipher_clear_flags(xts_ctx->fallback,
> - CRYPTO_TFM_REQ_MASK);
> - crypto_sync_skcipher_set_flags(xts_ctx->fallback, tfm->crt_flags &
> - CRYPTO_TFM_REQ_MASK);
> -
> - ret = crypto_sync_skcipher_setkey(xts_ctx->fallback, key, len);
> -
> - tfm->crt_flags &= ~CRYPTO_TFM_RES_MASK;
> - tfm->crt_flags |= crypto_sync_skcipher_get_flags(xts_ctx->fallback) &
> - CRYPTO_TFM_RES_MASK;
> -
> - return ret;
> -}
> -
> -static int xts_fallback_decrypt(struct blkcipher_desc *desc,
> - struct scatterlist *dst, struct scatterlist *src,
> - unsigned int nbytes)
> -{
> - struct crypto_blkcipher *tfm = desc->tfm;
> - struct s390_xts_ctx *xts_ctx = crypto_blkcipher_ctx(tfm);
> - SYNC_SKCIPHER_REQUEST_ON_STACK(req, xts_ctx->fallback);
> - unsigned int ret;
> -
> - skcipher_request_set_sync_tfm(req, xts_ctx->fallback);
> - skcipher_request_set_callback(req, desc->flags, NULL, NULL);
> - skcipher_request_set_crypt(req, src, dst, nbytes, desc->info);
> -
> - ret = crypto_skcipher_decrypt(req);
> -
> - skcipher_request_zero(req);
> - return ret;
> -}
> -
> -static int xts_fallback_encrypt(struct blkcipher_desc *desc,
> - struct scatterlist *dst, struct scatterlist *src,
> - unsigned int nbytes)
> +static int xts_fallback_setkey(struct crypto_skcipher *tfm, const u8 *key,
> + unsigned int len)
> {
> - struct crypto_blkcipher *tfm = desc->tfm;
> - struct s390_xts_ctx *xts_ctx = crypto_blkcipher_ctx(tfm);
> - SYNC_SKCIPHER_REQUEST_ON_STACK(req, xts_ctx->fallback);
> - unsigned int ret;
> -
> - skcipher_request_set_sync_tfm(req, xts_ctx->fallback);
> - skcipher_request_set_callback(req, desc->flags, NULL, NULL);
> - skcipher_request_set_crypt(req, src, dst, nbytes, desc->info);
> -
> - ret = crypto_skcipher_encrypt(req);
> + struct s390_xts_ctx *xts_ctx = crypto_skcipher_ctx(tfm);
> + int ret;
>
> - skcipher_request_zero(req);
> + crypto_skcipher_clear_flags(xts_ctx->fallback, CRYPTO_TFM_REQ_MASK);
> + crypto_skcipher_set_flags(xts_ctx->fallback,
> + crypto_skcipher_get_flags(tfm) &
> + CRYPTO_TFM_REQ_MASK);
> + ret = crypto_skcipher_setkey(xts_ctx->fallback, key, len);
> + crypto_skcipher_set_flags(tfm,
> + crypto_skcipher_get_flags(xts_ctx->fallback) &
> + CRYPTO_TFM_RES_MASK);
> return ret;
> }
>
> -static int xts_aes_set_key(struct crypto_tfm *tfm, const u8 *in_key,
> +static int xts_aes_set_key(struct crypto_skcipher *tfm, const u8 *in_key,
> unsigned int key_len)
> {
> - struct s390_xts_ctx *xts_ctx = crypto_tfm_ctx(tfm);
> + struct s390_xts_ctx *xts_ctx = crypto_skcipher_ctx(tfm);
> unsigned long fc;
> int err;
>
> @@ -518,7 +415,7 @@ static int xts_aes_set_key(struct crypto_tfm *tfm, const u8 *in_key,
>
> /* In fips mode only 128 bit or 256 bit keys are valid */
> if (fips_enabled && key_len != 32 && key_len != 64) {
> - tfm->crt_flags |= CRYPTO_TFM_RES_BAD_KEY_LEN;
> + crypto_skcipher_set_flags(tfm, CRYPTO_TFM_RES_BAD_KEY_LEN);
> return -EINVAL;
> }
>
> @@ -539,10 +436,11 @@ static int xts_aes_set_key(struct crypto_tfm *tfm, const u8 *in_key,
> return 0;
> }
>
> -static int xts_aes_crypt(struct blkcipher_desc *desc, unsigned long modifier,
> - struct blkcipher_walk *walk)
> +static int xts_aes_crypt(struct skcipher_request *req, unsigned long modifier)
> {
> - struct s390_xts_ctx *xts_ctx = crypto_blkcipher_ctx(desc->tfm);
> + struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
> + struct s390_xts_ctx *xts_ctx = crypto_skcipher_ctx(tfm);
> + struct skcipher_walk walk;
> unsigned int offset, nbytes, n;
> int ret;
> struct {
> @@ -557,113 +455,100 @@ static int xts_aes_crypt(struct blkcipher_desc *desc, unsigned long modifier,
> u8 init[16];
> } xts_param;
>
> - ret = blkcipher_walk_virt(desc, walk);
> + if (req->cryptlen < AES_BLOCK_SIZE)
> + return -EINVAL;
> +
> + if (unlikely(!xts_ctx->fc || (req->cryptlen % AES_BLOCK_SIZE) != 0)) {
> + struct skcipher_request *subreq = skcipher_request_ctx(req);
> +
> + *subreq = *req;
> + skcipher_request_set_tfm(subreq, xts_ctx->fallback);
> + return (modifier & CPACF_DECRYPT) ?
> + crypto_skcipher_decrypt(subreq) :
> + crypto_skcipher_encrypt(subreq);
> + }
> +
> + ret = skcipher_walk_virt(&walk, req, false);
> + if (ret)
> + return ret;
> offset = xts_ctx->key_len & 0x10;
> memset(pcc_param.block, 0, sizeof(pcc_param.block));
> memset(pcc_param.bit, 0, sizeof(pcc_param.bit));
> memset(pcc_param.xts, 0, sizeof(pcc_param.xts));
> - memcpy(pcc_param.tweak, walk->iv, sizeof(pcc_param.tweak));
> + memcpy(pcc_param.tweak, walk.iv, sizeof(pcc_param.tweak));
> memcpy(pcc_param.key + offset, xts_ctx->pcc_key, xts_ctx->key_len);
> cpacf_pcc(xts_ctx->fc, pcc_param.key + offset);
>
> memcpy(xts_param.key + offset, xts_ctx->key, xts_ctx->key_len);
> memcpy(xts_param.init, pcc_param.xts, 16);
>
> - while ((nbytes = walk->nbytes) >= AES_BLOCK_SIZE) {
> + while ((nbytes = walk.nbytes) != 0) {
> /* only use complete blocks */
> n = nbytes & ~(AES_BLOCK_SIZE - 1);
> cpacf_km(xts_ctx->fc | modifier, xts_param.key + offset,
> - walk->dst.virt.addr, walk->src.virt.addr, n);
> - ret = blkcipher_walk_done(desc, walk, nbytes - n);
> + walk.dst.virt.addr, walk.src.virt.addr, n);
> + ret = skcipher_walk_done(&walk, nbytes - n);
> }
> return ret;
> }
>
> -static int xts_aes_encrypt(struct blkcipher_desc *desc,
> - struct scatterlist *dst, struct scatterlist *src,
> - unsigned int nbytes)
> +static int xts_aes_encrypt(struct skcipher_request *req)
> {
> - struct s390_xts_ctx *xts_ctx = crypto_blkcipher_ctx(desc->tfm);
> - struct blkcipher_walk walk;
> -
> - if (!nbytes)
> - return -EINVAL;
> -
> - if (unlikely(!xts_ctx->fc || (nbytes % XTS_BLOCK_SIZE) != 0))
> - return xts_fallback_encrypt(desc, dst, src, nbytes);
> -
> - blkcipher_walk_init(&walk, dst, src, nbytes);
> - return xts_aes_crypt(desc, 0, &walk);
> + return xts_aes_crypt(req, 0);
> }
>
> -static int xts_aes_decrypt(struct blkcipher_desc *desc,
> - struct scatterlist *dst, struct scatterlist *src,
> - unsigned int nbytes)
> +static int xts_aes_decrypt(struct skcipher_request *req)
> {
> - struct s390_xts_ctx *xts_ctx = crypto_blkcipher_ctx(desc->tfm);
> - struct blkcipher_walk walk;
> -
> - if (!nbytes)
> - return -EINVAL;
> -
> - if (unlikely(!xts_ctx->fc || (nbytes % XTS_BLOCK_SIZE) != 0))
> - return xts_fallback_decrypt(desc, dst, src, nbytes);
> -
> - blkcipher_walk_init(&walk, dst, src, nbytes);
> - return xts_aes_crypt(desc, CPACF_DECRYPT, &walk);
> + return xts_aes_crypt(req, CPACF_DECRYPT);
> }
>
> -static int xts_fallback_init(struct crypto_tfm *tfm)
> +static int xts_fallback_init(struct crypto_skcipher *tfm)
> {
> - const char *name = tfm->__crt_alg->cra_name;
> - struct s390_xts_ctx *xts_ctx = crypto_tfm_ctx(tfm);
> + const char *name = crypto_tfm_alg_name(&tfm->base);
> + struct s390_xts_ctx *xts_ctx = crypto_skcipher_ctx(tfm);
>
> - xts_ctx->fallback = crypto_alloc_sync_skcipher(name, 0,
> - CRYPTO_ALG_NEED_FALLBACK);
> + xts_ctx->fallback = crypto_alloc_skcipher(name, 0,
> + CRYPTO_ALG_NEED_FALLBACK | CRYPTO_ALG_ASYNC);
>
> if (IS_ERR(xts_ctx->fallback)) {
> pr_err("Allocating XTS fallback algorithm %s failed\n",
> name);
> return PTR_ERR(xts_ctx->fallback);
> }
> + crypto_skcipher_set_reqsize(tfm, sizeof(struct skcipher_request) +
> + crypto_skcipher_reqsize(xts_ctx->fallback));
> return 0;
> }
>
> -static void xts_fallback_exit(struct crypto_tfm *tfm)
> +static void xts_fallback_exit(struct crypto_skcipher *tfm)
> {
> - struct s390_xts_ctx *xts_ctx = crypto_tfm_ctx(tfm);
> + struct s390_xts_ctx *xts_ctx = crypto_skcipher_ctx(tfm);
>
> - crypto_free_sync_skcipher(xts_ctx->fallback);
> + crypto_free_skcipher(xts_ctx->fallback);
> }
>
> -static struct crypto_alg xts_aes_alg = {
> - .cra_name = "xts(aes)",
> - .cra_driver_name = "xts-aes-s390",
> - .cra_priority = 402, /* ecb-aes-s390 + 1 */
> - .cra_flags = CRYPTO_ALG_TYPE_BLKCIPHER |
> - CRYPTO_ALG_NEED_FALLBACK,
> - .cra_blocksize = AES_BLOCK_SIZE,
> - .cra_ctxsize = sizeof(struct s390_xts_ctx),
> - .cra_type = &crypto_blkcipher_type,
> - .cra_module = THIS_MODULE,
> - .cra_init = xts_fallback_init,
> - .cra_exit = xts_fallback_exit,
> - .cra_u = {
> - .blkcipher = {
> - .min_keysize = 2 * AES_MIN_KEY_SIZE,
> - .max_keysize = 2 * AES_MAX_KEY_SIZE,
> - .ivsize = AES_BLOCK_SIZE,
> - .setkey = xts_aes_set_key,
> - .encrypt = xts_aes_encrypt,
> - .decrypt = xts_aes_decrypt,
> - }
> - }
> +static struct skcipher_alg xts_aes_alg = {
> + .base.cra_name = "xts(aes)",
> + .base.cra_driver_name = "xts-aes-s390",
> + .base.cra_priority = 402, /* ecb-aes-s390 + 1 */
> + .base.cra_flags = CRYPTO_ALG_NEED_FALLBACK,
> + .base.cra_blocksize = AES_BLOCK_SIZE,
> + .base.cra_ctxsize = sizeof(struct s390_xts_ctx),
> + .base.cra_module = THIS_MODULE,
> + .init = xts_fallback_init,
> + .exit = xts_fallback_exit,
> + .min_keysize = 2 * AES_MIN_KEY_SIZE,
> + .max_keysize = 2 * AES_MAX_KEY_SIZE,
> + .ivsize = AES_BLOCK_SIZE,
> + .setkey = xts_aes_set_key,
> + .encrypt = xts_aes_encrypt,
> + .decrypt = xts_aes_decrypt,
> };
>
> -static int ctr_aes_set_key(struct crypto_tfm *tfm, const u8 *in_key,
> +static int ctr_aes_set_key(struct crypto_skcipher *tfm, const u8 *in_key,
> unsigned int key_len)
> {
> - struct s390_aes_ctx *sctx = crypto_tfm_ctx(tfm);
> + struct s390_aes_ctx *sctx = crypto_skcipher_ctx(tfm);
> unsigned long fc;
>
> /* Pick the correct function code based on the key length */
> @@ -674,7 +559,7 @@ static int ctr_aes_set_key(struct crypto_tfm *tfm, const u8 *in_key,
> /* Check if the function code is available */
> sctx->fc = (fc && cpacf_test_func(&kmctr_functions, fc)) ? fc : 0;
> if (!sctx->fc)
> - return setkey_fallback_blk(tfm, in_key, key_len);
> + return setkey_fallback_skcipher(tfm, in_key, key_len);
>
> sctx->key_len = key_len;
> memcpy(sctx->key, in_key, key_len);
> @@ -696,30 +581,34 @@ static unsigned int __ctrblk_init(u8 *ctrptr, u8 *iv, unsigned int nbytes)
> return n;
> }
>
> -static int ctr_aes_crypt(struct blkcipher_desc *desc, unsigned long modifier,
> - struct blkcipher_walk *walk)
> +static int ctr_aes_crypt(struct skcipher_request *req)
> {
> - struct s390_aes_ctx *sctx = crypto_blkcipher_ctx(desc->tfm);
> + struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
> + struct s390_aes_ctx *sctx = crypto_skcipher_ctx(tfm);
> u8 buf[AES_BLOCK_SIZE], *ctrptr;
> + struct skcipher_walk walk;
> unsigned int n, nbytes;
> int ret, locked;
>
> + if (unlikely(!sctx->fc))
> + return fallback_skcipher_crypt(sctx, req, 0);
> +
> locked = mutex_trylock(&ctrblk_lock);
>
> - ret = blkcipher_walk_virt_block(desc, walk, AES_BLOCK_SIZE);
> - while ((nbytes = walk->nbytes) >= AES_BLOCK_SIZE) {
> + ret = skcipher_walk_virt(&walk, req, false);
> + while ((nbytes = walk.nbytes) >= AES_BLOCK_SIZE) {
> n = AES_BLOCK_SIZE;
> +
> if (nbytes >= 2*AES_BLOCK_SIZE && locked)
> - n = __ctrblk_init(ctrblk, walk->iv, nbytes);
> - ctrptr = (n > AES_BLOCK_SIZE) ? ctrblk : walk->iv;
> - cpacf_kmctr(sctx->fc | modifier, sctx->key,
> - walk->dst.virt.addr, walk->src.virt.addr,
> - n, ctrptr);
> + n = __ctrblk_init(ctrblk, walk.iv, nbytes);
> + ctrptr = (n > AES_BLOCK_SIZE) ? ctrblk : walk.iv;
> + cpacf_kmctr(sctx->fc, sctx->key, walk.dst.virt.addr,
> + walk.src.virt.addr, n, ctrptr);
> if (ctrptr == ctrblk)
> - memcpy(walk->iv, ctrptr + n - AES_BLOCK_SIZE,
> + memcpy(walk.iv, ctrptr + n - AES_BLOCK_SIZE,
> AES_BLOCK_SIZE);
> - crypto_inc(walk->iv, AES_BLOCK_SIZE);
> - ret = blkcipher_walk_done(desc, walk, nbytes - n);
> + crypto_inc(walk.iv, AES_BLOCK_SIZE);
> + ret = skcipher_walk_done(&walk, nbytes - n);
> }
> if (locked)
> mutex_unlock(&ctrblk_lock);
> @@ -727,67 +616,33 @@ static int ctr_aes_crypt(struct blkcipher_desc *desc, unsigned long modifier,
> * final block may be < AES_BLOCK_SIZE, copy only nbytes
> */
> if (nbytes) {
> - cpacf_kmctr(sctx->fc | modifier, sctx->key,
> - buf, walk->src.virt.addr,
> - AES_BLOCK_SIZE, walk->iv);
> - memcpy(walk->dst.virt.addr, buf, nbytes);
> - crypto_inc(walk->iv, AES_BLOCK_SIZE);
> - ret = blkcipher_walk_done(desc, walk, 0);
> + cpacf_kmctr(sctx->fc, sctx->key, buf, walk.src.virt.addr,
> + AES_BLOCK_SIZE, walk.iv);
> + memcpy(walk.dst.virt.addr, buf, nbytes);
> + crypto_inc(walk.iv, AES_BLOCK_SIZE);
> + ret = skcipher_walk_done(&walk, 0);
> }
>
> return ret;
> }
>
> -static int ctr_aes_encrypt(struct blkcipher_desc *desc,
> - struct scatterlist *dst, struct scatterlist *src,
> - unsigned int nbytes)
> -{
> - struct s390_aes_ctx *sctx = crypto_blkcipher_ctx(desc->tfm);
> - struct blkcipher_walk walk;
> -
> - if (unlikely(!sctx->fc))
> - return fallback_blk_enc(desc, dst, src, nbytes);
> -
> - blkcipher_walk_init(&walk, dst, src, nbytes);
> - return ctr_aes_crypt(desc, 0, &walk);
> -}
> -
> -static int ctr_aes_decrypt(struct blkcipher_desc *desc,
> - struct scatterlist *dst, struct scatterlist *src,
> - unsigned int nbytes)
> -{
> - struct s390_aes_ctx *sctx = crypto_blkcipher_ctx(desc->tfm);
> - struct blkcipher_walk walk;
> -
> - if (unlikely(!sctx->fc))
> - return fallback_blk_dec(desc, dst, src, nbytes);
> -
> - blkcipher_walk_init(&walk, dst, src, nbytes);
> - return ctr_aes_crypt(desc, CPACF_DECRYPT, &walk);
> -}
> -
> -static struct crypto_alg ctr_aes_alg = {
> - .cra_name = "ctr(aes)",
> - .cra_driver_name = "ctr-aes-s390",
> - .cra_priority = 402, /* ecb-aes-s390 + 1 */
> - .cra_flags = CRYPTO_ALG_TYPE_BLKCIPHER |
> - CRYPTO_ALG_NEED_FALLBACK,
> - .cra_blocksize = 1,
> - .cra_ctxsize = sizeof(struct s390_aes_ctx),
> - .cra_type = &crypto_blkcipher_type,
> - .cra_module = THIS_MODULE,
> - .cra_init = fallback_init_blk,
> - .cra_exit = fallback_exit_blk,
> - .cra_u = {
> - .blkcipher = {
> - .min_keysize = AES_MIN_KEY_SIZE,
> - .max_keysize = AES_MAX_KEY_SIZE,
> - .ivsize = AES_BLOCK_SIZE,
> - .setkey = ctr_aes_set_key,
> - .encrypt = ctr_aes_encrypt,
> - .decrypt = ctr_aes_decrypt,
> - }
> - }
> +static struct skcipher_alg ctr_aes_alg = {
> + .base.cra_name = "ctr(aes)",
> + .base.cra_driver_name = "ctr-aes-s390",
> + .base.cra_priority = 402, /* ecb-aes-s390 + 1 */
> + .base.cra_flags = CRYPTO_ALG_NEED_FALLBACK,
> + .base.cra_blocksize = 1,
> + .base.cra_ctxsize = sizeof(struct s390_aes_ctx),
> + .base.cra_module = THIS_MODULE,
> + .init = fallback_init_skcipher,
> + .exit = fallback_exit_skcipher,
> + .min_keysize = AES_MIN_KEY_SIZE,
> + .max_keysize = AES_MAX_KEY_SIZE,
> + .ivsize = AES_BLOCK_SIZE,
> + .setkey = ctr_aes_set_key,
> + .encrypt = ctr_aes_crypt,
> + .decrypt = ctr_aes_crypt,
> + .chunksize = AES_BLOCK_SIZE,
> };
>
> static int gcm_aes_setkey(struct crypto_aead *tfm, const u8 *key,
> @@ -1116,24 +971,27 @@ static struct aead_alg gcm_aes_aead = {
> },
> };
>
> -static struct crypto_alg *aes_s390_algs_ptr[5];
> -static int aes_s390_algs_num;
> +static struct crypto_alg *aes_s390_alg;
> +static struct skcipher_alg *aes_s390_skcipher_algs[4];
> +static int aes_s390_skciphers_num;
> static struct aead_alg *aes_s390_aead_alg;
>
> -static int aes_s390_register_alg(struct crypto_alg *alg)
> +static int aes_s390_register_skcipher(struct skcipher_alg *alg)
> {
> int ret;
>
> - ret = crypto_register_alg(alg);
> + ret = crypto_register_skcipher(alg);
> if (!ret)
> - aes_s390_algs_ptr[aes_s390_algs_num++] = alg;
> + aes_s390_skcipher_algs[aes_s390_skciphers_num++] = alg;
> return ret;
> }
>
> static void aes_s390_fini(void)
> {
> - while (aes_s390_algs_num--)
> - crypto_unregister_alg(aes_s390_algs_ptr[aes_s390_algs_num]);
> + if (aes_s390_alg)
> + crypto_unregister_alg(aes_s390_alg);
> + while (aes_s390_skciphers_num--)
> + crypto_unregister_skcipher(aes_s390_skcipher_algs[aes_s390_skciphers_num]);
> if (ctrblk)
> free_page((unsigned long) ctrblk);
>
> @@ -1154,10 +1012,11 @@ static int __init aes_s390_init(void)
> if (cpacf_test_func(&km_functions, CPACF_KM_AES_128) ||
> cpacf_test_func(&km_functions, CPACF_KM_AES_192) ||
> cpacf_test_func(&km_functions, CPACF_KM_AES_256)) {
> - ret = aes_s390_register_alg(&aes_alg);
> + ret = crypto_register_alg(&aes_alg);
> if (ret)
> goto out_err;
> - ret = aes_s390_register_alg(&ecb_aes_alg);
> + aes_s390_alg = &aes_alg;
> + ret = aes_s390_register_skcipher(&ecb_aes_alg);
> if (ret)
> goto out_err;
> }
> @@ -1165,14 +1024,14 @@ static int __init aes_s390_init(void)
> if (cpacf_test_func(&kmc_functions, CPACF_KMC_AES_128) ||
> cpacf_test_func(&kmc_functions, CPACF_KMC_AES_192) ||
> cpacf_test_func(&kmc_functions, CPACF_KMC_AES_256)) {
> - ret = aes_s390_register_alg(&cbc_aes_alg);
> + ret = aes_s390_register_skcipher(&cbc_aes_alg);
> if (ret)
> goto out_err;
> }
>
> if (cpacf_test_func(&km_functions, CPACF_KM_XTS_128) ||
> cpacf_test_func(&km_functions, CPACF_KM_XTS_256)) {
> - ret = aes_s390_register_alg(&xts_aes_alg);
> + ret = aes_s390_register_skcipher(&xts_aes_alg);
> if (ret)
> goto out_err;
> }
> @@ -1185,7 +1044,7 @@ static int __init aes_s390_init(void)
> ret = -ENOMEM;
> goto out_err;
> }
> - ret = aes_s390_register_alg(&ctr_aes_alg);
> + ret = aes_s390_register_skcipher(&ctr_aes_alg);
> if (ret)
> goto out_err;
> }

Tested with the extended selftests and my own tests via the AF_ALG interface; it works.
Thanks for this great work.
Reviewed-by: Harald Freudenberger <[email protected]>
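
For anyone wanting to reproduce such a test from user space, a minimal, untested
AF_ALG sketch for an skcipher such as "cbc(aes)" follows; the key, IV and the single
plaintext block are placeholder values and all error handling is omitted. It only
illustrates the socket/bind/setsockopt/accept/sendmsg flow used for this kind of test.

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <linux/if_alg.h>

#ifndef SOL_ALG
#define SOL_ALG 279
#endif

int main(void)
{
	struct sockaddr_alg sa = {
		.salg_family = AF_ALG,
		.salg_type   = "skcipher",
		.salg_name   = "cbc(aes)",	/* generic name; resolves to cbc-aes-s390 when available */
	};
	unsigned char key[16] = { 0 };			/* placeholder all-zero key */
	unsigned char iv[16]  = { 0 };			/* placeholder all-zero IV */
	unsigned char buf[16] = "0123456789abcde";	/* one block of plaintext */
	char cbuf[CMSG_SPACE(sizeof(__u32)) +
		  CMSG_SPACE(sizeof(struct af_alg_iv) + sizeof(iv))] = { 0 };
	struct iovec iov = { .iov_base = buf, .iov_len = sizeof(buf) };
	struct msghdr msg = {
		.msg_control    = cbuf,
		.msg_controllen = sizeof(cbuf),
		.msg_iov        = &iov,
		.msg_iovlen     = 1,
	};
	struct cmsghdr *cmsg;
	struct af_alg_iv *algiv;
	int tfmfd, opfd, i;

	tfmfd = socket(AF_ALG, SOCK_SEQPACKET, 0);
	bind(tfmfd, (struct sockaddr *)&sa, sizeof(sa));
	setsockopt(tfmfd, SOL_ALG, ALG_SET_KEY, key, sizeof(key));
	opfd = accept(tfmfd, NULL, 0);

	/* first cmsg: select the encrypt operation */
	cmsg = CMSG_FIRSTHDR(&msg);
	cmsg->cmsg_level = SOL_ALG;
	cmsg->cmsg_type  = ALG_SET_OP;
	cmsg->cmsg_len   = CMSG_LEN(sizeof(__u32));
	*(__u32 *)CMSG_DATA(cmsg) = ALG_OP_ENCRYPT;

	/* second cmsg: pass the IV */
	cmsg = CMSG_NXTHDR(&msg, cmsg);
	cmsg->cmsg_level = SOL_ALG;
	cmsg->cmsg_type  = ALG_SET_IV;
	cmsg->cmsg_len   = CMSG_LEN(sizeof(*algiv) + sizeof(iv));
	algiv = (struct af_alg_iv *)CMSG_DATA(cmsg);
	algiv->ivlen = sizeof(iv);
	memcpy(algiv->iv, iv, sizeof(iv));

	sendmsg(opfd, &msg, 0);
	read(opfd, buf, sizeof(buf));		/* buf now holds the ciphertext */

	for (i = 0; i < (int)sizeof(buf); i++)
		printf("%02x", buf[i]);
	printf("\n");

	close(opfd);
	close(tfmfd);
	return 0;
}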

2019-10-15 12:13:28

by Harald Freudenberger

[permalink] [raw]
Subject: Re: [RFT PATCH 2/3] crypto: s390/paes - convert to skcipher API

On 12.10.19 22:18, Eric Biggers wrote:
> From: Eric Biggers <[email protected]>
>
> Convert the glue code for the S390 CPACF protected key implementations
> of AES-ECB, AES-CBC, AES-XTS, and AES-CTR from the deprecated
> "blkcipher" API to the "skcipher" API. This is needed in order for the
> blkcipher API to be removed.
>
> Note: I made CTR use the same function for encryption and decryption,
> since CTR encryption and decryption are identical.
>
> Signed-off-by: Eric Biggers <[email protected]>
> ---
> arch/s390/crypto/paes_s390.c | 414 +++++++++++++++--------------------
> 1 file changed, 174 insertions(+), 240 deletions(-)
>
> diff --git a/arch/s390/crypto/paes_s390.c b/arch/s390/crypto/paes_s390.c
> index 6184dceed340..c7119c617b6e 100644
> --- a/arch/s390/crypto/paes_s390.c
> +++ b/arch/s390/crypto/paes_s390.c
> @@ -21,6 +21,7 @@
> #include <linux/cpufeature.h>
> #include <linux/init.h>
> #include <linux/spinlock.h>
> +#include <crypto/internal/skcipher.h>
> #include <crypto/xts.h>
> #include <asm/cpacf.h>
> #include <asm/pkey.h>
> @@ -123,27 +124,27 @@ static int __paes_set_key(struct s390_paes_ctx *ctx)
> return ctx->fc ? 0 : -EINVAL;
> }
>
> -static int ecb_paes_init(struct crypto_tfm *tfm)
> +static int ecb_paes_init(struct crypto_skcipher *tfm)
> {
> - struct s390_paes_ctx *ctx = crypto_tfm_ctx(tfm);
> + struct s390_paes_ctx *ctx = crypto_skcipher_ctx(tfm);
>
> ctx->kb.key = NULL;
>
> return 0;
> }
>
> -static void ecb_paes_exit(struct crypto_tfm *tfm)
> +static void ecb_paes_exit(struct crypto_skcipher *tfm)
> {
> - struct s390_paes_ctx *ctx = crypto_tfm_ctx(tfm);
> + struct s390_paes_ctx *ctx = crypto_skcipher_ctx(tfm);
>
> _free_kb_keybuf(&ctx->kb);
> }
>
> -static int ecb_paes_set_key(struct crypto_tfm *tfm, const u8 *in_key,
> +static int ecb_paes_set_key(struct crypto_skcipher *tfm, const u8 *in_key,
> unsigned int key_len)
> {
> int rc;
> - struct s390_paes_ctx *ctx = crypto_tfm_ctx(tfm);
> + struct s390_paes_ctx *ctx = crypto_skcipher_ctx(tfm);
>
> _free_kb_keybuf(&ctx->kb);
> rc = _copy_key_to_kb(&ctx->kb, in_key, key_len);
> @@ -151,91 +152,75 @@ static int ecb_paes_set_key(struct crypto_tfm *tfm, const u8 *in_key,
> return rc;
>
> if (__paes_set_key(ctx)) {
> - tfm->crt_flags |= CRYPTO_TFM_RES_BAD_KEY_LEN;
> + crypto_skcipher_set_flags(tfm, CRYPTO_TFM_RES_BAD_KEY_LEN);
> return -EINVAL;
> }
> return 0;
> }
>
> -static int ecb_paes_crypt(struct blkcipher_desc *desc,
> - unsigned long modifier,
> - struct blkcipher_walk *walk)
> +static int ecb_paes_crypt(struct skcipher_request *req, unsigned long modifier)
> {
> - struct s390_paes_ctx *ctx = crypto_blkcipher_ctx(desc->tfm);
> + struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
> + struct s390_paes_ctx *ctx = crypto_skcipher_ctx(tfm);
> + struct skcipher_walk walk;
> unsigned int nbytes, n, k;
> int ret;
>
> - ret = blkcipher_walk_virt(desc, walk);
> - while ((nbytes = walk->nbytes) >= AES_BLOCK_SIZE) {
> + ret = skcipher_walk_virt(&walk, req, false);
> + while ((nbytes = walk.nbytes) != 0) {
> /* only use complete blocks */
> n = nbytes & ~(AES_BLOCK_SIZE - 1);
> k = cpacf_km(ctx->fc | modifier, ctx->pk.protkey,
> - walk->dst.virt.addr, walk->src.virt.addr, n);
> + walk.dst.virt.addr, walk.src.virt.addr, n);
> if (k)
> - ret = blkcipher_walk_done(desc, walk, nbytes - k);
> + ret = skcipher_walk_done(&walk, nbytes - k);
> if (k < n) {
> if (__paes_set_key(ctx) != 0)
> - return blkcipher_walk_done(desc, walk, -EIO);
> + return skcipher_walk_done(&walk, -EIO);
> }
> }
> return ret;
> }
>
> -static int ecb_paes_encrypt(struct blkcipher_desc *desc,
> - struct scatterlist *dst, struct scatterlist *src,
> - unsigned int nbytes)
> +static int ecb_paes_encrypt(struct skcipher_request *req)
> {
> - struct blkcipher_walk walk;
> -
> - blkcipher_walk_init(&walk, dst, src, nbytes);
> - return ecb_paes_crypt(desc, CPACF_ENCRYPT, &walk);
> + return ecb_paes_crypt(req, 0);
> }
>
> -static int ecb_paes_decrypt(struct blkcipher_desc *desc,
> - struct scatterlist *dst, struct scatterlist *src,
> - unsigned int nbytes)
> +static int ecb_paes_decrypt(struct skcipher_request *req)
> {
> - struct blkcipher_walk walk;
> -
> - blkcipher_walk_init(&walk, dst, src, nbytes);
> - return ecb_paes_crypt(desc, CPACF_DECRYPT, &walk);
> + return ecb_paes_crypt(req, CPACF_DECRYPT);
> }
>
> -static struct crypto_alg ecb_paes_alg = {
> - .cra_name = "ecb(paes)",
> - .cra_driver_name = "ecb-paes-s390",
> - .cra_priority = 401, /* combo: aes + ecb + 1 */
> - .cra_flags = CRYPTO_ALG_TYPE_BLKCIPHER,
> - .cra_blocksize = AES_BLOCK_SIZE,
> - .cra_ctxsize = sizeof(struct s390_paes_ctx),
> - .cra_type = &crypto_blkcipher_type,
> - .cra_module = THIS_MODULE,
> - .cra_list = LIST_HEAD_INIT(ecb_paes_alg.cra_list),
> - .cra_init = ecb_paes_init,
> - .cra_exit = ecb_paes_exit,
> - .cra_u = {
> - .blkcipher = {
> - .min_keysize = PAES_MIN_KEYSIZE,
> - .max_keysize = PAES_MAX_KEYSIZE,
> - .setkey = ecb_paes_set_key,
> - .encrypt = ecb_paes_encrypt,
> - .decrypt = ecb_paes_decrypt,
> - }
> - }
> +static struct skcipher_alg ecb_paes_alg = {
> + .base.cra_name = "ecb(paes)",
> + .base.cra_driver_name = "ecb-paes-s390",
> + .base.cra_priority = 401, /* combo: aes + ecb + 1 */
> + .base.cra_blocksize = AES_BLOCK_SIZE,
> + .base.cra_ctxsize = sizeof(struct s390_paes_ctx),
> + .base.cra_module = THIS_MODULE,
> + .base.cra_list = LIST_HEAD_INIT(ecb_paes_alg.base.cra_list),
> + .init = ecb_paes_init,
> + .exit = ecb_paes_exit,
> + .min_keysize = PAES_MIN_KEYSIZE,
> + .max_keysize = PAES_MAX_KEYSIZE,
> + .setkey = ecb_paes_set_key,
> + .encrypt = ecb_paes_encrypt,
> + .decrypt = ecb_paes_decrypt,
> };
>
> -static int cbc_paes_init(struct crypto_tfm *tfm)
> +static int cbc_paes_init(struct crypto_skcipher *tfm)
> {
> - struct s390_paes_ctx *ctx = crypto_tfm_ctx(tfm);
> + struct s390_paes_ctx *ctx = crypto_skcipher_ctx(tfm);
>
> ctx->kb.key = NULL;
>
> return 0;
> }
>
> -static void cbc_paes_exit(struct crypto_tfm *tfm)
> +static void cbc_paes_exit(struct crypto_skcipher *tfm)
> {
> - struct s390_paes_ctx *ctx = crypto_tfm_ctx(tfm);
> + struct s390_paes_ctx *ctx = crypto_skcipher_ctx(tfm);
>
> _free_kb_keybuf(&ctx->kb);
> }
> @@ -258,11 +243,11 @@ static int __cbc_paes_set_key(struct s390_paes_ctx *ctx)
> return ctx->fc ? 0 : -EINVAL;
> }
>
> -static int cbc_paes_set_key(struct crypto_tfm *tfm, const u8 *in_key,
> +static int cbc_paes_set_key(struct crypto_skcipher *tfm, const u8 *in_key,
> unsigned int key_len)
> {
> int rc;
> - struct s390_paes_ctx *ctx = crypto_tfm_ctx(tfm);
> + struct s390_paes_ctx *ctx = crypto_skcipher_ctx(tfm);
>
> _free_kb_keybuf(&ctx->kb);
> rc = _copy_key_to_kb(&ctx->kb, in_key, key_len);
> @@ -270,16 +255,17 @@ static int cbc_paes_set_key(struct crypto_tfm *tfm, const u8 *in_key,
> return rc;
>
> if (__cbc_paes_set_key(ctx)) {
> - tfm->crt_flags |= CRYPTO_TFM_RES_BAD_KEY_LEN;
> + crypto_skcipher_set_flags(tfm, CRYPTO_TFM_RES_BAD_KEY_LEN);
> return -EINVAL;
> }
> return 0;
> }
>
> -static int cbc_paes_crypt(struct blkcipher_desc *desc, unsigned long modifier,
> - struct blkcipher_walk *walk)
> +static int cbc_paes_crypt(struct skcipher_request *req, unsigned long modifier)
> {
> - struct s390_paes_ctx *ctx = crypto_blkcipher_ctx(desc->tfm);
> + struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
> + struct s390_paes_ctx *ctx = crypto_skcipher_ctx(tfm);
> + struct skcipher_walk walk;
> unsigned int nbytes, n, k;
> int ret;
> struct {
> @@ -287,73 +273,60 @@ static int cbc_paes_crypt(struct blkcipher_desc *desc, unsigned long modifier,
> u8 key[MAXPROTKEYSIZE];
> } param;
>
> - ret = blkcipher_walk_virt(desc, walk);
> - memcpy(param.iv, walk->iv, AES_BLOCK_SIZE);
> + ret = skcipher_walk_virt(&walk, req, false);
> + if (ret)
> + return ret;
> + memcpy(param.iv, walk.iv, AES_BLOCK_SIZE);
> memcpy(param.key, ctx->pk.protkey, MAXPROTKEYSIZE);
> - while ((nbytes = walk->nbytes) >= AES_BLOCK_SIZE) {
> + while ((nbytes = walk.nbytes) != 0) {
> /* only use complete blocks */
> n = nbytes & ~(AES_BLOCK_SIZE - 1);
> k = cpacf_kmc(ctx->fc | modifier, &param,
> - walk->dst.virt.addr, walk->src.virt.addr, n);
> - if (k)
> - ret = blkcipher_walk_done(desc, walk, nbytes - k);
> + walk.dst.virt.addr, walk.src.virt.addr, n);
> + if (k) {
> + memcpy(walk.iv, param.iv, AES_BLOCK_SIZE);
> + ret = skcipher_walk_done(&walk, nbytes - k);
> + }
> if (k < n) {
> if (__cbc_paes_set_key(ctx) != 0)
> - return blkcipher_walk_done(desc, walk, -EIO);
> + return skcipher_walk_done(&walk, -EIO);
> memcpy(param.key, ctx->pk.protkey, MAXPROTKEYSIZE);
> }
> }
> - memcpy(walk->iv, param.iv, AES_BLOCK_SIZE);
> return ret;
> }
>
> -static int cbc_paes_encrypt(struct blkcipher_desc *desc,
> - struct scatterlist *dst, struct scatterlist *src,
> - unsigned int nbytes)
> +static int cbc_paes_encrypt(struct skcipher_request *req)
> {
> - struct blkcipher_walk walk;
> -
> - blkcipher_walk_init(&walk, dst, src, nbytes);
> - return cbc_paes_crypt(desc, 0, &walk);
> + return cbc_paes_crypt(req, 0);
> }
>
> -static int cbc_paes_decrypt(struct blkcipher_desc *desc,
> - struct scatterlist *dst, struct scatterlist *src,
> - unsigned int nbytes)
> +static int cbc_paes_decrypt(struct skcipher_request *req)
> {
> - struct blkcipher_walk walk;
> -
> - blkcipher_walk_init(&walk, dst, src, nbytes);
> - return cbc_paes_crypt(desc, CPACF_DECRYPT, &walk);
> + return cbc_paes_crypt(req, CPACF_DECRYPT);
> }
>
> -static struct crypto_alg cbc_paes_alg = {
> - .cra_name = "cbc(paes)",
> - .cra_driver_name = "cbc-paes-s390",
> - .cra_priority = 402, /* ecb-paes-s390 + 1 */
> - .cra_flags = CRYPTO_ALG_TYPE_BLKCIPHER,
> - .cra_blocksize = AES_BLOCK_SIZE,
> - .cra_ctxsize = sizeof(struct s390_paes_ctx),
> - .cra_type = &crypto_blkcipher_type,
> - .cra_module = THIS_MODULE,
> - .cra_list = LIST_HEAD_INIT(cbc_paes_alg.cra_list),
> - .cra_init = cbc_paes_init,
> - .cra_exit = cbc_paes_exit,
> - .cra_u = {
> - .blkcipher = {
> - .min_keysize = PAES_MIN_KEYSIZE,
> - .max_keysize = PAES_MAX_KEYSIZE,
> - .ivsize = AES_BLOCK_SIZE,
> - .setkey = cbc_paes_set_key,
> - .encrypt = cbc_paes_encrypt,
> - .decrypt = cbc_paes_decrypt,
> - }
> - }
> +static struct skcipher_alg cbc_paes_alg = {
> + .base.cra_name = "cbc(paes)",
> + .base.cra_driver_name = "cbc-paes-s390",
> + .base.cra_priority = 402, /* ecb-paes-s390 + 1 */
> + .base.cra_blocksize = AES_BLOCK_SIZE,
> + .base.cra_ctxsize = sizeof(struct s390_paes_ctx),
> + .base.cra_module = THIS_MODULE,
> + .base.cra_list = LIST_HEAD_INIT(cbc_paes_alg.base.cra_list),
> + .init = cbc_paes_init,
> + .exit = cbc_paes_exit,
> + .min_keysize = PAES_MIN_KEYSIZE,
> + .max_keysize = PAES_MAX_KEYSIZE,
> + .ivsize = AES_BLOCK_SIZE,
> + .setkey = cbc_paes_set_key,
> + .encrypt = cbc_paes_encrypt,
> + .decrypt = cbc_paes_decrypt,
> };
>
> -static int xts_paes_init(struct crypto_tfm *tfm)
> +static int xts_paes_init(struct crypto_skcipher *tfm)
> {
> - struct s390_pxts_ctx *ctx = crypto_tfm_ctx(tfm);
> + struct s390_pxts_ctx *ctx = crypto_skcipher_ctx(tfm);
>
> ctx->kb[0].key = NULL;
> ctx->kb[1].key = NULL;
> @@ -361,9 +334,9 @@ static int xts_paes_init(struct crypto_tfm *tfm)
> return 0;
> }
>
> -static void xts_paes_exit(struct crypto_tfm *tfm)
> +static void xts_paes_exit(struct crypto_skcipher *tfm)
> {
> - struct s390_pxts_ctx *ctx = crypto_tfm_ctx(tfm);
> + struct s390_pxts_ctx *ctx = crypto_skcipher_ctx(tfm);
>
> _free_kb_keybuf(&ctx->kb[0]);
> _free_kb_keybuf(&ctx->kb[1]);
> @@ -391,11 +364,11 @@ static int __xts_paes_set_key(struct s390_pxts_ctx *ctx)
> return ctx->fc ? 0 : -EINVAL;
> }
>
> -static int xts_paes_set_key(struct crypto_tfm *tfm, const u8 *in_key,
> +static int xts_paes_set_key(struct crypto_skcipher *tfm, const u8 *in_key,
> unsigned int xts_key_len)
> {
> int rc;
> - struct s390_pxts_ctx *ctx = crypto_tfm_ctx(tfm);
> + struct s390_pxts_ctx *ctx = crypto_skcipher_ctx(tfm);
> u8 ckey[2 * AES_MAX_KEY_SIZE];
> unsigned int ckey_len, key_len;
>
> @@ -414,7 +387,7 @@ static int xts_paes_set_key(struct crypto_tfm *tfm, const u8 *in_key,
> return rc;
>
> if (__xts_paes_set_key(ctx)) {
> - tfm->crt_flags |= CRYPTO_TFM_RES_BAD_KEY_LEN;
> + crypto_skcipher_set_flags(tfm, CRYPTO_TFM_RES_BAD_KEY_LEN);
> return -EINVAL;
> }
>
> @@ -427,13 +400,14 @@ static int xts_paes_set_key(struct crypto_tfm *tfm, const u8 *in_key,
> AES_KEYSIZE_128 : AES_KEYSIZE_256;
> memcpy(ckey, ctx->pk[0].protkey, ckey_len);
> memcpy(ckey + ckey_len, ctx->pk[1].protkey, ckey_len);
> - return xts_check_key(tfm, ckey, 2*ckey_len);
> + return xts_verify_key(tfm, ckey, 2*ckey_len);
> }
>
> -static int xts_paes_crypt(struct blkcipher_desc *desc, unsigned long modifier,
> - struct blkcipher_walk *walk)
> +static int xts_paes_crypt(struct skcipher_request *req, unsigned long modifier)
> {
> - struct s390_pxts_ctx *ctx = crypto_blkcipher_ctx(desc->tfm);
> + struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
> + struct s390_pxts_ctx *ctx = crypto_skcipher_ctx(tfm);
> + struct skcipher_walk walk;
> unsigned int keylen, offset, nbytes, n, k;
> int ret;
> struct {
> @@ -448,90 +422,76 @@ static int xts_paes_crypt(struct blkcipher_desc *desc, unsigned long modifier,
> u8 init[16];
> } xts_param;
>
> - ret = blkcipher_walk_virt(desc, walk);
> + ret = skcipher_walk_virt(&walk, req, false);
> + if (ret)
> + return ret;
> keylen = (ctx->pk[0].type == PKEY_KEYTYPE_AES_128) ? 48 : 64;
> offset = (ctx->pk[0].type == PKEY_KEYTYPE_AES_128) ? 16 : 0;
> retry:
> memset(&pcc_param, 0, sizeof(pcc_param));
> - memcpy(pcc_param.tweak, walk->iv, sizeof(pcc_param.tweak));
> + memcpy(pcc_param.tweak, walk.iv, sizeof(pcc_param.tweak));
> memcpy(pcc_param.key + offset, ctx->pk[1].protkey, keylen);
> cpacf_pcc(ctx->fc, pcc_param.key + offset);
>
> memcpy(xts_param.key + offset, ctx->pk[0].protkey, keylen);
> memcpy(xts_param.init, pcc_param.xts, 16);
>
> - while ((nbytes = walk->nbytes) >= AES_BLOCK_SIZE) {
> + while ((nbytes = walk.nbytes) != 0) {
> /* only use complete blocks */
> n = nbytes & ~(AES_BLOCK_SIZE - 1);
> k = cpacf_km(ctx->fc | modifier, xts_param.key + offset,
> - walk->dst.virt.addr, walk->src.virt.addr, n);
> + walk.dst.virt.addr, walk.src.virt.addr, n);
> if (k)
> - ret = blkcipher_walk_done(desc, walk, nbytes - k);
> + ret = skcipher_walk_done(&walk, nbytes - k);
> if (k < n) {
> if (__xts_paes_set_key(ctx) != 0)
> - return blkcipher_walk_done(desc, walk, -EIO);
> + return skcipher_walk_done(&walk, -EIO);
> goto retry;
> }
> }
> return ret;
> }
>
> -static int xts_paes_encrypt(struct blkcipher_desc *desc,
> - struct scatterlist *dst, struct scatterlist *src,
> - unsigned int nbytes)
> +static int xts_paes_encrypt(struct skcipher_request *req)
> {
> - struct blkcipher_walk walk;
> -
> - blkcipher_walk_init(&walk, dst, src, nbytes);
> - return xts_paes_crypt(desc, 0, &walk);
> + return xts_paes_crypt(req, 0);
> }
>
> -static int xts_paes_decrypt(struct blkcipher_desc *desc,
> - struct scatterlist *dst, struct scatterlist *src,
> - unsigned int nbytes)
> +static int xts_paes_decrypt(struct skcipher_request *req)
> {
> - struct blkcipher_walk walk;
> -
> - blkcipher_walk_init(&walk, dst, src, nbytes);
> - return xts_paes_crypt(desc, CPACF_DECRYPT, &walk);
> + return xts_paes_crypt(req, CPACF_DECRYPT);
> }
>
> -static struct crypto_alg xts_paes_alg = {
> - .cra_name = "xts(paes)",
> - .cra_driver_name = "xts-paes-s390",
> - .cra_priority = 402, /* ecb-paes-s390 + 1 */
> - .cra_flags = CRYPTO_ALG_TYPE_BLKCIPHER,
> - .cra_blocksize = AES_BLOCK_SIZE,
> - .cra_ctxsize = sizeof(struct s390_pxts_ctx),
> - .cra_type = &crypto_blkcipher_type,
> - .cra_module = THIS_MODULE,
> - .cra_list = LIST_HEAD_INIT(xts_paes_alg.cra_list),
> - .cra_init = xts_paes_init,
> - .cra_exit = xts_paes_exit,
> - .cra_u = {
> - .blkcipher = {
> - .min_keysize = 2 * PAES_MIN_KEYSIZE,
> - .max_keysize = 2 * PAES_MAX_KEYSIZE,
> - .ivsize = AES_BLOCK_SIZE,
> - .setkey = xts_paes_set_key,
> - .encrypt = xts_paes_encrypt,
> - .decrypt = xts_paes_decrypt,
> - }
> - }
> +static struct skcipher_alg xts_paes_alg = {
> + .base.cra_name = "xts(paes)",
> + .base.cra_driver_name = "xts-paes-s390",
> + .base.cra_priority = 402, /* ecb-paes-s390 + 1 */
> + .base.cra_blocksize = AES_BLOCK_SIZE,
> + .base.cra_ctxsize = sizeof(struct s390_pxts_ctx),
> + .base.cra_module = THIS_MODULE,
> + .base.cra_list = LIST_HEAD_INIT(xts_paes_alg.base.cra_list),
> + .init = xts_paes_init,
> + .exit = xts_paes_exit,
> + .min_keysize = 2 * PAES_MIN_KEYSIZE,
> + .max_keysize = 2 * PAES_MAX_KEYSIZE,
> + .ivsize = AES_BLOCK_SIZE,
> + .setkey = xts_paes_set_key,
> + .encrypt = xts_paes_encrypt,
> + .decrypt = xts_paes_decrypt,
> };
>
> -static int ctr_paes_init(struct crypto_tfm *tfm)
> +static int ctr_paes_init(struct crypto_skcipher *tfm)
> {
> - struct s390_paes_ctx *ctx = crypto_tfm_ctx(tfm);
> + struct s390_paes_ctx *ctx = crypto_skcipher_ctx(tfm);
>
> ctx->kb.key = NULL;
>
> return 0;
> }
>
> -static void ctr_paes_exit(struct crypto_tfm *tfm)
> +static void ctr_paes_exit(struct crypto_skcipher *tfm)
> {
> - struct s390_paes_ctx *ctx = crypto_tfm_ctx(tfm);
> + struct s390_paes_ctx *ctx = crypto_skcipher_ctx(tfm);
>
> _free_kb_keybuf(&ctx->kb);
> }
> @@ -555,11 +515,11 @@ static int __ctr_paes_set_key(struct s390_paes_ctx *ctx)
> return ctx->fc ? 0 : -EINVAL;
> }
>
> -static int ctr_paes_set_key(struct crypto_tfm *tfm, const u8 *in_key,
> +static int ctr_paes_set_key(struct crypto_skcipher *tfm, const u8 *in_key,
> unsigned int key_len)
> {
> int rc;
> - struct s390_paes_ctx *ctx = crypto_tfm_ctx(tfm);
> + struct s390_paes_ctx *ctx = crypto_skcipher_ctx(tfm);
>
> _free_kb_keybuf(&ctx->kb);
> rc = _copy_key_to_kb(&ctx->kb, in_key, key_len);
> @@ -567,7 +527,7 @@ static int ctr_paes_set_key(struct crypto_tfm *tfm, const u8 *in_key,
> return rc;
>
> if (__ctr_paes_set_key(ctx)) {
> - tfm->crt_flags |= CRYPTO_TFM_RES_BAD_KEY_LEN;
> + crypto_skcipher_set_flags(tfm, CRYPTO_TFM_RES_BAD_KEY_LEN);
> return -EINVAL;
> }
> return 0;
> @@ -588,37 +548,37 @@ static unsigned int __ctrblk_init(u8 *ctrptr, u8 *iv, unsigned int nbytes)
> return n;
> }
>
> -static int ctr_paes_crypt(struct blkcipher_desc *desc, unsigned long modifier,
> - struct blkcipher_walk *walk)
> +static int ctr_paes_crypt(struct skcipher_request *req)
> {
> - struct s390_paes_ctx *ctx = crypto_blkcipher_ctx(desc->tfm);
> + struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
> + struct s390_paes_ctx *ctx = crypto_skcipher_ctx(tfm);
> u8 buf[AES_BLOCK_SIZE], *ctrptr;
> + struct skcipher_walk walk;
> unsigned int nbytes, n, k;
> int ret, locked;
>
> locked = spin_trylock(&ctrblk_lock);
>
> - ret = blkcipher_walk_virt_block(desc, walk, AES_BLOCK_SIZE);
> - while ((nbytes = walk->nbytes) >= AES_BLOCK_SIZE) {
> + ret = skcipher_walk_virt(&walk, req, false);
> + while ((nbytes = walk.nbytes) >= AES_BLOCK_SIZE) {
> n = AES_BLOCK_SIZE;
> if (nbytes >= 2*AES_BLOCK_SIZE && locked)
> - n = __ctrblk_init(ctrblk, walk->iv, nbytes);
> - ctrptr = (n > AES_BLOCK_SIZE) ? ctrblk : walk->iv;
> - k = cpacf_kmctr(ctx->fc | modifier, ctx->pk.protkey,
> - walk->dst.virt.addr, walk->src.virt.addr,
> - n, ctrptr);
> + n = __ctrblk_init(ctrblk, walk.iv, nbytes);
> + ctrptr = (n > AES_BLOCK_SIZE) ? ctrblk : walk.iv;
> + k = cpacf_kmctr(ctx->fc, ctx->pk.protkey, walk.dst.virt.addr,
> + walk.src.virt.addr, n, ctrptr);
> if (k) {
> if (ctrptr == ctrblk)
> - memcpy(walk->iv, ctrptr + k - AES_BLOCK_SIZE,
> + memcpy(walk.iv, ctrptr + k - AES_BLOCK_SIZE,
> AES_BLOCK_SIZE);
> - crypto_inc(walk->iv, AES_BLOCK_SIZE);
> - ret = blkcipher_walk_done(desc, walk, nbytes - n);
> + crypto_inc(walk.iv, AES_BLOCK_SIZE);
> + ret = skcipher_walk_done(&walk, nbytes - n);


Looks like a bug here: it should be

	ret = skcipher_walk_done(&walk, nbytes - k);

similar to the other modes, since only k bytes were actually processed by cpacf_kmctr().
You can add this in your patch, or leave it to me to provide a separate patch.
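
For reference, an untested sketch of how that hunk would read with the suggested
nbytes - k fix applied, using the same variables as in the patch above:

		if (k) {
			if (ctrptr == ctrblk)
				memcpy(walk.iv, ctrptr + k - AES_BLOCK_SIZE,
				       AES_BLOCK_SIZE);
			crypto_inc(walk.iv, AES_BLOCK_SIZE);
			/* report only the k bytes actually processed */
			ret = skcipher_walk_done(&walk, nbytes - k);
		}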

> }
> if (k < n) {
> if (__ctr_paes_set_key(ctx) != 0) {
> if (locked)
> spin_unlock(&ctrblk_lock);
> - return blkcipher_walk_done(desc, walk, -EIO);
> + return skcipher_walk_done(&walk, -EIO);
> }
> }
> }
> @@ -629,80 +589,54 @@ static int ctr_paes_crypt(struct blkcipher_desc *desc, unsigned long modifier,
> */
> if (nbytes) {
> while (1) {
> - if (cpacf_kmctr(ctx->fc | modifier,
> - ctx->pk.protkey, buf,
> - walk->src.virt.addr, AES_BLOCK_SIZE,
> - walk->iv) == AES_BLOCK_SIZE)
> + if (cpacf_kmctr(ctx->fc, ctx->pk.protkey, buf,
> + walk.src.virt.addr, AES_BLOCK_SIZE,
> + walk.iv) == AES_BLOCK_SIZE)
> break;
> if (__ctr_paes_set_key(ctx) != 0)
> - return blkcipher_walk_done(desc, walk, -EIO);
> + return skcipher_walk_done(&walk, -EIO);
> }
> - memcpy(walk->dst.virt.addr, buf, nbytes);
> - crypto_inc(walk->iv, AES_BLOCK_SIZE);
> - ret = blkcipher_walk_done(desc, walk, 0);
> + memcpy(walk.dst.virt.addr, buf, nbytes);
> + crypto_inc(walk.iv, AES_BLOCK_SIZE);
> + ret = skcipher_walk_done(&walk, 0);
> }
>
> return ret;
> }
>
> -static int ctr_paes_encrypt(struct blkcipher_desc *desc,
> - struct scatterlist *dst, struct scatterlist *src,
> - unsigned int nbytes)
> -{
> - struct blkcipher_walk walk;
> -
> - blkcipher_walk_init(&walk, dst, src, nbytes);
> - return ctr_paes_crypt(desc, 0, &walk);
> -}
> -
> -static int ctr_paes_decrypt(struct blkcipher_desc *desc,
> - struct scatterlist *dst, struct scatterlist *src,
> - unsigned int nbytes)
> -{
> - struct blkcipher_walk walk;
> -
> - blkcipher_walk_init(&walk, dst, src, nbytes);
> - return ctr_paes_crypt(desc, CPACF_DECRYPT, &walk);
> -}
> -
> -static struct crypto_alg ctr_paes_alg = {
> - .cra_name = "ctr(paes)",
> - .cra_driver_name = "ctr-paes-s390",
> - .cra_priority = 402, /* ecb-paes-s390 + 1 */
> - .cra_flags = CRYPTO_ALG_TYPE_BLKCIPHER,
> - .cra_blocksize = 1,
> - .cra_ctxsize = sizeof(struct s390_paes_ctx),
> - .cra_type = &crypto_blkcipher_type,
> - .cra_module = THIS_MODULE,
> - .cra_list = LIST_HEAD_INIT(ctr_paes_alg.cra_list),
> - .cra_init = ctr_paes_init,
> - .cra_exit = ctr_paes_exit,
> - .cra_u = {
> - .blkcipher = {
> - .min_keysize = PAES_MIN_KEYSIZE,
> - .max_keysize = PAES_MAX_KEYSIZE,
> - .ivsize = AES_BLOCK_SIZE,
> - .setkey = ctr_paes_set_key,
> - .encrypt = ctr_paes_encrypt,
> - .decrypt = ctr_paes_decrypt,
> - }
> - }
> +static struct skcipher_alg ctr_paes_alg = {
> + .base.cra_name = "ctr(paes)",
> + .base.cra_driver_name = "ctr-paes-s390",
> + .base.cra_priority = 402, /* ecb-paes-s390 + 1 */
> + .base.cra_blocksize = 1,
> + .base.cra_ctxsize = sizeof(struct s390_paes_ctx),
> + .base.cra_module = THIS_MODULE,
> + .base.cra_list = LIST_HEAD_INIT(ctr_paes_alg.base.cra_list),
> + .init = ctr_paes_init,
> + .exit = ctr_paes_exit,
> + .min_keysize = PAES_MIN_KEYSIZE,
> + .max_keysize = PAES_MAX_KEYSIZE,
> + .ivsize = AES_BLOCK_SIZE,
> + .setkey = ctr_paes_set_key,
> + .encrypt = ctr_paes_crypt,
> + .decrypt = ctr_paes_crypt,
> + .chunksize = AES_BLOCK_SIZE,
> };
>
> -static inline void __crypto_unregister_alg(struct crypto_alg *alg)
> +static inline void __crypto_unregister_skcipher(struct skcipher_alg *alg)
> {
> - if (!list_empty(&alg->cra_list))
> - crypto_unregister_alg(alg);
> + if (!list_empty(&alg->base.cra_list))
> + crypto_unregister_skcipher(alg);
> }
>
> static void paes_s390_fini(void)
> {
> if (ctrblk)
> free_page((unsigned long) ctrblk);
> - __crypto_unregister_alg(&ctr_paes_alg);
> - __crypto_unregister_alg(&xts_paes_alg);
> - __crypto_unregister_alg(&cbc_paes_alg);
> - __crypto_unregister_alg(&ecb_paes_alg);
> + __crypto_unregister_skcipher(&ctr_paes_alg);
> + __crypto_unregister_skcipher(&xts_paes_alg);
> + __crypto_unregister_skcipher(&cbc_paes_alg);
> + __crypto_unregister_skcipher(&ecb_paes_alg);
> }
>
> static int __init paes_s390_init(void)
> @@ -717,7 +651,7 @@ static int __init paes_s390_init(void)
> if (cpacf_test_func(&km_functions, CPACF_KM_PAES_128) ||
> cpacf_test_func(&km_functions, CPACF_KM_PAES_192) ||
> cpacf_test_func(&km_functions, CPACF_KM_PAES_256)) {
> - ret = crypto_register_alg(&ecb_paes_alg);
> + ret = crypto_register_skcipher(&ecb_paes_alg);
> if (ret)
> goto out_err;
> }
> @@ -725,14 +659,14 @@ static int __init paes_s390_init(void)
> if (cpacf_test_func(&kmc_functions, CPACF_KMC_PAES_128) ||
> cpacf_test_func(&kmc_functions, CPACF_KMC_PAES_192) ||
> cpacf_test_func(&kmc_functions, CPACF_KMC_PAES_256)) {
> - ret = crypto_register_alg(&cbc_paes_alg);
> + ret = crypto_register_skcipher(&cbc_paes_alg);
> if (ret)
> goto out_err;
> }
>
> if (cpacf_test_func(&km_functions, CPACF_KM_PXTS_128) ||
> cpacf_test_func(&km_functions, CPACF_KM_PXTS_256)) {
> - ret = crypto_register_alg(&xts_paes_alg);
> + ret = crypto_register_skcipher(&xts_paes_alg);
> if (ret)
> goto out_err;
> }
> @@ -740,7 +674,7 @@ static int __init paes_s390_init(void)
> if (cpacf_test_func(&kmctr_functions, CPACF_KMCTR_PAES_128) ||
> cpacf_test_func(&kmctr_functions, CPACF_KMCTR_PAES_192) ||
> cpacf_test_func(&kmctr_functions, CPACF_KMCTR_PAES_256)) {
> - ret = crypto_register_alg(&ctr_paes_alg);
> + ret = crypto_register_skcipher(&ctr_paes_alg);
> if (ret)
> goto out_err;
> ctrblk = (u8 *) __get_free_page(GFP_KERNEL);

Tested with the extended selftests and my own tests via the AF_ALG interface; it works.
Thanks for this great work.
Reviewed-by: Harald Freudenberger <[email protected]>

2019-10-16 12:46:26

by Heiko Carstens

[permalink] [raw]
Subject: Re: [RFT PATCH 0/3] crypto: s390 - convert to skcipher API

On Sat, Oct 12, 2019 at 01:18:06PM -0700, Eric Biggers wrote:
> This series converts the glue code for the S390 CPACF implementations of
> AES, DES, and 3DES modes from the deprecated "blkcipher" API to the
> "skcipher" API. This is needed in order for the blkcipher API to be
> removed.
>
> I've compiled this patchset, and the conversion is very similar to that
> which has been done for many other crypto drivers. But I don't have the
> hardware to test it, nor is S390 CPACF supported by QEMU. So I really
> need someone with the hardware to test it. You can do so by setting:

...

> Eric Biggers (3):
> crypto: s390/aes - convert to skcipher API
> crypto: s390/paes - convert to skcipher API
> crypto: s390/des - convert to skcipher API
>
> arch/s390/crypto/aes_s390.c | 609 ++++++++++++++---------------------
> arch/s390/crypto/des_s390.c | 419 ++++++++++--------------
> arch/s390/crypto/paes_s390.c | 414 ++++++++++--------------
> 3 files changed, 580 insertions(+), 862 deletions(-)

Herbert, should these go upstream via the s390 or crypto tree?

2019-10-16 13:26:04

by Gilad Ben-Yossef

[permalink] [raw]
Subject: Re: [RFT PATCH 0/3] crypto: s390 - convert to skcipher API

On Sat, Oct 12, 2019 at 11:20 PM Eric Biggers <[email protected]> wrote:
>
> This series converts the glue code for the S390 CPACF implementations of
> AES, DES, and 3DES modes from the deprecated "blkcipher" API to the
> "skcipher" API. This is needed in order for the blkcipher API to be
> removed.
>
> I've compiled this patchset, and the conversion is very similar to that
> which has been done for many other crypto drivers. But I don't have the
> hardware to test it, nor is S390 CPACF supported by QEMU. So I really
> need someone with the hardware to test it. You can do so by setting:
>
> CONFIG_CRYPTO_HW=y
> CONFIG_ZCRYPT=y
> CONFIG_PKEY=y
> CONFIG_CRYPTO_AES_S390=y
> CONFIG_CRYPTO_PAES_S390=y
> CONFIG_CRYPTO_DES_S390=y
> # CONFIG_CRYPTO_MANAGER_DISABLE_TESTS is not set
> CONFIG_DEBUG_KERNEL=y
> CONFIG_CRYPTO_MANAGER_EXTRA_TESTS=y
> CONFIG_CRYPTO_AES=y
> CONFIG_CRYPTO_DES=y
> CONFIG_CRYPTO_CBC=y
> CONFIG_CRYPTO_CTR=y
> CONFIG_CRYPTO_ECB=y
> CONFIG_CRYPTO_XTS=y
>
> Then boot and check for crypto self-test failures by running
> 'dmesg | grep alg'.
>
> If there are test failures, please also check whether they were already
> failing prior to this patchset.
>
> This won't cover the "paes" ("protected key AES") algorithms, however,
> since those don't have self-tests. If anyone has any way to test those,
> please do so.



It is probably impracticable to test the paes algorithms, since they rely on keys
that are not accessible to the kernel and are typically tied to the specific
machine you run on.

Gilad

--
Gilad Ben-Yossef
Chief Coffee Drinker

values of β will give rise to dom!

2019-10-16 15:37:39

by Herbert Xu

[permalink] [raw]
Subject: Re: [RFT PATCH 0/3] crypto: s390 - convert to skcipher API

On Wed, Oct 16, 2019 at 10:01:03AM +0200, Heiko Carstens wrote:
> On Sat, Oct 12, 2019 at 01:18:06PM -0700, Eric Biggers wrote:
> > This series converts the glue code for the S390 CPACF implementations of
> > AES, DES, and 3DES modes from the deprecated "blkcipher" API to the
> > "skcipher" API. This is needed in order for the blkcipher API to be
> > removed.
> >
> > I've compiled this patchset, and the conversion is very similar to that
> > which has been done for many other crypto drivers. But I don't have the
> > hardware to test it, nor is S390 CPACF supported by QEMU. So I really
> > need someone with the hardware to test it. You can do so by setting:
>
> ...
>
> > Eric Biggers (3):
> > crypto: s390/aes - convert to skcipher API
> > crypto: s390/paes - convert to skcipher API
> > crypto: s390/des - convert to skcipher API
> >
> > arch/s390/crypto/aes_s390.c | 609 ++++++++++++++---------------------
> > arch/s390/crypto/des_s390.c | 419 ++++++++++--------------
> > arch/s390/crypto/paes_s390.c | 414 ++++++++++--------------
> > 3 files changed, 580 insertions(+), 862 deletions(-)
>
> Herbert, should these go upstream via the s390 or crypto tree?

It would be best to go via the crypto tree, since any future patches to remove
blkcipher/ablkcipher would depend on these.

Thanks,
--
Email: Herbert Xu <[email protected]>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt

2019-10-17 12:43:33

by Eric Biggers

[permalink] [raw]
Subject: Re: [RFT PATCH 2/3] crypto: s390/paes - convert to skcipher API

On Tue, Oct 15, 2019 at 01:31:39PM +0200, Harald Freudenberger wrote:
> On 12.10.19 22:18, Eric Biggers wrote:
> > From: Eric Biggers <[email protected]>
> >
> > Convert the glue code for the S390 CPACF protected key implementations
> > of AES-ECB, AES-CBC, AES-XTS, and AES-CTR from the deprecated
> > "blkcipher" API to the "skcipher" API. This is needed in order for the
> > blkcipher API to be removed.
> >
> > Note: I made CTR use the same function for encryption and decryption,
> > since CTR encryption and decryption are identical.
> >
> > Signed-off-by: Eric Biggers <[email protected]>
> > ---
> > arch/s390/crypto/paes_s390.c | 414 +++++++++++++++--------------------
> > 1 file changed, 174 insertions(+), 240 deletions(-)
> >
> > diff --git a/arch/s390/crypto/paes_s390.c b/arch/s390/crypto/paes_s390.c
> > index 6184dceed340..c7119c617b6e 100644
> > --- a/arch/s390/crypto/paes_s390.c
> > +++ b/arch/s390/crypto/paes_s390.c
> > @@ -21,6 +21,7 @@
> > #include <linux/cpufeature.h>
> > #include <linux/init.h>
> > #include <linux/spinlock.h>
> > +#include <crypto/internal/skcipher.h>
> > #include <crypto/xts.h>
> > #include <asm/cpacf.h>
> > #include <asm/pkey.h>
> > @@ -123,27 +124,27 @@ static int __paes_set_key(struct s390_paes_ctx *ctx)
> > return ctx->fc ? 0 : -EINVAL;
> > }
> >
> > -static int ecb_paes_init(struct crypto_tfm *tfm)
> > +static int ecb_paes_init(struct crypto_skcipher *tfm)
> > {
> > - struct s390_paes_ctx *ctx = crypto_tfm_ctx(tfm);
> > + struct s390_paes_ctx *ctx = crypto_skcipher_ctx(tfm);
> >
> > ctx->kb.key = NULL;
> >
> > return 0;
> > }
> >
> > -static void ecb_paes_exit(struct crypto_tfm *tfm)
> > +static void ecb_paes_exit(struct crypto_skcipher *tfm)
> > {
> > - struct s390_paes_ctx *ctx = crypto_tfm_ctx(tfm);
> > + struct s390_paes_ctx *ctx = crypto_skcipher_ctx(tfm);
> >
> > _free_kb_keybuf(&ctx->kb);
> > }
> >
> > -static int ecb_paes_set_key(struct crypto_tfm *tfm, const u8 *in_key,
> > +static int ecb_paes_set_key(struct crypto_skcipher *tfm, const u8 *in_key,
> > unsigned int key_len)
> > {
> > int rc;
> > - struct s390_paes_ctx *ctx = crypto_tfm_ctx(tfm);
> > + struct s390_paes_ctx *ctx = crypto_skcipher_ctx(tfm);
> >
> > _free_kb_keybuf(&ctx->kb);
> > rc = _copy_key_to_kb(&ctx->kb, in_key, key_len);
> > @@ -151,91 +152,75 @@ static int ecb_paes_set_key(struct crypto_tfm *tfm, const u8 *in_key,
> > return rc;
> >
> > if (__paes_set_key(ctx)) {
> > - tfm->crt_flags |= CRYPTO_TFM_RES_BAD_KEY_LEN;
> > + crypto_skcipher_set_flags(tfm, CRYPTO_TFM_RES_BAD_KEY_LEN);
> > return -EINVAL;
> > }
> > return 0;
> > }
> >
> > -static int ecb_paes_crypt(struct blkcipher_desc *desc,
> > - unsigned long modifier,
> > - struct blkcipher_walk *walk)
> > +static int ecb_paes_crypt(struct skcipher_request *req, unsigned long modifier)
> > {
> > - struct s390_paes_ctx *ctx = crypto_blkcipher_ctx(desc->tfm);
> > + struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
> > + struct s390_paes_ctx *ctx = crypto_skcipher_ctx(tfm);
> > + struct skcipher_walk walk;
> > unsigned int nbytes, n, k;
> > int ret;
> >
> > - ret = blkcipher_walk_virt(desc, walk);
> > - while ((nbytes = walk->nbytes) >= AES_BLOCK_SIZE) {
> > + ret = skcipher_walk_virt(&walk, req, false);
> > + while ((nbytes = walk.nbytes) != 0) {
> > /* only use complete blocks */
> > n = nbytes & ~(AES_BLOCK_SIZE - 1);
> > k = cpacf_km(ctx->fc | modifier, ctx->pk.protkey,
> > - walk->dst.virt.addr, walk->src.virt.addr, n);
> > + walk.dst.virt.addr, walk.src.virt.addr, n);
> > if (k)
> > - ret = blkcipher_walk_done(desc, walk, nbytes - k);
> > + ret = skcipher_walk_done(&walk, nbytes - k);
> > if (k < n) {
> > if (__paes_set_key(ctx) != 0)
> > - return blkcipher_walk_done(desc, walk, -EIO);
> > + return skcipher_walk_done(&walk, -EIO);
> > }
> > }
> > return ret;
> > }
> >
> > -static int ecb_paes_encrypt(struct blkcipher_desc *desc,
> > - struct scatterlist *dst, struct scatterlist *src,
> > - unsigned int nbytes)
> > +static int ecb_paes_encrypt(struct skcipher_request *req)
> > {
> > - struct blkcipher_walk walk;
> > -
> > - blkcipher_walk_init(&walk, dst, src, nbytes);
> > - return ecb_paes_crypt(desc, CPACF_ENCRYPT, &walk);
> > + return ecb_paes_crypt(req, 0);
> > }
> >
> > -static int ecb_paes_decrypt(struct blkcipher_desc *desc,
> > - struct scatterlist *dst, struct scatterlist *src,
> > - unsigned int nbytes)
> > +static int ecb_paes_decrypt(struct skcipher_request *req)
> > {
> > - struct blkcipher_walk walk;
> > -
> > - blkcipher_walk_init(&walk, dst, src, nbytes);
> > - return ecb_paes_crypt(desc, CPACF_DECRYPT, &walk);
> > + return ecb_paes_crypt(req, CPACF_DECRYPT);
> > }
> >
> > -static struct crypto_alg ecb_paes_alg = {
> > - .cra_name = "ecb(paes)",
> > - .cra_driver_name = "ecb-paes-s390",
> > - .cra_priority = 401, /* combo: aes + ecb + 1 */
> > - .cra_flags = CRYPTO_ALG_TYPE_BLKCIPHER,
> > - .cra_blocksize = AES_BLOCK_SIZE,
> > - .cra_ctxsize = sizeof(struct s390_paes_ctx),
> > - .cra_type = &crypto_blkcipher_type,
> > - .cra_module = THIS_MODULE,
> > - .cra_list = LIST_HEAD_INIT(ecb_paes_alg.cra_list),
> > - .cra_init = ecb_paes_init,
> > - .cra_exit = ecb_paes_exit,
> > - .cra_u = {
> > - .blkcipher = {
> > - .min_keysize = PAES_MIN_KEYSIZE,
> > - .max_keysize = PAES_MAX_KEYSIZE,
> > - .setkey = ecb_paes_set_key,
> > - .encrypt = ecb_paes_encrypt,
> > - .decrypt = ecb_paes_decrypt,
> > - }
> > - }
> > +static struct skcipher_alg ecb_paes_alg = {
> > + .base.cra_name = "ecb(paes)",
> > + .base.cra_driver_name = "ecb-paes-s390",
> > + .base.cra_priority = 401, /* combo: aes + ecb + 1 */
> > + .base.cra_blocksize = AES_BLOCK_SIZE,
> > + .base.cra_ctxsize = sizeof(struct s390_paes_ctx),
> > + .base.cra_module = THIS_MODULE,
> > + .base.cra_list = LIST_HEAD_INIT(ecb_paes_alg.base.cra_list),
> > + .init = ecb_paes_init,
> > + .exit = ecb_paes_exit,
> > + .min_keysize = PAES_MIN_KEYSIZE,
> > + .max_keysize = PAES_MAX_KEYSIZE,
> > + .setkey = ecb_paes_set_key,
> > + .encrypt = ecb_paes_encrypt,
> > + .decrypt = ecb_paes_decrypt,
> > };
> >
> > -static int cbc_paes_init(struct crypto_tfm *tfm)
> > +static int cbc_paes_init(struct crypto_skcipher *tfm)
> > {
> > - struct s390_paes_ctx *ctx = crypto_tfm_ctx(tfm);
> > + struct s390_paes_ctx *ctx = crypto_skcipher_ctx(tfm);
> >
> > ctx->kb.key = NULL;
> >
> > return 0;
> > }
> >
> > -static void cbc_paes_exit(struct crypto_tfm *tfm)
> > +static void cbc_paes_exit(struct crypto_skcipher *tfm)
> > {
> > - struct s390_paes_ctx *ctx = crypto_tfm_ctx(tfm);
> > + struct s390_paes_ctx *ctx = crypto_skcipher_ctx(tfm);
> >
> > _free_kb_keybuf(&ctx->kb);
> > }
> > @@ -258,11 +243,11 @@ static int __cbc_paes_set_key(struct s390_paes_ctx *ctx)
> > return ctx->fc ? 0 : -EINVAL;
> > }
> >
> > -static int cbc_paes_set_key(struct crypto_tfm *tfm, const u8 *in_key,
> > +static int cbc_paes_set_key(struct crypto_skcipher *tfm, const u8 *in_key,
> > unsigned int key_len)
> > {
> > int rc;
> > - struct s390_paes_ctx *ctx = crypto_tfm_ctx(tfm);
> > + struct s390_paes_ctx *ctx = crypto_skcipher_ctx(tfm);
> >
> > _free_kb_keybuf(&ctx->kb);
> > rc = _copy_key_to_kb(&ctx->kb, in_key, key_len);
> > @@ -270,16 +255,17 @@ static int cbc_paes_set_key(struct crypto_tfm *tfm, const u8 *in_key,
> > return rc;
> >
> > if (__cbc_paes_set_key(ctx)) {
> > - tfm->crt_flags |= CRYPTO_TFM_RES_BAD_KEY_LEN;
> > + crypto_skcipher_set_flags(tfm, CRYPTO_TFM_RES_BAD_KEY_LEN);
> > return -EINVAL;
> > }
> > return 0;
> > }
> >
> > -static int cbc_paes_crypt(struct blkcipher_desc *desc, unsigned long modifier,
> > - struct blkcipher_walk *walk)
> > +static int cbc_paes_crypt(struct skcipher_request *req, unsigned long modifier)
> > {
> > - struct s390_paes_ctx *ctx = crypto_blkcipher_ctx(desc->tfm);
> > + struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
> > + struct s390_paes_ctx *ctx = crypto_skcipher_ctx(tfm);
> > + struct skcipher_walk walk;
> > unsigned int nbytes, n, k;
> > int ret;
> > struct {
> > @@ -287,73 +273,60 @@ static int cbc_paes_crypt(struct blkcipher_desc *desc, unsigned long modifier,
> > u8 key[MAXPROTKEYSIZE];
> > } param;
> >
> > - ret = blkcipher_walk_virt(desc, walk);
> > - memcpy(param.iv, walk->iv, AES_BLOCK_SIZE);
> > + ret = skcipher_walk_virt(&walk, req, false);
> > + if (ret)
> > + return ret;
> > + memcpy(param.iv, walk.iv, AES_BLOCK_SIZE);
> > memcpy(param.key, ctx->pk.protkey, MAXPROTKEYSIZE);
> > - while ((nbytes = walk->nbytes) >= AES_BLOCK_SIZE) {
> > + while ((nbytes = walk.nbytes) != 0) {
> > /* only use complete blocks */
> > n = nbytes & ~(AES_BLOCK_SIZE - 1);
> > k = cpacf_kmc(ctx->fc | modifier, &param,
> > - walk->dst.virt.addr, walk->src.virt.addr, n);
> > - if (k)
> > - ret = blkcipher_walk_done(desc, walk, nbytes - k);
> > + walk.dst.virt.addr, walk.src.virt.addr, n);
> > + if (k) {
> > + memcpy(walk.iv, param.iv, AES_BLOCK_SIZE);
> > + ret = skcipher_walk_done(&walk, nbytes - k);
> > + }
> > if (k < n) {
> > if (__cbc_paes_set_key(ctx) != 0)
> > - return blkcipher_walk_done(desc, walk, -EIO);
> > + return skcipher_walk_done(&walk, -EIO);
> > memcpy(param.key, ctx->pk.protkey, MAXPROTKEYSIZE);
> > }
> > }
> > - memcpy(walk->iv, param.iv, AES_BLOCK_SIZE);
> > return ret;
> > }
> >
> > -static int cbc_paes_encrypt(struct blkcipher_desc *desc,
> > - struct scatterlist *dst, struct scatterlist *src,
> > - unsigned int nbytes)
> > +static int cbc_paes_encrypt(struct skcipher_request *req)
> > {
> > - struct blkcipher_walk walk;
> > -
> > - blkcipher_walk_init(&walk, dst, src, nbytes);
> > - return cbc_paes_crypt(desc, 0, &walk);
> > + return cbc_paes_crypt(req, 0);
> > }
> >
> > -static int cbc_paes_decrypt(struct blkcipher_desc *desc,
> > - struct scatterlist *dst, struct scatterlist *src,
> > - unsigned int nbytes)
> > +static int cbc_paes_decrypt(struct skcipher_request *req)
> > {
> > - struct blkcipher_walk walk;
> > -
> > - blkcipher_walk_init(&walk, dst, src, nbytes);
> > - return cbc_paes_crypt(desc, CPACF_DECRYPT, &walk);
> > + return cbc_paes_crypt(req, CPACF_DECRYPT);
> > }
> >
> > -static struct crypto_alg cbc_paes_alg = {
> > - .cra_name = "cbc(paes)",
> > - .cra_driver_name = "cbc-paes-s390",
> > - .cra_priority = 402, /* ecb-paes-s390 + 1 */
> > - .cra_flags = CRYPTO_ALG_TYPE_BLKCIPHER,
> > - .cra_blocksize = AES_BLOCK_SIZE,
> > - .cra_ctxsize = sizeof(struct s390_paes_ctx),
> > - .cra_type = &crypto_blkcipher_type,
> > - .cra_module = THIS_MODULE,
> > - .cra_list = LIST_HEAD_INIT(cbc_paes_alg.cra_list),
> > - .cra_init = cbc_paes_init,
> > - .cra_exit = cbc_paes_exit,
> > - .cra_u = {
> > - .blkcipher = {
> > - .min_keysize = PAES_MIN_KEYSIZE,
> > - .max_keysize = PAES_MAX_KEYSIZE,
> > - .ivsize = AES_BLOCK_SIZE,
> > - .setkey = cbc_paes_set_key,
> > - .encrypt = cbc_paes_encrypt,
> > - .decrypt = cbc_paes_decrypt,
> > - }
> > - }
> > +static struct skcipher_alg cbc_paes_alg = {
> > + .base.cra_name = "cbc(paes)",
> > + .base.cra_driver_name = "cbc-paes-s390",
> > + .base.cra_priority = 402, /* ecb-paes-s390 + 1 */
> > + .base.cra_blocksize = AES_BLOCK_SIZE,
> > + .base.cra_ctxsize = sizeof(struct s390_paes_ctx),
> > + .base.cra_module = THIS_MODULE,
> > + .base.cra_list = LIST_HEAD_INIT(cbc_paes_alg.base.cra_list),
> > + .init = cbc_paes_init,
> > + .exit = cbc_paes_exit,
> > + .min_keysize = PAES_MIN_KEYSIZE,
> > + .max_keysize = PAES_MAX_KEYSIZE,
> > + .ivsize = AES_BLOCK_SIZE,
> > + .setkey = cbc_paes_set_key,
> > + .encrypt = cbc_paes_encrypt,
> > + .decrypt = cbc_paes_decrypt,
> > };
> >
> > -static int xts_paes_init(struct crypto_tfm *tfm)
> > +static int xts_paes_init(struct crypto_skcipher *tfm)
> > {
> > - struct s390_pxts_ctx *ctx = crypto_tfm_ctx(tfm);
> > + struct s390_pxts_ctx *ctx = crypto_skcipher_ctx(tfm);
> >
> > ctx->kb[0].key = NULL;
> > ctx->kb[1].key = NULL;
> > @@ -361,9 +334,9 @@ static int xts_paes_init(struct crypto_tfm *tfm)
> > return 0;
> > }
> >
> > -static void xts_paes_exit(struct crypto_tfm *tfm)
> > +static void xts_paes_exit(struct crypto_skcipher *tfm)
> > {
> > - struct s390_pxts_ctx *ctx = crypto_tfm_ctx(tfm);
> > + struct s390_pxts_ctx *ctx = crypto_skcipher_ctx(tfm);
> >
> > _free_kb_keybuf(&ctx->kb[0]);
> > _free_kb_keybuf(&ctx->kb[1]);
> > @@ -391,11 +364,11 @@ static int __xts_paes_set_key(struct s390_pxts_ctx *ctx)
> > return ctx->fc ? 0 : -EINVAL;
> > }
> >
> > -static int xts_paes_set_key(struct crypto_tfm *tfm, const u8 *in_key,
> > +static int xts_paes_set_key(struct crypto_skcipher *tfm, const u8 *in_key,
> > unsigned int xts_key_len)
> > {
> > int rc;
> > - struct s390_pxts_ctx *ctx = crypto_tfm_ctx(tfm);
> > + struct s390_pxts_ctx *ctx = crypto_skcipher_ctx(tfm);
> > u8 ckey[2 * AES_MAX_KEY_SIZE];
> > unsigned int ckey_len, key_len;
> >
> > @@ -414,7 +387,7 @@ static int xts_paes_set_key(struct crypto_tfm *tfm, const u8 *in_key,
> > return rc;
> >
> > if (__xts_paes_set_key(ctx)) {
> > - tfm->crt_flags |= CRYPTO_TFM_RES_BAD_KEY_LEN;
> > + crypto_skcipher_set_flags(tfm, CRYPTO_TFM_RES_BAD_KEY_LEN);
> > return -EINVAL;
> > }
> >
> > @@ -427,13 +400,14 @@ static int xts_paes_set_key(struct crypto_tfm *tfm, const u8 *in_key,
> > AES_KEYSIZE_128 : AES_KEYSIZE_256;
> > memcpy(ckey, ctx->pk[0].protkey, ckey_len);
> > memcpy(ckey + ckey_len, ctx->pk[1].protkey, ckey_len);
> > - return xts_check_key(tfm, ckey, 2*ckey_len);
> > + return xts_verify_key(tfm, ckey, 2*ckey_len);
> > }
> >
> > -static int xts_paes_crypt(struct blkcipher_desc *desc, unsigned long modifier,
> > - struct blkcipher_walk *walk)
> > +static int xts_paes_crypt(struct skcipher_request *req, unsigned long modifier)
> > {
> > - struct s390_pxts_ctx *ctx = crypto_blkcipher_ctx(desc->tfm);
> > + struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
> > + struct s390_pxts_ctx *ctx = crypto_skcipher_ctx(tfm);
> > + struct skcipher_walk walk;
> > unsigned int keylen, offset, nbytes, n, k;
> > int ret;
> > struct {
> > @@ -448,90 +422,76 @@ static int xts_paes_crypt(struct blkcipher_desc *desc, unsigned long modifier,
> > u8 init[16];
> > } xts_param;
> >
> > - ret = blkcipher_walk_virt(desc, walk);
> > + ret = skcipher_walk_virt(&walk, req, false);
> > + if (ret)
> > + return ret;
> > keylen = (ctx->pk[0].type == PKEY_KEYTYPE_AES_128) ? 48 : 64;
> > offset = (ctx->pk[0].type == PKEY_KEYTYPE_AES_128) ? 16 : 0;
> > retry:
> > memset(&pcc_param, 0, sizeof(pcc_param));
> > - memcpy(pcc_param.tweak, walk->iv, sizeof(pcc_param.tweak));
> > + memcpy(pcc_param.tweak, walk.iv, sizeof(pcc_param.tweak));
> > memcpy(pcc_param.key + offset, ctx->pk[1].protkey, keylen);
> > cpacf_pcc(ctx->fc, pcc_param.key + offset);
> >
> > memcpy(xts_param.key + offset, ctx->pk[0].protkey, keylen);
> > memcpy(xts_param.init, pcc_param.xts, 16);
> >
> > - while ((nbytes = walk->nbytes) >= AES_BLOCK_SIZE) {
> > + while ((nbytes = walk.nbytes) != 0) {
> > /* only use complete blocks */
> > n = nbytes & ~(AES_BLOCK_SIZE - 1);
> > k = cpacf_km(ctx->fc | modifier, xts_param.key + offset,
> > - walk->dst.virt.addr, walk->src.virt.addr, n);
> > + walk.dst.virt.addr, walk.src.virt.addr, n);
> > if (k)
> > - ret = blkcipher_walk_done(desc, walk, nbytes - k);
> > + ret = skcipher_walk_done(&walk, nbytes - k);
> > if (k < n) {
> > if (__xts_paes_set_key(ctx) != 0)
> > - return blkcipher_walk_done(desc, walk, -EIO);
> > + return skcipher_walk_done(&walk, -EIO);
> > goto retry;
> > }
> > }
> > return ret;
> > }
> >
> > -static int xts_paes_encrypt(struct blkcipher_desc *desc,
> > - struct scatterlist *dst, struct scatterlist *src,
> > - unsigned int nbytes)
> > +static int xts_paes_encrypt(struct skcipher_request *req)
> > {
> > - struct blkcipher_walk walk;
> > -
> > - blkcipher_walk_init(&walk, dst, src, nbytes);
> > - return xts_paes_crypt(desc, 0, &walk);
> > + return xts_paes_crypt(req, 0);
> > }
> >
> > -static int xts_paes_decrypt(struct blkcipher_desc *desc,
> > - struct scatterlist *dst, struct scatterlist *src,
> > - unsigned int nbytes)
> > +static int xts_paes_decrypt(struct skcipher_request *req)
> > {
> > - struct blkcipher_walk walk;
> > -
> > - blkcipher_walk_init(&walk, dst, src, nbytes);
> > - return xts_paes_crypt(desc, CPACF_DECRYPT, &walk);
> > + return xts_paes_crypt(req, CPACF_DECRYPT);
> > }
> >
> > -static struct crypto_alg xts_paes_alg = {
> > - .cra_name = "xts(paes)",
> > - .cra_driver_name = "xts-paes-s390",
> > - .cra_priority = 402, /* ecb-paes-s390 + 1 */
> > - .cra_flags = CRYPTO_ALG_TYPE_BLKCIPHER,
> > - .cra_blocksize = AES_BLOCK_SIZE,
> > - .cra_ctxsize = sizeof(struct s390_pxts_ctx),
> > - .cra_type = &crypto_blkcipher_type,
> > - .cra_module = THIS_MODULE,
> > - .cra_list = LIST_HEAD_INIT(xts_paes_alg.cra_list),
> > - .cra_init = xts_paes_init,
> > - .cra_exit = xts_paes_exit,
> > - .cra_u = {
> > - .blkcipher = {
> > - .min_keysize = 2 * PAES_MIN_KEYSIZE,
> > - .max_keysize = 2 * PAES_MAX_KEYSIZE,
> > - .ivsize = AES_BLOCK_SIZE,
> > - .setkey = xts_paes_set_key,
> > - .encrypt = xts_paes_encrypt,
> > - .decrypt = xts_paes_decrypt,
> > - }
> > - }
> > +static struct skcipher_alg xts_paes_alg = {
> > + .base.cra_name = "xts(paes)",
> > + .base.cra_driver_name = "xts-paes-s390",
> > + .base.cra_priority = 402, /* ecb-paes-s390 + 1 */
> > + .base.cra_blocksize = AES_BLOCK_SIZE,
> > + .base.cra_ctxsize = sizeof(struct s390_pxts_ctx),
> > + .base.cra_module = THIS_MODULE,
> > + .base.cra_list = LIST_HEAD_INIT(xts_paes_alg.base.cra_list),
> > + .init = xts_paes_init,
> > + .exit = xts_paes_exit,
> > + .min_keysize = 2 * PAES_MIN_KEYSIZE,
> > + .max_keysize = 2 * PAES_MAX_KEYSIZE,
> > + .ivsize = AES_BLOCK_SIZE,
> > + .setkey = xts_paes_set_key,
> > + .encrypt = xts_paes_encrypt,
> > + .decrypt = xts_paes_decrypt,
> > };
> >
> > -static int ctr_paes_init(struct crypto_tfm *tfm)
> > +static int ctr_paes_init(struct crypto_skcipher *tfm)
> > {
> > - struct s390_paes_ctx *ctx = crypto_tfm_ctx(tfm);
> > + struct s390_paes_ctx *ctx = crypto_skcipher_ctx(tfm);
> >
> > ctx->kb.key = NULL;
> >
> > return 0;
> > }
> >
> > -static void ctr_paes_exit(struct crypto_tfm *tfm)
> > +static void ctr_paes_exit(struct crypto_skcipher *tfm)
> > {
> > - struct s390_paes_ctx *ctx = crypto_tfm_ctx(tfm);
> > + struct s390_paes_ctx *ctx = crypto_skcipher_ctx(tfm);
> >
> > _free_kb_keybuf(&ctx->kb);
> > }
> > @@ -555,11 +515,11 @@ static int __ctr_paes_set_key(struct s390_paes_ctx *ctx)
> > return ctx->fc ? 0 : -EINVAL;
> > }
> >
> > -static int ctr_paes_set_key(struct crypto_tfm *tfm, const u8 *in_key,
> > +static int ctr_paes_set_key(struct crypto_skcipher *tfm, const u8 *in_key,
> > unsigned int key_len)
> > {
> > int rc;
> > - struct s390_paes_ctx *ctx = crypto_tfm_ctx(tfm);
> > + struct s390_paes_ctx *ctx = crypto_skcipher_ctx(tfm);
> >
> > _free_kb_keybuf(&ctx->kb);
> > rc = _copy_key_to_kb(&ctx->kb, in_key, key_len);
> > @@ -567,7 +527,7 @@ static int ctr_paes_set_key(struct crypto_tfm *tfm, const u8 *in_key,
> > return rc;
> >
> > if (__ctr_paes_set_key(ctx)) {
> > - tfm->crt_flags |= CRYPTO_TFM_RES_BAD_KEY_LEN;
> > + crypto_skcipher_set_flags(tfm, CRYPTO_TFM_RES_BAD_KEY_LEN);
> > return -EINVAL;
> > }
> > return 0;
> > @@ -588,37 +548,37 @@ static unsigned int __ctrblk_init(u8 *ctrptr, u8 *iv, unsigned int nbytes)
> > return n;
> > }
> >
> > -static int ctr_paes_crypt(struct blkcipher_desc *desc, unsigned long modifier,
> > - struct blkcipher_walk *walk)
> > +static int ctr_paes_crypt(struct skcipher_request *req)
> > {
> > - struct s390_paes_ctx *ctx = crypto_blkcipher_ctx(desc->tfm);
> > + struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
> > + struct s390_paes_ctx *ctx = crypto_skcipher_ctx(tfm);
> > u8 buf[AES_BLOCK_SIZE], *ctrptr;
> > + struct skcipher_walk walk;
> > unsigned int nbytes, n, k;
> > int ret, locked;
> >
> > locked = spin_trylock(&ctrblk_lock);
> >
> > - ret = blkcipher_walk_virt_block(desc, walk, AES_BLOCK_SIZE);
> > - while ((nbytes = walk->nbytes) >= AES_BLOCK_SIZE) {
> > + ret = skcipher_walk_virt(&walk, req, false);
> > + while ((nbytes = walk.nbytes) >= AES_BLOCK_SIZE) {
> > n = AES_BLOCK_SIZE;
> > if (nbytes >= 2*AES_BLOCK_SIZE && locked)
> > - n = __ctrblk_init(ctrblk, walk->iv, nbytes);
> > - ctrptr = (n > AES_BLOCK_SIZE) ? ctrblk : walk->iv;
> > - k = cpacf_kmctr(ctx->fc | modifier, ctx->pk.protkey,
> > - walk->dst.virt.addr, walk->src.virt.addr,
> > - n, ctrptr);
> > + n = __ctrblk_init(ctrblk, walk.iv, nbytes);
> > + ctrptr = (n > AES_BLOCK_SIZE) ? ctrblk : walk.iv;
> > + k = cpacf_kmctr(ctx->fc, ctx->pk.protkey, walk.dst.virt.addr,
> > + walk.src.virt.addr, n, ctrptr);
> > if (k) {
> > if (ctrptr == ctrblk)
> > - memcpy(walk->iv, ctrptr + k - AES_BLOCK_SIZE,
> > + memcpy(walk.iv, ctrptr + k - AES_BLOCK_SIZE,
> > AES_BLOCK_SIZE);
> > - crypto_inc(walk->iv, AES_BLOCK_SIZE);
> > - ret = blkcipher_walk_done(desc, walk, nbytes - n);
> > + crypto_inc(walk.iv, AES_BLOCK_SIZE);
> > + ret = skcipher_walk_done(&walk, nbytes - n);
>
> Looks like a bug here. It should be
>
> ret = skcipher_walk_done(&walk, nbytes - k);
>
> similar to the other modes.
> You can add this in your patch or leave it to me to provide a separate patch.

I'm not planning to fix this since it's an existing bug, I can't test this code
myself, and the paes code is different from the regular algorithms so it's hard
to work with. So I suggest you provide a patch later.
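(Editorial note for readers following along: the one-line change Harald points at above would look roughly like the diff below - shown only as a sketch of the suggested fix, not as part of this series.)

	-			ret = skcipher_walk_done(&walk, nbytes - n);
	+			ret = skcipher_walk_done(&walk, nbytes - k);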

>
> > }
> > if (k < n) {
> > if (__ctr_paes_set_key(ctx) != 0) {
> > if (locked)
> > spin_unlock(&ctrblk_lock);
> > - return blkcipher_walk_done(desc, walk, -EIO);
> > + return skcipher_walk_done(&walk, -EIO);
> > }
> > }
> > }

Note that __ctr_paes_set_key() is modifying the tfm_ctx which can be shared
between multiple threads. So this code seems broken in other ways too.

How is "paes" tested, given that it isn't covered by the crypto subsystem's
self-tests? How do you know it isn't completely broken?

- Eric

2019-10-18 10:19:42

by Heiko Carstens

[permalink] [raw]
Subject: Re: [RFT PATCH 0/3] crypto: s390 - convert to skcipher API

On Wed, Oct 16, 2019 at 11:35:07PM +1100, Herbert Xu wrote:
> > > Eric Biggers (3):
> > > crypto: s390/aes - convert to skcipher API
> > > crypto: s390/paes - convert to skcipher API
> > > crypto: s390/des - convert to skcipher API
> > >
> > > arch/s390/crypto/aes_s390.c | 609 ++++++++++++++---------------------
> > > arch/s390/crypto/des_s390.c | 419 ++++++++++--------------
> > > arch/s390/crypto/paes_s390.c | 414 ++++++++++--------------
> > > 3 files changed, 580 insertions(+), 862 deletions(-)
> >
> > Herbert, should these go upstream via the s390 or crypto tree?
>
> It would be best to go via the crypto tree since any future patches
> to remove blkcipher/ablkcipher would depend on these patches.

Ok, fully agreed. Thanks!

2019-10-18 16:14:13

by Harald Freudenberger

[permalink] [raw]
Subject: Re: [RFT PATCH 2/3] crypto: s390/paes - convert to skcipher API

On 16.10.19 19:05, Eric Biggers wrote:
> On Tue, Oct 15, 2019 at 01:31:39PM +0200, Harald Freudenberger wrote:
>> On 12.10.19 22:18, Eric Biggers wrote:
>>> From: Eric Biggers <[email protected]>
>>>
>>> Convert the glue code for the S390 CPACF protected key implementations
>>> of AES-ECB, AES-CBC, AES-XTS, and AES-CTR from the deprecated
>>> "blkcipher" API to the "skcipher" API. This is needed in order for the
>>> blkcipher API to be removed.
>>>
>>> Note: I made CTR use the same function for encryption and decryption,
>>> since CTR encryption and decryption are identical.
>>>
>>> Signed-off-by: Eric Biggers <[email protected]>
>>> ---
>>> arch/s390/crypto/paes_s390.c | 414 +++++++++++++++--------------------
>>> 1 file changed, 174 insertions(+), 240 deletions(-)
>>>
>>> diff --git a/arch/s390/crypto/paes_s390.c b/arch/s390/crypto/paes_s390.c
>>> index 6184dceed340..c7119c617b6e 100644
>>> --- a/arch/s390/crypto/paes_s390.c
>>> +++ b/arch/s390/crypto/paes_s390.c
>>> @@ -21,6 +21,7 @@
>>> #include <linux/cpufeature.h>
>>> #include <linux/init.h>
>>> #include <linux/spinlock.h>
>>> +#include <crypto/internal/skcipher.h>
>>> #include <crypto/xts.h>
>>> #include <asm/cpacf.h>
>>> #include <asm/pkey.h>
>>> @@ -123,27 +124,27 @@ static int __paes_set_key(struct s390_paes_ctx *ctx)
>>> return ctx->fc ? 0 : -EINVAL;
>>> }
>>>
>>> -static int ecb_paes_init(struct crypto_tfm *tfm)
>>> +static int ecb_paes_init(struct crypto_skcipher *tfm)
>>> {
>>> - struct s390_paes_ctx *ctx = crypto_tfm_ctx(tfm);
>>> + struct s390_paes_ctx *ctx = crypto_skcipher_ctx(tfm);
>>>
>>> ctx->kb.key = NULL;
>>>
>>> return 0;
>>> }
>>>
>>> -static void ecb_paes_exit(struct crypto_tfm *tfm)
>>> +static void ecb_paes_exit(struct crypto_skcipher *tfm)
>>> {
>>> - struct s390_paes_ctx *ctx = crypto_tfm_ctx(tfm);
>>> + struct s390_paes_ctx *ctx = crypto_skcipher_ctx(tfm);
>>>
>>> _free_kb_keybuf(&ctx->kb);
>>> }
>>>
>>> -static int ecb_paes_set_key(struct crypto_tfm *tfm, const u8 *in_key,
>>> +static int ecb_paes_set_key(struct crypto_skcipher *tfm, const u8 *in_key,
>>> unsigned int key_len)
>>> {
>>> int rc;
>>> - struct s390_paes_ctx *ctx = crypto_tfm_ctx(tfm);
>>> + struct s390_paes_ctx *ctx = crypto_skcipher_ctx(tfm);
>>>
>>> _free_kb_keybuf(&ctx->kb);
>>> rc = _copy_key_to_kb(&ctx->kb, in_key, key_len);
>>> @@ -151,91 +152,75 @@ static int ecb_paes_set_key(struct crypto_tfm *tfm, const u8 *in_key,
>>> return rc;
>>>
>>> if (__paes_set_key(ctx)) {
>>> - tfm->crt_flags |= CRYPTO_TFM_RES_BAD_KEY_LEN;
>>> + crypto_skcipher_set_flags(tfm, CRYPTO_TFM_RES_BAD_KEY_LEN);
>>> return -EINVAL;
>>> }
>>> return 0;
>>> }
>>>
>>> -static int ecb_paes_crypt(struct blkcipher_desc *desc,
>>> - unsigned long modifier,
>>> - struct blkcipher_walk *walk)
>>> +static int ecb_paes_crypt(struct skcipher_request *req, unsigned long modifier)
>>> {
>>> - struct s390_paes_ctx *ctx = crypto_blkcipher_ctx(desc->tfm);
>>> + struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
>>> + struct s390_paes_ctx *ctx = crypto_skcipher_ctx(tfm);
>>> + struct skcipher_walk walk;
>>> unsigned int nbytes, n, k;
>>> int ret;
>>>
>>> - ret = blkcipher_walk_virt(desc, walk);
>>> - while ((nbytes = walk->nbytes) >= AES_BLOCK_SIZE) {
>>> + ret = skcipher_walk_virt(&walk, req, false);
>>> + while ((nbytes = walk.nbytes) != 0) {
>>> /* only use complete blocks */
>>> n = nbytes & ~(AES_BLOCK_SIZE - 1);
>>> k = cpacf_km(ctx->fc | modifier, ctx->pk.protkey,
>>> - walk->dst.virt.addr, walk->src.virt.addr, n);
>>> + walk.dst.virt.addr, walk.src.virt.addr, n);
>>> if (k)
>>> - ret = blkcipher_walk_done(desc, walk, nbytes - k);
>>> + ret = skcipher_walk_done(&walk, nbytes - k);
>>> if (k < n) {
>>> if (__paes_set_key(ctx) != 0)
>>> - return blkcipher_walk_done(desc, walk, -EIO);
>>> + return skcipher_walk_done(&walk, -EIO);
>>> }
>>> }
>>> return ret;
>>> }
>>>
>>> -static int ecb_paes_encrypt(struct blkcipher_desc *desc,
>>> - struct scatterlist *dst, struct scatterlist *src,
>>> - unsigned int nbytes)
>>> +static int ecb_paes_encrypt(struct skcipher_request *req)
>>> {
>>> - struct blkcipher_walk walk;
>>> -
>>> - blkcipher_walk_init(&walk, dst, src, nbytes);
>>> - return ecb_paes_crypt(desc, CPACF_ENCRYPT, &walk);
>>> + return ecb_paes_crypt(req, 0);
>>> }
>>>
>>> -static int ecb_paes_decrypt(struct blkcipher_desc *desc,
>>> - struct scatterlist *dst, struct scatterlist *src,
>>> - unsigned int nbytes)
>>> +static int ecb_paes_decrypt(struct skcipher_request *req)
>>> {
>>> - struct blkcipher_walk walk;
>>> -
>>> - blkcipher_walk_init(&walk, dst, src, nbytes);
>>> - return ecb_paes_crypt(desc, CPACF_DECRYPT, &walk);
>>> + return ecb_paes_crypt(req, CPACF_DECRYPT);
>>> }
>>>
>>> -static struct crypto_alg ecb_paes_alg = {
>>> - .cra_name = "ecb(paes)",
>>> - .cra_driver_name = "ecb-paes-s390",
>>> - .cra_priority = 401, /* combo: aes + ecb + 1 */
>>> - .cra_flags = CRYPTO_ALG_TYPE_BLKCIPHER,
>>> - .cra_blocksize = AES_BLOCK_SIZE,
>>> - .cra_ctxsize = sizeof(struct s390_paes_ctx),
>>> - .cra_type = &crypto_blkcipher_type,
>>> - .cra_module = THIS_MODULE,
>>> - .cra_list = LIST_HEAD_INIT(ecb_paes_alg.cra_list),
>>> - .cra_init = ecb_paes_init,
>>> - .cra_exit = ecb_paes_exit,
>>> - .cra_u = {
>>> - .blkcipher = {
>>> - .min_keysize = PAES_MIN_KEYSIZE,
>>> - .max_keysize = PAES_MAX_KEYSIZE,
>>> - .setkey = ecb_paes_set_key,
>>> - .encrypt = ecb_paes_encrypt,
>>> - .decrypt = ecb_paes_decrypt,
>>> - }
>>> - }
>>> +static struct skcipher_alg ecb_paes_alg = {
>>> + .base.cra_name = "ecb(paes)",
>>> + .base.cra_driver_name = "ecb-paes-s390",
>>> + .base.cra_priority = 401, /* combo: aes + ecb + 1 */
>>> + .base.cra_blocksize = AES_BLOCK_SIZE,
>>> + .base.cra_ctxsize = sizeof(struct s390_paes_ctx),
>>> + .base.cra_module = THIS_MODULE,
>>> + .base.cra_list = LIST_HEAD_INIT(ecb_paes_alg.base.cra_list),
>>> + .init = ecb_paes_init,
>>> + .exit = ecb_paes_exit,
>>> + .min_keysize = PAES_MIN_KEYSIZE,
>>> + .max_keysize = PAES_MAX_KEYSIZE,
>>> + .setkey = ecb_paes_set_key,
>>> + .encrypt = ecb_paes_encrypt,
>>> + .decrypt = ecb_paes_decrypt,
>>> };
>>>
>>> -static int cbc_paes_init(struct crypto_tfm *tfm)
>>> +static int cbc_paes_init(struct crypto_skcipher *tfm)
>>> {
>>> - struct s390_paes_ctx *ctx = crypto_tfm_ctx(tfm);
>>> + struct s390_paes_ctx *ctx = crypto_skcipher_ctx(tfm);
>>>
>>> ctx->kb.key = NULL;
>>>
>>> return 0;
>>> }
>>>
>>> -static void cbc_paes_exit(struct crypto_tfm *tfm)
>>> +static void cbc_paes_exit(struct crypto_skcipher *tfm)
>>> {
>>> - struct s390_paes_ctx *ctx = crypto_tfm_ctx(tfm);
>>> + struct s390_paes_ctx *ctx = crypto_skcipher_ctx(tfm);
>>>
>>> _free_kb_keybuf(&ctx->kb);
>>> }
>>> @@ -258,11 +243,11 @@ static int __cbc_paes_set_key(struct s390_paes_ctx *ctx)
>>> return ctx->fc ? 0 : -EINVAL;
>>> }
>>>
>>> -static int cbc_paes_set_key(struct crypto_tfm *tfm, const u8 *in_key,
>>> +static int cbc_paes_set_key(struct crypto_skcipher *tfm, const u8 *in_key,
>>> unsigned int key_len)
>>> {
>>> int rc;
>>> - struct s390_paes_ctx *ctx = crypto_tfm_ctx(tfm);
>>> + struct s390_paes_ctx *ctx = crypto_skcipher_ctx(tfm);
>>>
>>> _free_kb_keybuf(&ctx->kb);
>>> rc = _copy_key_to_kb(&ctx->kb, in_key, key_len);
>>> @@ -270,16 +255,17 @@ static int cbc_paes_set_key(struct crypto_tfm *tfm, const u8 *in_key,
>>> return rc;
>>>
>>> if (__cbc_paes_set_key(ctx)) {
>>> - tfm->crt_flags |= CRYPTO_TFM_RES_BAD_KEY_LEN;
>>> + crypto_skcipher_set_flags(tfm, CRYPTO_TFM_RES_BAD_KEY_LEN);
>>> return -EINVAL;
>>> }
>>> return 0;
>>> }
>>>
>>> -static int cbc_paes_crypt(struct blkcipher_desc *desc, unsigned long modifier,
>>> - struct blkcipher_walk *walk)
>>> +static int cbc_paes_crypt(struct skcipher_request *req, unsigned long modifier)
>>> {
>>> - struct s390_paes_ctx *ctx = crypto_blkcipher_ctx(desc->tfm);
>>> + struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
>>> + struct s390_paes_ctx *ctx = crypto_skcipher_ctx(tfm);
>>> + struct skcipher_walk walk;
>>> unsigned int nbytes, n, k;
>>> int ret;
>>> struct {
>>> @@ -287,73 +273,60 @@ static int cbc_paes_crypt(struct blkcipher_desc *desc, unsigned long modifier,
>>> u8 key[MAXPROTKEYSIZE];
>>> } param;
>>>
>>> - ret = blkcipher_walk_virt(desc, walk);
>>> - memcpy(param.iv, walk->iv, AES_BLOCK_SIZE);
>>> + ret = skcipher_walk_virt(&walk, req, false);
>>> + if (ret)
>>> + return ret;
>>> + memcpy(param.iv, walk.iv, AES_BLOCK_SIZE);
>>> memcpy(param.key, ctx->pk.protkey, MAXPROTKEYSIZE);
>>> - while ((nbytes = walk->nbytes) >= AES_BLOCK_SIZE) {
>>> + while ((nbytes = walk.nbytes) != 0) {
>>> /* only use complete blocks */
>>> n = nbytes & ~(AES_BLOCK_SIZE - 1);
>>> k = cpacf_kmc(ctx->fc | modifier, &param,
>>> - walk->dst.virt.addr, walk->src.virt.addr, n);
>>> - if (k)
>>> - ret = blkcipher_walk_done(desc, walk, nbytes - k);
>>> + walk.dst.virt.addr, walk.src.virt.addr, n);
>>> + if (k) {
>>> + memcpy(walk.iv, param.iv, AES_BLOCK_SIZE);
>>> + ret = skcipher_walk_done(&walk, nbytes - k);
>>> + }
>>> if (k < n) {
>>> if (__cbc_paes_set_key(ctx) != 0)
>>> - return blkcipher_walk_done(desc, walk, -EIO);
>>> + return skcipher_walk_done(&walk, -EIO);
>>> memcpy(param.key, ctx->pk.protkey, MAXPROTKEYSIZE);
>>> }
>>> }
>>> - memcpy(walk->iv, param.iv, AES_BLOCK_SIZE);
>>> return ret;
>>> }
>>>
>>> -static int cbc_paes_encrypt(struct blkcipher_desc *desc,
>>> - struct scatterlist *dst, struct scatterlist *src,
>>> - unsigned int nbytes)
>>> +static int cbc_paes_encrypt(struct skcipher_request *req)
>>> {
>>> - struct blkcipher_walk walk;
>>> -
>>> - blkcipher_walk_init(&walk, dst, src, nbytes);
>>> - return cbc_paes_crypt(desc, 0, &walk);
>>> + return cbc_paes_crypt(req, 0);
>>> }
>>>
>>> -static int cbc_paes_decrypt(struct blkcipher_desc *desc,
>>> - struct scatterlist *dst, struct scatterlist *src,
>>> - unsigned int nbytes)
>>> +static int cbc_paes_decrypt(struct skcipher_request *req)
>>> {
>>> - struct blkcipher_walk walk;
>>> -
>>> - blkcipher_walk_init(&walk, dst, src, nbytes);
>>> - return cbc_paes_crypt(desc, CPACF_DECRYPT, &walk);
>>> + return cbc_paes_crypt(req, CPACF_DECRYPT);
>>> }
>>>
>>> -static struct crypto_alg cbc_paes_alg = {
>>> - .cra_name = "cbc(paes)",
>>> - .cra_driver_name = "cbc-paes-s390",
>>> - .cra_priority = 402, /* ecb-paes-s390 + 1 */
>>> - .cra_flags = CRYPTO_ALG_TYPE_BLKCIPHER,
>>> - .cra_blocksize = AES_BLOCK_SIZE,
>>> - .cra_ctxsize = sizeof(struct s390_paes_ctx),
>>> - .cra_type = &crypto_blkcipher_type,
>>> - .cra_module = THIS_MODULE,
>>> - .cra_list = LIST_HEAD_INIT(cbc_paes_alg.cra_list),
>>> - .cra_init = cbc_paes_init,
>>> - .cra_exit = cbc_paes_exit,
>>> - .cra_u = {
>>> - .blkcipher = {
>>> - .min_keysize = PAES_MIN_KEYSIZE,
>>> - .max_keysize = PAES_MAX_KEYSIZE,
>>> - .ivsize = AES_BLOCK_SIZE,
>>> - .setkey = cbc_paes_set_key,
>>> - .encrypt = cbc_paes_encrypt,
>>> - .decrypt = cbc_paes_decrypt,
>>> - }
>>> - }
>>> +static struct skcipher_alg cbc_paes_alg = {
>>> + .base.cra_name = "cbc(paes)",
>>> + .base.cra_driver_name = "cbc-paes-s390",
>>> + .base.cra_priority = 402, /* ecb-paes-s390 + 1 */
>>> + .base.cra_blocksize = AES_BLOCK_SIZE,
>>> + .base.cra_ctxsize = sizeof(struct s390_paes_ctx),
>>> + .base.cra_module = THIS_MODULE,
>>> + .base.cra_list = LIST_HEAD_INIT(cbc_paes_alg.base.cra_list),
>>> + .init = cbc_paes_init,
>>> + .exit = cbc_paes_exit,
>>> + .min_keysize = PAES_MIN_KEYSIZE,
>>> + .max_keysize = PAES_MAX_KEYSIZE,
>>> + .ivsize = AES_BLOCK_SIZE,
>>> + .setkey = cbc_paes_set_key,
>>> + .encrypt = cbc_paes_encrypt,
>>> + .decrypt = cbc_paes_decrypt,
>>> };
>>>
>>> -static int xts_paes_init(struct crypto_tfm *tfm)
>>> +static int xts_paes_init(struct crypto_skcipher *tfm)
>>> {
>>> - struct s390_pxts_ctx *ctx = crypto_tfm_ctx(tfm);
>>> + struct s390_pxts_ctx *ctx = crypto_skcipher_ctx(tfm);
>>>
>>> ctx->kb[0].key = NULL;
>>> ctx->kb[1].key = NULL;
>>> @@ -361,9 +334,9 @@ static int xts_paes_init(struct crypto_tfm *tfm)
>>> return 0;
>>> }
>>>
>>> -static void xts_paes_exit(struct crypto_tfm *tfm)
>>> +static void xts_paes_exit(struct crypto_skcipher *tfm)
>>> {
>>> - struct s390_pxts_ctx *ctx = crypto_tfm_ctx(tfm);
>>> + struct s390_pxts_ctx *ctx = crypto_skcipher_ctx(tfm);
>>>
>>> _free_kb_keybuf(&ctx->kb[0]);
>>> _free_kb_keybuf(&ctx->kb[1]);
>>> @@ -391,11 +364,11 @@ static int __xts_paes_set_key(struct s390_pxts_ctx *ctx)
>>> return ctx->fc ? 0 : -EINVAL;
>>> }
>>>
>>> -static int xts_paes_set_key(struct crypto_tfm *tfm, const u8 *in_key,
>>> +static int xts_paes_set_key(struct crypto_skcipher *tfm, const u8 *in_key,
>>> unsigned int xts_key_len)
>>> {
>>> int rc;
>>> - struct s390_pxts_ctx *ctx = crypto_tfm_ctx(tfm);
>>> + struct s390_pxts_ctx *ctx = crypto_skcipher_ctx(tfm);
>>> u8 ckey[2 * AES_MAX_KEY_SIZE];
>>> unsigned int ckey_len, key_len;
>>>
>>> @@ -414,7 +387,7 @@ static int xts_paes_set_key(struct crypto_tfm *tfm, const u8 *in_key,
>>> return rc;
>>>
>>> if (__xts_paes_set_key(ctx)) {
>>> - tfm->crt_flags |= CRYPTO_TFM_RES_BAD_KEY_LEN;
>>> + crypto_skcipher_set_flags(tfm, CRYPTO_TFM_RES_BAD_KEY_LEN);
>>> return -EINVAL;
>>> }
>>>
>>> @@ -427,13 +400,14 @@ static int xts_paes_set_key(struct crypto_tfm *tfm, const u8 *in_key,
>>> AES_KEYSIZE_128 : AES_KEYSIZE_256;
>>> memcpy(ckey, ctx->pk[0].protkey, ckey_len);
>>> memcpy(ckey + ckey_len, ctx->pk[1].protkey, ckey_len);
>>> - return xts_check_key(tfm, ckey, 2*ckey_len);
>>> + return xts_verify_key(tfm, ckey, 2*ckey_len);
>>> }
>>>
>>> -static int xts_paes_crypt(struct blkcipher_desc *desc, unsigned long modifier,
>>> - struct blkcipher_walk *walk)
>>> +static int xts_paes_crypt(struct skcipher_request *req, unsigned long modifier)
>>> {
>>> - struct s390_pxts_ctx *ctx = crypto_blkcipher_ctx(desc->tfm);
>>> + struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
>>> + struct s390_pxts_ctx *ctx = crypto_skcipher_ctx(tfm);
>>> + struct skcipher_walk walk;
>>> unsigned int keylen, offset, nbytes, n, k;
>>> int ret;
>>> struct {
>>> @@ -448,90 +422,76 @@ static int xts_paes_crypt(struct blkcipher_desc *desc, unsigned long modifier,
>>> u8 init[16];
>>> } xts_param;
>>>
>>> - ret = blkcipher_walk_virt(desc, walk);
>>> + ret = skcipher_walk_virt(&walk, req, false);
>>> + if (ret)
>>> + return ret;
>>> keylen = (ctx->pk[0].type == PKEY_KEYTYPE_AES_128) ? 48 : 64;
>>> offset = (ctx->pk[0].type == PKEY_KEYTYPE_AES_128) ? 16 : 0;
>>> retry:
>>> memset(&pcc_param, 0, sizeof(pcc_param));
>>> - memcpy(pcc_param.tweak, walk->iv, sizeof(pcc_param.tweak));
>>> + memcpy(pcc_param.tweak, walk.iv, sizeof(pcc_param.tweak));
>>> memcpy(pcc_param.key + offset, ctx->pk[1].protkey, keylen);
>>> cpacf_pcc(ctx->fc, pcc_param.key + offset);
>>>
>>> memcpy(xts_param.key + offset, ctx->pk[0].protkey, keylen);
>>> memcpy(xts_param.init, pcc_param.xts, 16);
>>>
>>> - while ((nbytes = walk->nbytes) >= AES_BLOCK_SIZE) {
>>> + while ((nbytes = walk.nbytes) != 0) {
>>> /* only use complete blocks */
>>> n = nbytes & ~(AES_BLOCK_SIZE - 1);
>>> k = cpacf_km(ctx->fc | modifier, xts_param.key + offset,
>>> - walk->dst.virt.addr, walk->src.virt.addr, n);
>>> + walk.dst.virt.addr, walk.src.virt.addr, n);
>>> if (k)
>>> - ret = blkcipher_walk_done(desc, walk, nbytes - k);
>>> + ret = skcipher_walk_done(&walk, nbytes - k);
>>> if (k < n) {
>>> if (__xts_paes_set_key(ctx) != 0)
>>> - return blkcipher_walk_done(desc, walk, -EIO);
>>> + return skcipher_walk_done(&walk, -EIO);
>>> goto retry;
>>> }
>>> }
>>> return ret;
>>> }
>>>
>>> -static int xts_paes_encrypt(struct blkcipher_desc *desc,
>>> - struct scatterlist *dst, struct scatterlist *src,
>>> - unsigned int nbytes)
>>> +static int xts_paes_encrypt(struct skcipher_request *req)
>>> {
>>> - struct blkcipher_walk walk;
>>> -
>>> - blkcipher_walk_init(&walk, dst, src, nbytes);
>>> - return xts_paes_crypt(desc, 0, &walk);
>>> + return xts_paes_crypt(req, 0);
>>> }
>>>
>>> -static int xts_paes_decrypt(struct blkcipher_desc *desc,
>>> - struct scatterlist *dst, struct scatterlist *src,
>>> - unsigned int nbytes)
>>> +static int xts_paes_decrypt(struct skcipher_request *req)
>>> {
>>> - struct blkcipher_walk walk;
>>> -
>>> - blkcipher_walk_init(&walk, dst, src, nbytes);
>>> - return xts_paes_crypt(desc, CPACF_DECRYPT, &walk);
>>> + return xts_paes_crypt(req, CPACF_DECRYPT);
>>> }
>>>
>>> -static struct crypto_alg xts_paes_alg = {
>>> - .cra_name = "xts(paes)",
>>> - .cra_driver_name = "xts-paes-s390",
>>> - .cra_priority = 402, /* ecb-paes-s390 + 1 */
>>> - .cra_flags = CRYPTO_ALG_TYPE_BLKCIPHER,
>>> - .cra_blocksize = AES_BLOCK_SIZE,
>>> - .cra_ctxsize = sizeof(struct s390_pxts_ctx),
>>> - .cra_type = &crypto_blkcipher_type,
>>> - .cra_module = THIS_MODULE,
>>> - .cra_list = LIST_HEAD_INIT(xts_paes_alg.cra_list),
>>> - .cra_init = xts_paes_init,
>>> - .cra_exit = xts_paes_exit,
>>> - .cra_u = {
>>> - .blkcipher = {
>>> - .min_keysize = 2 * PAES_MIN_KEYSIZE,
>>> - .max_keysize = 2 * PAES_MAX_KEYSIZE,
>>> - .ivsize = AES_BLOCK_SIZE,
>>> - .setkey = xts_paes_set_key,
>>> - .encrypt = xts_paes_encrypt,
>>> - .decrypt = xts_paes_decrypt,
>>> - }
>>> - }
>>> +static struct skcipher_alg xts_paes_alg = {
>>> + .base.cra_name = "xts(paes)",
>>> + .base.cra_driver_name = "xts-paes-s390",
>>> + .base.cra_priority = 402, /* ecb-paes-s390 + 1 */
>>> + .base.cra_blocksize = AES_BLOCK_SIZE,
>>> + .base.cra_ctxsize = sizeof(struct s390_pxts_ctx),
>>> + .base.cra_module = THIS_MODULE,
>>> + .base.cra_list = LIST_HEAD_INIT(xts_paes_alg.base.cra_list),
>>> + .init = xts_paes_init,
>>> + .exit = xts_paes_exit,
>>> + .min_keysize = 2 * PAES_MIN_KEYSIZE,
>>> + .max_keysize = 2 * PAES_MAX_KEYSIZE,
>>> + .ivsize = AES_BLOCK_SIZE,
>>> + .setkey = xts_paes_set_key,
>>> + .encrypt = xts_paes_encrypt,
>>> + .decrypt = xts_paes_decrypt,
>>> };
>>>
>>> -static int ctr_paes_init(struct crypto_tfm *tfm)
>>> +static int ctr_paes_init(struct crypto_skcipher *tfm)
>>> {
>>> - struct s390_paes_ctx *ctx = crypto_tfm_ctx(tfm);
>>> + struct s390_paes_ctx *ctx = crypto_skcipher_ctx(tfm);
>>>
>>> ctx->kb.key = NULL;
>>>
>>> return 0;
>>> }
>>>
>>> -static void ctr_paes_exit(struct crypto_tfm *tfm)
>>> +static void ctr_paes_exit(struct crypto_skcipher *tfm)
>>> {
>>> - struct s390_paes_ctx *ctx = crypto_tfm_ctx(tfm);
>>> + struct s390_paes_ctx *ctx = crypto_skcipher_ctx(tfm);
>>>
>>> _free_kb_keybuf(&ctx->kb);
>>> }
>>> @@ -555,11 +515,11 @@ static int __ctr_paes_set_key(struct s390_paes_ctx *ctx)
>>> return ctx->fc ? 0 : -EINVAL;
>>> }
>>>
>>> -static int ctr_paes_set_key(struct crypto_tfm *tfm, const u8 *in_key,
>>> +static int ctr_paes_set_key(struct crypto_skcipher *tfm, const u8 *in_key,
>>> unsigned int key_len)
>>> {
>>> int rc;
>>> - struct s390_paes_ctx *ctx = crypto_tfm_ctx(tfm);
>>> + struct s390_paes_ctx *ctx = crypto_skcipher_ctx(tfm);
>>>
>>> _free_kb_keybuf(&ctx->kb);
>>> rc = _copy_key_to_kb(&ctx->kb, in_key, key_len);
>>> @@ -567,7 +527,7 @@ static int ctr_paes_set_key(struct crypto_tfm *tfm, const u8 *in_key,
>>> return rc;
>>>
>>> if (__ctr_paes_set_key(ctx)) {
>>> - tfm->crt_flags |= CRYPTO_TFM_RES_BAD_KEY_LEN;
>>> + crypto_skcipher_set_flags(tfm, CRYPTO_TFM_RES_BAD_KEY_LEN);
>>> return -EINVAL;
>>> }
>>> return 0;
>>> @@ -588,37 +548,37 @@ static unsigned int __ctrblk_init(u8 *ctrptr, u8 *iv, unsigned int nbytes)
>>> return n;
>>> }
>>>
>>> -static int ctr_paes_crypt(struct blkcipher_desc *desc, unsigned long modifier,
>>> - struct blkcipher_walk *walk)
>>> +static int ctr_paes_crypt(struct skcipher_request *req)
>>> {
>>> - struct s390_paes_ctx *ctx = crypto_blkcipher_ctx(desc->tfm);
>>> + struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
>>> + struct s390_paes_ctx *ctx = crypto_skcipher_ctx(tfm);
>>> u8 buf[AES_BLOCK_SIZE], *ctrptr;
>>> + struct skcipher_walk walk;
>>> unsigned int nbytes, n, k;
>>> int ret, locked;
>>>
>>> locked = spin_trylock(&ctrblk_lock);
>>>
>>> - ret = blkcipher_walk_virt_block(desc, walk, AES_BLOCK_SIZE);
>>> - while ((nbytes = walk->nbytes) >= AES_BLOCK_SIZE) {
>>> + ret = skcipher_walk_virt(&walk, req, false);
>>> + while ((nbytes = walk.nbytes) >= AES_BLOCK_SIZE) {
>>> n = AES_BLOCK_SIZE;
>>> if (nbytes >= 2*AES_BLOCK_SIZE && locked)
>>> - n = __ctrblk_init(ctrblk, walk->iv, nbytes);
>>> - ctrptr = (n > AES_BLOCK_SIZE) ? ctrblk : walk->iv;
>>> - k = cpacf_kmctr(ctx->fc | modifier, ctx->pk.protkey,
>>> - walk->dst.virt.addr, walk->src.virt.addr,
>>> - n, ctrptr);
>>> + n = __ctrblk_init(ctrblk, walk.iv, nbytes);
>>> + ctrptr = (n > AES_BLOCK_SIZE) ? ctrblk : walk.iv;
>>> + k = cpacf_kmctr(ctx->fc, ctx->pk.protkey, walk.dst.virt.addr,
>>> + walk.src.virt.addr, n, ctrptr);
>>> if (k) {
>>> if (ctrptr == ctrblk)
>>> - memcpy(walk->iv, ctrptr + k - AES_BLOCK_SIZE,
>>> + memcpy(walk.iv, ctrptr + k - AES_BLOCK_SIZE,
>>> AES_BLOCK_SIZE);
>>> - crypto_inc(walk->iv, AES_BLOCK_SIZE);
>>> - ret = blkcipher_walk_done(desc, walk, nbytes - n);
>>> + crypto_inc(walk.iv, AES_BLOCK_SIZE);
>>> + ret = skcipher_walk_done(&walk, nbytes - n);
>> Looks like a bug here. It should be
>>
>> ret = skcipher_walk_done(&walk, nbytes - k);
>>
>> similar to the other modes.
>> You can add this in your patch or leave it to me to provide a separate patch.
> I'm not planning to fix this since it's an existing bug, I can't test this code
> myself, and the paes code is different from the regular algorithms so it's hard
> to work with. So I suggest you provide a patch later.
Ok, will do.
>
>>> }
>>> if (k < n) {
>>> if (__ctr_paes_set_key(ctx) != 0) {
>>> if (locked)
>>> spin_unlock(&ctrblk_lock);
>>> - return blkcipher_walk_done(desc, walk, -EIO);
>>> + return skcipher_walk_done(&walk, -EIO);
>>> }
>>> }
>>> }
> Note that __ctr_paes_set_key() is modifying the tfm_ctx which can be shared
> between multiple threads. So this code seems broken in other ways too.
Ok, I'll check this.
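
(Editorial note: purely as an illustration of the concern, a sketch of what mirroring the CBC
param-block pattern from cbc_paes_crypt() above would look like in the CTR path. This only assumes
a request-local copy of the protected key is acceptable; it narrows the window but does not by
itself make the concurrent ctx->pk update in __ctr_paes_set_key() safe - that would still need
synchronization.)

	/* Illustration only, not driver code: work on a local copy of the
	 * protected key, as cbc_paes_crypt() does via its param block,
	 * instead of passing ctx->pk.protkey to cpacf_kmctr() directly. */
	u8 protkey[MAXPROTKEYSIZE];

	memcpy(protkey, ctx->pk.protkey, sizeof(protkey));
	while ((nbytes = walk.nbytes) >= AES_BLOCK_SIZE) {
		...
		k = cpacf_kmctr(ctx->fc, protkey, walk.dst.virt.addr,
				walk.src.virt.addr, n, ctrptr);
		...
		if (k < n) {
			if (__ctr_paes_set_key(ctx) != 0) {
				if (locked)
					spin_unlock(&ctrblk_lock);
				return skcipher_walk_done(&walk, -EIO);
			}
			memcpy(protkey, ctx->pk.protkey, sizeof(protkey));
		}
	}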
>
> How is "paes" tested, given that it isn't covered by the crypto subsystem's
> self-tests? How do you know it isn't completely broken?
>
> - Eric

As you can see, paes is ordinary AES with just a special key - we call it a "protected key".
I have written self-tests to be integrated into the kernel (I will send out patches soon) which
take a known clear-key AES value, derive a protected key from exactly that AES value, and so
can use the regular test vectors. I also have test cases which use the AF_ALG interface and
run commonly available NIST test vectors. There is also another way to create a protected
key, from a secure key (which requires crypto cards within the s390). As I have access to
some s390 machines with crypto cards, I ran all these tests. Have a look at the pkey code
(drivers/s390/crypto/pkey_api.c) for even more details.
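
(Editorial note for readers who have not used the AF_ALG interface Harald mentions: below is a
minimal userspace sketch of driving a kernel skcipher through AF_ALG. It is illustrative only
and uses "ecb(aes)" with an all-zero key; an "ecb(paes)" instance would expect a pkey-specific
protected-/clear-key blob rather than a raw AES key, and error checking is omitted for brevity.)

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/uio.h>
#include <linux/if_alg.h>

#ifndef SOL_ALG
#define SOL_ALG 279
#endif

int main(void)
{
	struct sockaddr_alg sa = {
		.salg_family = AF_ALG,
		.salg_type   = "skcipher",
		.salg_name   = "ecb(aes)",	/* "ecb(paes)" would need a key blob */
	};
	unsigned char key[16] = { 0 };		/* all-zero AES-128 key, test only */
	unsigned char pt[16] = { 0 };		/* one all-zero plaintext block */
	unsigned char ct[16];
	char cbuf[CMSG_SPACE(sizeof(int))] = { 0 };
	struct iovec iov = { .iov_base = pt, .iov_len = sizeof(pt) };
	struct msghdr msg = {
		.msg_control	= cbuf,
		.msg_controllen	= sizeof(cbuf),
		.msg_iov	= &iov,
		.msg_iovlen	= 1,
	};
	struct cmsghdr *cmsg;
	int tfmfd, opfd, i;

	tfmfd = socket(AF_ALG, SOCK_SEQPACKET, 0);
	bind(tfmfd, (struct sockaddr *)&sa, sizeof(sa));
	setsockopt(tfmfd, SOL_ALG, ALG_SET_KEY, key, sizeof(key));
	opfd = accept(tfmfd, NULL, 0);

	/* Select the encrypt operation via ancillary data, send one block. */
	cmsg = CMSG_FIRSTHDR(&msg);
	cmsg->cmsg_level = SOL_ALG;
	cmsg->cmsg_type = ALG_SET_OP;
	cmsg->cmsg_len = CMSG_LEN(sizeof(int));
	*(int *)CMSG_DATA(cmsg) = ALG_OP_ENCRYPT;
	sendmsg(opfd, &msg, 0);

	/* Read back the ciphertext produced by the kernel driver. */
	read(opfd, ct, sizeof(ct));
	for (i = 0; i < 16; i++)
		printf("%02x", ct[i]);
	printf("\n");

	close(opfd);
	close(tfmfd);
	return 0;
}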

regards Harald Freudenberger



2019-10-19 07:56:20

by Herbert Xu

[permalink] [raw]
Subject: Re: [RFT PATCH 0/3] crypto: s390 - convert to skcipher API

On Sat, Oct 12, 2019 at 01:18:06PM -0700, Eric Biggers wrote:
> This series converts the glue code for the S390 CPACF implementations of
> AES, DES, and 3DES modes from the deprecated "blkcipher" API to the
> "skcipher" API. This is needed in order for the blkcipher API to be
> removed.
>
> I've compiled this patchset, and the conversion is very similar to that
> which has been done for many other crypto drivers. But I don't have the
> hardware to test it, nor is S390 CPACF supported by QEMU. So I really
> need someone with the hardware to test it. You can do so by setting:
>
> CONFIG_CRYPTO_HW=y
> CONFIG_ZCRYPT=y
> CONFIG_PKEY=y
> CONFIG_CRYPTO_AES_S390=y
> CONFIG_CRYPTO_PAES_S390=y
> CONFIG_CRYPTO_DES_S390=y
> # CONFIG_CRYPTO_MANAGER_DISABLE_TESTS is not set
> CONFIG_DEBUG_KERNEL=y
> CONFIG_CRYPTO_MANAGER_EXTRA_TESTS=y
> CONFIG_CRYPTO_AES=y
> CONFIG_CRYPTO_DES=y
> CONFIG_CRYPTO_CBC=y
> CONFIG_CRYPTO_CTR=y
> CONFIG_CRYPTO_ECB=y
> CONFIG_CRYPTO_XTS=y
>
> Then boot and check for crypto self-test failures by running
> 'dmesg | grep alg'.
>
> If there are test failures, please also check whether they were already
> failing prior to this patchset.
>
> This won't cover the "paes" ("protected key AES") algorithms, however,
> since those don't have self-tests. If anyone has any way to test those,
> please do so.
>
> Eric Biggers (3):
> crypto: s390/aes - convert to skcipher API
> crypto: s390/paes - convert to skcipher API
> crypto: s390/des - convert to skcipher API
>
> arch/s390/crypto/aes_s390.c | 609 ++++++++++++++---------------------
> arch/s390/crypto/des_s390.c | 419 ++++++++++--------------
> arch/s390/crypto/paes_s390.c | 414 ++++++++++--------------
> 3 files changed, 580 insertions(+), 862 deletions(-)

All applied. Thanks.
--
Email: Herbert Xu <[email protected]>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt